#jobtech

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2026-02-04

Wage Expectations and Job Search d.repec.org/n?u=RePEc:ajk:ajkd
"While average misperceptions are relatively small, substantial shares of job seekers display pronounced optimism or pessimism.
… Treated job seekers who were initially strongly optimistic increase their search effort and find jobs more quickly. Conversely, initial pessimists narrow the geographic scope of their search in response to the treatment, which accelerates re-employment—consistent with mitigated spatial search frictions.
… accounting for job seekers’ subjective beliefs is essential when studying search behavior
… suggest that job seekers seem to jointly determine multiple dimensions of their search strategy—including their wage demands, search intensity, and geographic scope. Exogenous changes in one domain can spill over into others
… Both initially optimistic and initially pessimistic job seekers find employment more quickly when holding more accurate beliefs."
#LaborMarkets #jobtech #wageTransparency

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2026-01-16

The Trust Equation: It’s Not Just Who You Hire, It’s How You Hire behavioralscientist.org/the-tr
"Talent represents the most valuable asset of any firm, and candidates evaluate employers as rigorously as vice versa. #AI threatens to further depersonalize human interactions. To thrive in an era that threatens to erode human interactions, organizations must create consistently valuable experiences.

The competitive advantage isn’t in fighting harder in the “war for talent” but in building systems that cultivate #trust, performance, and, with it, an employer brand at scale. Every organization claims to put people first. The ones that succeed are those whose processes prove it."
#LaborMarkets #jobtech #hiring

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-10-21

#Signaling in the Age of AI: Evidence from Cover Letters d.repec.org/n?u=RePEc:arx:pape
"While #AI tools allow freelancers to produce more polished and tailored applications with less effort, our findings suggest that they fundamentally reshape how employers interpret cover letters. The widespread adoption of AI-assisted writing diminishes the informational value of cover letters, weakening their role as a hiring signal.

Workers with weaker pre-AI writing skills saw larger improvements in cover letters, indicating that AI substitutes for workers’ own skills. Although only a minority of applications used the tool, the overall correlation between cover letter tailoring and callbacks fell by 51%, implying that cover letters became less informative signals of worker ability in the age of AI."
#LaborMarkets #jobtech

Newsramp @newsramp
2025-10-16

JobWarmup launches AI-powered interview platform revolutionizing job search preparation and recruitment screening. Empowering candidates and employers with intelligent, data-driven solutions.

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-10-07

Mind the Gap: Gender-based Differences in Occupational Embeddings
aclanthology.org/2025.gebnlp-1
"Across five state-of-the-art multilingual models and seven reference-set configurations, up to 82% of gendered pairs received divergent Top-5 suggestions. These differences involved distinct occupational codes that sometimes crossed major #KldB group

…gendered job titles—such as Autor vs. Autorin—often lead to different occupation codes, despite having identical meanings. Our findings underscore the importance of grounding #NLP innovations in language-specific sociolinguistic knowledge. Without rigorous attention to linguistic structure and social context, these tools risk perpetuating systemic biases—particularly in settings where semantic equivalence is masked by morphological variation. Addressing such challenges is crucial not only for the technical refinement of NLP systems, but for ensuring that their real-world applications advance rather than hinder equity"
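A minimal sketch of the comparison described above: embed a masculine and a feminine variant of the same job title, retrieve each variant's Top-5 nearest occupation codes by cosine similarity, and flag pairs whose suggestion sets diverge. The `embed` function, the code vectors, and the code identifiers are placeholders for whatever multilingual encoder and occupation taxonomy one is auditing.

```python
import numpy as np

def top5(title_vec, code_vecs, code_ids):
    """Top-5 occupation codes by cosine similarity to a job-title embedding."""
    sims = code_vecs @ title_vec / (
        np.linalg.norm(code_vecs, axis=1) * np.linalg.norm(title_vec))
    return [code_ids[i] for i in np.argsort(sims)[::-1][:5]]

def divergent_pair(masc_title, fem_title, embed, code_vecs, code_ids):
    """True if the two grammatical-gender variants receive different Top-5 codes."""
    return top5(embed(masc_title), code_vecs, code_ids) != \
           top5(embed(fem_title), code_vecs, code_ids)

# Example (hypothetical): divergent_pair("Autor", "Autorin", embed, code_vecs, kldb_codes)
```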
#jobtech #gender #discrimination #LaborEconomics #llm

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-10-07

Measuring Gender Bias in Job Title Matching for Grammatical Gender Languages
arxiv.org/pdf/2509.13803
"… propose a methodology to measure gender bias in a high-impact #NLP application in the human resources domain: job title matching. Using an existing test set in English for this task, we have generated gender-annotated analogous corpora in four languages with grammatical gender, and addressed the evaluation of #genderBias as ranking comparison controlling for gender. Additionally, we establish baselines and confirm that this type of bias already exists in out-of-the-box pre-trained models, which are often used as the core for developing job title matching applications.

Finding a trade-off between model performance and #gender #bias is an important issue to address when developing and selecting job matching models for deployment. On the one hand, choosing a model with apparent good performance but that in turn shows a considerable gender gap may not only be ethically questionable, but it may also result in reputation and even legal consequences on the company responsible for it."
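A rough sketch of the kind of ranking comparison the abstract describes, under stated assumptions: `rank_of_match` and the list of masculine/feminine query pairs are illustrative placeholders, not the paper's benchmark. For each pair, record the rank at which the correct target title is retrieved for each gendered form, then summarize the gap.

```python
import statistics

def rank_of_match(target_title, ranked_results):
    """1-based rank of the correct target title in a model's ranked results."""
    return ranked_results.index(target_title) + 1

def mean_rank_gap(pair_ranks):
    """pair_ranks: list of (rank for masculine query, rank for feminine query).
    A positive gap means feminine forms are ranked worse on average."""
    return statistics.mean(fem - masc for masc, fem in pair_ranks)

# Example with made-up ranks: mean_rank_gap([(1, 3), (2, 2), (1, 5)]) -> 2.0
```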
#llm #jobtech #discrimination #LaborEconomics

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-09-24

People are using ChatGPT to write their applications; HR is using AI to read them; no one is getting hired.
theatlantic.com/ideas/archive/
"Online #hiring platforms have made it easier to find an opening but harder to secure one: Applicants send out thousands of AI-crafted résumés, and businesses use #AI to sift through them. What Bumble and Hinge did to the dating market, contemporary human-resources practices have done to the job market. People are swiping like crazy and getting nothing back.

…recommends old-fashioned networking: asking recruiters out for coffee, going to in-person job events, and surveying friends and former employers for leads."
#jobTech #LaborMarkets

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-09-24

Companies are rethinking online job applications, seeking quality over quantity
archive.ph/Vn52u#selection-559
"Companies fed up with the low-quality, sometimes fraudulent submissions that flood applicant-tracking systems are reaching back in time for hard-to-hack recruiting methods. Classified ads are just one tack.
Others include: leaning harder on references; making application forms so cumbersome that only serious candidates will complete them; and posting openings on niche job boards instead of the most popular ones.

… All these tools for applicants to get seen are backfiring, forcing me to go to longer and longer lengths to filter out the noise and #AI fraud,"

#jobTech #LaborMarkets #classifieds

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-07-25

Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models arxiv.org/pdf/2506.10491
"… the estimation of socio-economic parameters shows substantially more bias than subject-based benchmarking. Furthermore, such a setup is closer to a real conversation with an AI assistant. In the era of memory-based AI assistants, the risk of persona-based #LLM bias becomes fundamental. Therefore, we highlight the need for proper debiasing method development and suggest pay gap as one of reliable measures of bias in LLMs
… various forms of #biases when salaries for women are substantially lower than for men, as well as drops in salary values for people of color and of Hispanic origin. In the migrant type category, expatriate salaries tend to be larger, while salaries for refugees are mostly low"

Surprise! These LLMs simply reproduce, as salary recommendations, the empirical wage patterns in their training data, including any wage gaps that may be the result of discrimination. These cannot be proper recommendations, of course; they are just stochastic auto-complete. The biases are real. But you will need tailor-made salary models to generate proper, unbiased salary benchmarks. An #llm is not enough.
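A hedged sketch of the persona pay-gap probe: ask the same salary question while varying only the persona, and compare the medians. `ask_llm` is a hypothetical wrapper around whichever model is being audited; the prompt wording, personas, and number parsing are illustrative, not the paper's protocol.

```python
import re
import statistics

def salary_estimate(ask_llm, persona, role="senior accountant"):
    """Ask the model for a salary recommendation under a given persona."""
    prompt = (f"I am {persona} working as a {role} in Berlin. "
              "What annual gross salary should I ask for? Reply with a number in EUR.")
    reply = ask_llm(prompt)
    numbers = re.findall(r"\d[\d,.]*", reply)
    return float(numbers[0].replace(",", "")) if numbers else None

def pay_gap(ask_llm, persona_a, persona_b, n=20):
    """Median difference in recommended salary between two personas."""
    a = [s for s in (salary_estimate(ask_llm, persona_a) for _ in range(n)) if s]
    b = [s for s in (salary_estimate(ask_llm, persona_b) for _ in range(n)) if s]
    return statistics.median(a) - statistics.median(b)

# Example (hypothetical): pay_gap(ask_llm, "a 45-year-old man", "a 45-year-old woman")
```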
#jobtech #LaborMarkets

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-07-23

The résumé is dying, and AI is holding the smoking gun arstechnica.com/ai/2025/06/the
"Some candidates are now taking automation even further, paying for #AI agents that autonomously find jobs and submit applications on their behalf.
… Recruiters report that many of the résumés look suspiciously similar, making it more difficult to identify genuinely qualified or interested candidates.
… Beyond volume, fraud poses an increasing threat
… The frustration has reached a point where AI companies themselves are backing away from their own technology during the #hiring process
… Even when AI screening tools work as intended, they exhibit similar #biases to human recruiters, preferring white male names on résumés—raising legal concerns about #discrimination"
#jobtech #LaborMarkets

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-06-21

Algorithmic Hiring and Diversity: Reducing Human-Algorithm Similarity for Better Outcomes arxiv.org/pdf/2505.14388
"… achieving gender parity at the shortlist stage does not inherently guarantee #gender parity in final hires, even if hiring managers are gender-unbiased.
… the effectiveness of the equal selection constraint is highly job-specific, driven by the correlation between screener and hiring manager evaluations. Notably, technical roles requiring measurable “hard skills” (e.g., software engineering) tend to exhibit higher correlations, diminishing the effectiveness of equal selection precisely in fields where women are most underrepresented.
… equal predictive accuracy of screening algorithms across genders is insufficient in multistage hiring processes. It is equally important for screening algorithms to maintain gender neutrality concerning their alignment with #hiring managers’ criteria—specifically, algorithms should exhibit no gender differences in their correlation with managerial assessments.
… higher correlations between screeners’ and hiring managers’ assessments not only reduce the effectiveness of equal selection constraints but also negatively affect the expected quality of hires. This suggests a critical design insight: screening algorithms should be constructed to complement, rather than replicate, managerial evaluations."
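A small numpy simulation of the first point, under illustrative assumptions (an 80/20 gender mix, identical ability distributions, and a gender-blind manager whose noisy evaluation correlates with the screener's through true ability): equal selection at the shortlist stage still yields fewer women among final hires, because the women's shortlist is drawn from a smaller pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative applicant pool: 800 men, 200 women, identical ability distributions.
ability = {"m": rng.normal(size=800), "w": rng.normal(size=200)}

# Screener and manager observe ability with independent noise, so their
# evaluations are positively correlated but not identical.
screener = {g: a + rng.normal(scale=0.5, size=a.size) for g, a in ability.items()}
manager = {g: a + rng.normal(scale=0.5, size=a.size) for g, a in ability.items()}

# Equal-selection shortlist: top 25 of each gender by screener score (gender parity).
shortlist = {g: np.argsort(screener[g])[-25:] for g in ("m", "w")}

# Gender-blind manager hires the overall top 10 of the shortlist by manager score.
pooled = [(manager[g][i], g) for g in ("m", "w") for i in shortlist[g]]
hires = sorted(pooled, reverse=True)[:10]
print("women among hires:", sum(g == "w" for _, g in hires), "of", len(hires))
```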
#LaborMarkets #jobtech #bias

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-06-02

Behavioral Measures Improve AI Hiring: A Field Experiment d.repec.org/n?u=RePEc:rco:dpap
"… suggest that survey-based behavioral measures markedly improve the predictions of a random-forest algorithm trained to predict productivity within sample relative to demographic information alone."

It's a pity that the authors do not give the more traditional probit model as much attention as their fancy "#AI", a random forest model. They spend a lot of effort finding a good random forest model with cross-validation, but pit it against a simple probit model that, by their own description, does not even include interaction effects. Now, what is the computational cost of the cross-validated random forest compared to a well-crafted probit model? Of course you can do automated feature and feature-interaction selection with probit models, too. There is no reason to dismiss the probit model on the basis of such an unfair comparison.
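For what it's worth, a sketch of the fairer horse race suggested above, on simulated data: a cross-validated random forest against a linear-index model with interaction terms, both given the same cross-validation budget. A logit pipeline stands in for the probit here, and the features and outcome are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))  # simulated behavioral + demographic features
y = (X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(size=500) > 0).astype(int)

# Tuned random forest (nested cross-validation).
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"max_depth": [3, 5, None], "n_estimators": [100, 300]}, cv=5)

# Linear-index model with pairwise interactions (logit as a probit stand-in).
logit = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    LogisticRegression(max_iter=2000))

for name, model in [("random forest (tuned)", rf), ("logit + interactions", logit)]:
    score = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC {score:.3f}")
```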
#jobtech #LaborEconomics #ML

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-05-31

Who Gets the Callback? Generative AI and Gender Bias d.repec.org/n?u=RePEc:arx:pape
"… most #llm models reproduce stereotypical gender associations and systematically recommend equally qualified women for lower-wage roles, indicating occupational segregation.
… These biases stem from entrenched gender patterns in the training data as well as from an agreeableness bias induced during the reinforcement learning from human feedback stage
…AI-driven hiring may perpetuate biases in the labor market and have implications for #fairness and diversity within firms"
#AI #jobtech #ExperimentalEcon #LaborEconomics #discrimination #bias

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2025-04-29

A Bridge Too Far: Signalling Effects of Artificial Intelligence Evaluation of Job Interviews d.repec.org/n?u=RePEc:hal:jour
"…investigate whether AI evaluation is interpreted as a positive (high innovativeness) or negative (low people orientation) signal by the job applicant
#AI evaluation is interpreted more strongly as a signal of how the organisation treats people rather than of how innovative it is.
… removing humans from the selection process appears to be a ‘bridge too far', when it comes to technological advances in the selection process."
#LaborMarkets #jobtech

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2024-11-22

Companies have touted new #AI technology that allows users to apply to thousands of jobs per day, flooding firms with résumés nbcnews.com/tech/innovation/ai
"…who screens and hires job applicants for a living, emphasizes the need to use AI cautiously to helpand not harm—your chances at getting noticed by a company …recommends that applicants always review generated responses, noting that “AI misses things, too.”
Same is true for the firm!
#jobtech #LaborMarkets

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2024-11-08

…to find the best workers, firms must balance exploitation (selecting from groups with proven track records) with exploration (selecting from under-represented groups to learn about quality)
…modern #hiring algorithms are designed solely for exploitation
…algorithm that values exploration… improves the quality of candidates selected for an interview, while also increasing demographic diversity
arxiv.org/pdf/2411.03616
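The exploration idea lends itself to a UCB-style score; here is a minimal, hypothetical sketch (the bonus formula and numbers are illustrative, not the paper's algorithm): candidates from groups with little interview history get a larger bonus, so the firm keeps learning about their quality.

```python
import math

def ucb_score(predicted_quality, n_interviews_group, n_interviews_total, c=1.0):
    """Predicted candidate quality plus an exploration bonus for under-sampled groups."""
    bonus = c * math.sqrt(math.log(n_interviews_total + 1) /
                          (n_interviews_group + 1))
    return predicted_quality + bonus

# Two equally rated candidates; the firm has interviewed far fewer people from
# the second candidate's group, so the exploring algorithm ranks them higher.
print(ucb_score(0.70, n_interviews_group=900, n_interviews_total=1000))
print(ucb_score(0.70, n_interviews_group=100, n_interviews_total=1000))
```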
#LaborMarkets #jobtech

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2024-05-21

Recovering Overlooked Information in Categorical Variables with LLMs: An Application to Labor Market Mismatch
web.sas.upenn.edu/hfang/files/
"…#LLM's match quality measure is positively correlated with traditional measures
…when gender information is disclosed to the #LLM, the model deems females better suited for traditionally female-dominated roles"
#LaborMarkets
#discrimination
#matching
#jobTech
#AIEthics

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2023-08-30

Mitigating Age Biases in Resume Screening AI Models
"…trained an #AI model & applied #bias correction techniques …to correct for biases based on race, gender, & age. We analyzed the effectiveness of these tools in mitigating different types of bias in job #hiring algorithms, explored why age may be more challenging to eliminate than other forms of bias"
journals.flvc.org/FLAIRS/artic
Interesting paper with a generous reference to economicscience.net/publicatio
#Discrimination #ageism
#JobTech #LaborMarkets

Dennis Alexis Valin Dittrich @davdittrich@fediscience.org
2023-06-16

Does Artificial Intelligence Help or Hurt Gender Diversity? d.repec.org/n?u=RePEc:mos:mosw
the use of #AI in recruitment changes the gender distribution of potential hires, in some cases more than doubling the fraction of top applicants that are women. This change is generated by better outcomes for women in both supply and demand.
#jobtech #ExperimentalEcon #Discrimination

neoaisac 🏳️‍🌈🇪🇺🇨🇦🇪🇸🇬🇧 @neoaisac@tech.lgbt
2023-02-06

My days at work usually are spent in way too many #meetings nowadays.

When I took this role, I knew what I was getting into. Managing two domains and multiple #teams meant more meetings.

I am a #developer at heart, and I love #coding. But I also love to see things done. Small things need developers, but big things need teams. So I also love to #empower the #people I work with.

So when, on a meeting-packed day, a colleague from another department comes to ask about a #pullrequest on an old internal library I developed that now needs some updates, I get all the feels. And I also see two things:

One, when I was hands-on I created some meaningful things that are actually used and useful to some of my peers and that even today mean something.

And two, it's now others who keep making that work relevant to today. And that is good.

Seeing things done does not necessarily mean doing them. It also means empowering others to keep doing them.

I love my job. #jobtech
