Abeba

Senior Fellow in Trustworthy AI @Mozilla | Cognitive science PhD @UCDCompSci @ucddublin | Ethiopian currently in Ireland. She/her

Abeba boosted:

Intervention from @e_farries and @abebab on the Gardaí's frothing appetite for what is really just bad technology and over-surveillance.

(Paywalled)

independent.ie/opinion/comment

Abeba boosted:
Ryan Steed (on the job market) @ryansteed
2024-01-31

We also find that most published audit work focuses on products/models/algorithms — so we echo calls from others to expand audit work to include the entire ecosystem of affected stakeholders.

Diagram of the different scopes of audit studies: circles depicting Dataset and Algorithm audits are encompassed by a larger circle depicting Model audits, encompassed by a larger circle representing Product audits, encompassed by Ecosystem audits. Outside this diagram are circles depicting "meta-commentary" and "non-audits" — the two other types of papers we surveyed.

Bar chart showing the number of audit studies of each type (product/model/algo audit, ecosystem audit, data audit, or meta-commentary), by year. The total number of studies grows each year from 2018 to 2022. Product/model/algo audits are always most common, usually followed by meta-commentary and data audits. Ecosystem audits are the rarest.
Abeba boosted:
Ryan Steed (on the job market) @ryansteed
2024-01-31

I learned a lot from this study — some of the most impactful audit work is done outside academia (e.g. by journalists or regulators), often using very different methods.

Text from Table II of our paper, accessible here https://arxiv.org/abs/2401.14462
Abeba boosted:
Ryan Steed (on the job market) @ryansteed
2024-01-31

Excited to share a new paper with @abebab, Victor Ojewale, Briana Vecchione & Deb Raji

We surveyed 300+ AI audit studies from academia, civil society, govt etc. to understand what work is being done + how it relates to impact & accountability.

arxiv.org/abs/2401.14462

Screenshot of arXiv abstract page.

Title: AI auditing: The Broken Bus on the Road to AI Accountability

Authors: Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji

Abstract: One of the most concrete measures to take towards meaningful AI accountability is to consequentially assess and report the systems' performance and impact. However, the practical nature of the "AI audit" ecosystem is muddled and imprecise, making it difficult to work through various concepts and map out the stakeholders involved in the practice. First, we taxonomize current AI audit practices as completed by regulators, law firms, civil society, journalism, academia, consulting agencies. Next, we assess the impact of audits done by stakeholders within each domain. We find that only a subset of AI audit studies translate to desired accountability outcomes. We thus assess and isolate practices necessary for effective AI audit results, articulating the observed connections between AI audit design, methodology and institutional context on its effectiveness as a meaningful mechanism for accountability.
Abeba boosted:
Emtiyaz Khan @emtiyaz
2024-01-30

“Japan: Foreign-born residents sue government for alleged racial profiling”

bbc.com/news/world-asia-681375

Abeba boosted:
Pax Ahimsa Gethen @funcrunch@me.dm
2024-01-29

@mekkaokereke

The only kneeling I want to see before a sportsball game is a Black Lives Matter protest.

Abeba boosted:
2024-01-29

Drafted a banging 1000-word piece that links the Post Office scandal to the future harms of AI. It's with the University's press office but if you know any outlets or contacts that might be interested, please let me know.

Abeba boosted:
Timnit Gebru (she/her) @timnitGebru@dair-community.social
2024-01-11

"A group of hackers gained access to the backend of an AI chatbot that fast food franchises use to help automate hiring. The researchers were seemingly able to accept or reject specific job applicants, and had access to a wealth of sensitive information on applicants, the fast food franchises, and the AI company itself, called Chattr."
404media.co/hackers-break-into

Abeba boosted:
2024-01-07

So mastodon doesn't do quoted posts. I think I understand the goal of avoiding that feature. People have decided that "dunking" is the overriding usage of quoting and that we should avoid the inherent toxicity.

But quoted posts are also the foundation of true discourse in a medium like this. It is the primary way to start *new* conversation that is an offshoot of an existing conversation rather than a direct response.

I'm finding it hard to do that on here. And the result is I just don't post.

Abeba boosted:
mekka okereke :verified: @mekkaokereke@hachyderm.io
2024-01-06

👴🏻How can a person be expected to defend themselves against accusations of plagiarism with only 1 day notice?

👩🏿How many days advanced notice did Claudine Gay get before her story dropped?

Y'all aren't serious. You want different rules for you vs attacking Black women.

But no.

Abeba boosted:
Niall Winters ⁂ @nwin@mastodon.ie
2024-01-05

I used @abebab et al’s papers in my teaching in an *Education* dept back in Spring 2022. For techies not to have cited it in late 2023 seems… myopic, to put it kindly! I hope they will do the right thing & update their paper ASAP.

Abeba boosted:

Another paper by Abeba Birhane and Vinay Uday Prabhu had already resulted in the Tiny Images dataset being taken down.

ieeexplore.ieee.org/abstract/d

Abeba boosted:

404 Media reports that "Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material" 🧵

However, in 2021, a preprint by @abebab, Vinay Uday Prabhu & Emmanuel Kahembwe found a number of issues in the dataset, including "troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content."

The preprint can be found here: arxiv.org/abs/2110.01963

404media.co/laion-datasets-rem

Abeba boosted:
2024-01-05

@hipsterelectron i wouldn't be surprised tho if this were true — but another disappointing thing about the @404mediaco coverage is that it at no point mentions the serious and widely known auditing work of LAION by @abebab and colleagues (e.g. recent Neurips paper openreview.net/forum?id=6URyQ9 ); a clear form of bropropriation, only white men and Stanford folks get to shine in this story

Abeba boosted:
2023-11-25

'about if decolonial AI is at all possible'

Chapter 11: It’s incomprehensible: on machine learning and decoloniality
Abeba Birhane and Zeerak Talat

@abebab

elgaronline.com/edcollchap/boo

2023-11-20

@axbom you're too kind, thank you!

Abeba boosted:
2023-11-20
Excellent keynote by @abebab at #Internetdagarna today here in Sweden. Looking at the chat, many of Sweden's tech geeks are surprised by the messages she is conveying.

Truly appreciate she was provided this space and made time to participate.

I recommend reading this profile of her in Wired:

https://www.wired.com/story/abeba-birhane-ai-datasets/

#AIEthics
Abeba boosted:
2023-11-18

Our work on AI for medium-range weather forecasts is published in @ScienceMagazine today. 🎉 A lot more for us to do as a community in this area; we hope these advances will support the vital weather-dependent decision-making that happens each day. science.org/doi/10.1126/scienc

Abeba boosted:
Olivia Guest · Ολίβια Γκεστ @olivia@scholar.social
2023-11-02

@samhforbes big thanks to @abebab @RealSexyCyborg @Iris @mariekewoe @KirstieJane and others off this platform... for their feedback, discussions, support 🥰

Abeba boosted:
@pettter@social.accum.se @pettter@mastodon.acc.umu.se
2023-11-02

@algorithmwatch There's a bunch of names in there I trust, at least (e.g. @abebab, @vdignum, Nighat Dad). Agree that more representation from civil society would be helpful, however. Are there any particular people you're thinking of?
