Google Developers Blog: Text Embedding Models Contain Bias. Here's Why That Matters.

amarashar's bookmarks 2018-04-23


Given a trained text embedding model, we can directly measure the associations the model has between words or phrases. Many of these associations are expected and are helpful for natural language tasks. However, some associations may be problematic or hurtful. For example, the ground-breaking paper by Bolukbasi et al. [4] found that the vector relationship between "man" and "woman" was similar to the relationship between "physician" and "registered nurse" or "shopkeeper" and "housewife" in the popular publicly available word2vec embedding trained on Google News text.
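The kind of association described above can be measured directly from the vectors themselves: the difference between two word vectors encodes a direction, and two word pairs are "analogous" when their difference vectors point the same way. Here is a minimal sketch with made-up 3-dimensional toy vectors (real word2vec embeddings have hundreds of dimensions, and the actual values would come from a trained model):

```python
import numpy as np

# Hypothetical toy embeddings for illustration only; real word2vec
# vectors are learned from data and have 100-300 dimensions.
emb = {
    "man":       np.array([0.9, 0.1, 0.3]),
    "woman":     np.array([0.1, 0.9, 0.3]),
    "physician": np.array([0.8, 0.2, 0.7]),
    "nurse":     np.array([0.2, 0.8, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the direction man -> woman with physician -> nurse.
gender_axis = emb["man"] - emb["woman"]
occupation_axis = emb["physician"] - emb["nurse"]
similarity = cosine(gender_axis, occupation_axis)
print(round(similarity, 3))
```

A similarity near 1.0 (as these toy vectors are constructed to produce) is exactly the kind of unintended gender-occupation association the paper reports; in a trained model, this check would be run over many word pairs to audit the embedding.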


From feeds:

Ethics/Gov of AI » amarashar's bookmarks



Date tagged:

04/23/2018, 12:59

Date published:

04/23/2018, 08:59