When presented with a pair of participants, one gay and one straight, both chosen at random, the model correctly distinguished between them 81% of the time for men and 74% of the time for women. The accuracy rose to 91% for men and 83% for women when the software reviewed five images per person.
In both cases, it far outperformed human judges, who made an accurate guess only 61% of the time for men and 54% for women. Beyond the ethical issue of mining data from public dating websites, the study immediately raises questions of representation and labelling.
First of all, the study did not include any non-white individuals, and it assumed there are only two sexual orientations, gay and straight, leaving bisexual individuals unaddressed.
Researchers then tend to draw conclusions and train systems only on those faces, and the study "often doesn't transfer at all to people whose appearances may be different," Cook says.
"By only including photos of white people, the research is not only not universally applicable, but also completely overlooks who will face the gravest threat from this application of facial recognition, as LGBT people living under repressive regimes are most likely to be people of colour," Polatin-Reuben adds.
It is easy to see how such a sample is hardly representative of the LGBTQ community.