• fartsparkles@sh.itjust.works
    3 months ago

    You’re not wrong. Research into models trained on racially balanced datasets has shown better recognition performance with reduced bias. That work used limited, GAN-generated faces, so it still needs to be replicated with real-world data, but it is promising evidence that balancing the training data can reduce bias. A rough sketch of what that balancing step can look like is below.
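    For illustration only (the comment doesn’t describe the study’s actual pipeline): one common way to balance a training set is to resample so each demographic group contributes equally. The column names and helper below are hypothetical.

    ```python
    # Hypothetical sketch of per-group balancing before training; the column
    # names ("group", "image_path") are placeholders, not from the study above.
    import pandas as pd

    def balance_by_group(df: pd.DataFrame, group_col: str = "group",
                         seed: int = 0) -> pd.DataFrame:
        """Downsample every demographic group to the size of the smallest one."""
        n_min = df[group_col].value_counts().min()
        return (
            df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n=n_min, random_state=seed))
              .reset_index(drop=True)
        )

    # Usage (paths are made up):
    # df = pd.read_csv("faces_metadata.csv")   # columns: image_path, group, ...
    # train_df = balance_by_group(df)
    ```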

    • NuXCOM_90Percent@lemmy.zip
      3 months ago

      Yeah, but this is (basically) Reddit, and clearly it isn’t racism; it’s just a problem of multi-megapixel cameras not being sufficient to properly handle the needs of phrenology.

      There is definitely some truth to needing to tweak how feature points (?) are computed and the like. But yeah, training data goes a long way, and that’s why there was a really big push to get better training datasets out there… until we all realized those would predominantly be used by corporations, and that people don’t really want to be the next Lenna because they let some kid take a picture of them for extra credit in an undergrad course.
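
      For what it’s worth, the matching step people usually mean by “feature points” comes down to comparing embeddings against a tuned threshold. A toy, hypothetical sketch (not any specific vendor’s pipeline), assuming the embeddings come from some feature extractor that isn’t shown:

      ```python
      # Toy illustration of the comparison step in face recognition:
      # embeddings from a feature extractor (not shown) are compared with
      # cosine similarity against a tuned threshold. The 0.6 threshold is
      # arbitrary; tuning it on skewed data is one place bias creeps in.
      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                      threshold: float = 0.6) -> bool:
          return cosine_similarity(emb_a, emb_b) >= threshold
      ```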