Facebook’s Eyebrow-Raising Image Descriptions
Today, Facebook is experiencing a widespread server issue that is preventing uncached images from fully loading across the platform.
Look — server outages happen, even for global heavyweights like Facebook. That’s not particularly interesting or surprising and only presents a minor inconvenience.
What’s more interesting, to me, is what the outage revealed: Facebook has been labeling our images with surprisingly accurate AI-driven descriptions.
It’s reasonable to assume that Facebook does everything for two reasons: a good reason and the real reason. The tagging of these images seems to be no different:
- The Good Reason: Accurate, descriptive alt text is an essential tool for making the web accessible to blind and low-vision users. Indeed, Facebook's image descriptions are consistent with WCAG 2.0 Success Criterion 1.1.1: "All non-text content that is presented to the user has a text alternative that serves the equivalent purpose."
- The Real Reason: Generating accurate image descriptions automatically, at Facebook's scale, demands significant machine learning and artificial intelligence. Seemingly in pursuit of this innocuous accessibility goal, Facebook has leveraged our images to train its machine learning models.
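This is also why the outage surfaced the labels at all: when an image fails to load, the browser falls back to displaying the `alt` attribute of the `<img>` tag, which is where these machine-generated descriptions live. As a minimal sketch (the markup and wording below are illustrative, not Facebook's actual HTML), here is how those `alt` attributes could be pulled from a page using only Python's standard library:

```python
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collect the alt attribute of every <img> tag in a page."""

    def __init__(self):
        super().__init__()
        self.alt_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs arrives as a list of (name, value) pairs
            self.alt_texts.append(dict(attrs).get("alt", ""))

# Hypothetical snippet shaped like the descriptions seen during the outage:
snippet = '<img src="photo.jpg" alt="Image may contain: 2 people, smiling, outdoor">'

parser = AltTextExtractor()
parser.feed(snippet)
print(parser.alt_texts[0])  # → Image may contain: 2 people, smiling, outdoor
```

With images loading normally, this text is invisible to sighted users; only when the image request fails does the browser render it in the image's place.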
I wonder how the insights from this undertaking have, for example, affected the AI that underlies Facebook Portal’s “smart camera.”
I’d be curious to learn how long Facebook has been tagging images this way, and how they trained the algorithms that generate the descriptions. I only noticed them today, but I suspect they’ve been in place for some time.