Affordances and AI

Affordances are the potential actions that a product makes possible.

A chair affords sitting. Clearly. It also affords standing on, waking from Inception-dreams, and lion-taming. These potential actions are easily discoverable only when they are clearly perceivable to the user.

This creates an interesting puzzle for designing AI products. The 'magic' of AI lies in outsourcing decision-making and automating it away.

How then is a user to know what the product can do?


Typically we rely on two sources for discovering affordances:

  • signifiers, such as labels, sounds, and shapes;

  • convention, an understanding of how a product ‘should’ be used.

Signalling the affordance of AI features is especially important today. Because AI products are relatively new, signifiers increase the likelihood that an AI capability is discovered and used. Just as importantly, they ensure that the capability is not triggered accidentally.

As we emerge into a world where AI capabilities are increasingly infused into previously 'dumb' products, we face a further challenge: how to balance the convenience of 'invisibility' against making explicit what the system can do.

The tendency is to squeeze as much convenience as possible from the AI capability. It is very tempting to automate away those human interactions. However, our existing AI systems make errors that render this approach risky. Imagine a chair that crumbled 5% of the time you sat on it. I have one. It’s not enjoyable.

For systems with even moderate error rates (i.e. most systems for the foreseeable future), this balance should lean heavily toward explicit signalling.


Conversely, false affordances present another difficulty. Introducing AI into a product sets expectations of what is possible with it. Siri invites voice interaction, yet try speaking to her like a human and you’ll quickly find yourself frustrated.

We have entered an age where AI products and features are gaining mainstream adoption. Conventions and best-practice in the use of AI are beginning to emerge. The decisions that designers collectively take now will shape how Human-AI interaction proceeds for the coming years.

Err on the side of explicit instruction until we’ve learned what can reasonably be expected of AI, or until the AI can confidently adapt to our mistakes.
