Can’t make me!

Possibly not everyone has this immediate reaction to software telling them to do something, like SIGN UP NOW or TRY AI or USE OUR CHATBOT.

I do. My day is full of ignoring that reaction, and of learning every tip and trick possible to avoid being alerted when I don’t want to be.

I don’t feel heard. I don’t feel respected. I feel pestered.

What I want from software

Like many people, I have a constant relationship with software. I work with it, I use it to communicate, I use it to relax and to do the things that matter. And like many people who work in technology, I have a suspicion of it. I bought the dumbest sewing machine I could, and the dumbest printer. My home automation consists entirely of decorative lighting. I want buttons in my cars and firewalls on my TV. I don’t hate and fear modern advancements just for being new, but I do want to know who is actually benefiting from them.

I want to be able to make meaningful choices. I want to give consent.

Not the kind of consent we all perform by clicking through the legalese, but the kind where I can configure what information goes out about me, and what information I get. I want to be able to decline or revert upgrades I hate. I want to be able to turn features off, or turn them on.

Consent and optionality in software

As we were going through the peer reviews of the Progressive Delivery book (preorder here), we discussed how the biggest benefit to users is that there is room for them to consent. In our conception, software has three separate phases of being finished. Developers handle creation and deployment. Operations and product teams handle release. And users adopt the software. Until it is working for users, software isn’t really done, is it? When Microsoft was training people in ring deployment, they said, “Software is not finished until it is in production and returning metrics.”

In the Progressive Delivery world, software is built with enough optionality to let people consent. We have lots of examples of this, and some counter-examples, sometimes from the same companies. For example, in Microsoft Word, you can turn the spellchecker and the grammar checker on and off independently. The grammar checker and I do not agree. That’s consent. I keep the part that’s useful to me and turn off the part that wants me to write really boring sentences. The same company, however, won’t let me opt out of having my documents scanned for AI purposes, and can’t even answer straightforward questions about what is happening to the things I write. I don’t like it, and it’s a dealbreaker in some organizations and professions. There’s no technical reason I can see that makes opt-out impossible. Microsoft would just have to care about my consent.
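To make the idea concrete, here is a minimal sketch of what that kind of optionality might look like in code: each feature is an independent switch, and the user’s explicit choice always beats the product default. The `FeaturePrefs` class and the feature names are hypothetical illustrations, not Word’s actual implementation.

```python
# A minimal sketch of independent, user-controlled feature toggles.
# FeaturePrefs and the feature names are hypothetical, not a real API.
from dataclasses import dataclass, field


@dataclass
class FeaturePrefs:
    """Each feature is a separate switch the user controls."""
    overrides: dict[str, bool] = field(default_factory=dict)

    def set(self, feature: str, enabled: bool) -> None:
        # Record an explicit user choice for one feature.
        self.overrides[feature] = enabled

    def is_enabled(self, feature: str, default: bool) -> bool:
        # The user's explicit choice always wins over the product default.
        return self.overrides.get(feature, default)


prefs = FeaturePrefs()
prefs.set("grammar_checker", False)  # decline the part I don't want

print(prefs.is_enabled("spell_checker", default=True))    # True
print(prefs.is_enabled("grammar_checker", default=True))  # False
```

The design choice that matters is the independence: declining the grammar checker says nothing about the spellchecker, so saying no to one thing never means losing everything.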

How not to get stuck

If a bunch of people opt out of a feature, especially one that depends on wide adoption to be useful, that’s a signal. It says that whatever benefit the user is getting from the exchange does not seem, to them, like enough to use the feature. Either you need to make the feature better, or explain the benefit better, or accept that it’s not popular. Forcing people to use something does not give you any data on whether it’s useful. It makes them distrust you, and doubt whether you understand them.
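If you actually let people opt out, the rejection rate becomes data you can act on. A hypothetical sketch, with an invented data shape, of turning those decisions into a per-feature signal:

```python
# A hypothetical sketch of treating opt-outs as a signal rather than
# a problem to route around. The event shape is invented for illustration.
from collections import Counter


def opt_out_rates(events: list[tuple[str, bool]]) -> dict[str, float]:
    """events: (feature, opted_out) pairs, one per user decision."""
    totals, opt_outs = Counter(), Counter()
    for feature, opted_out in events:
        totals[feature] += 1
        if opted_out:
            opt_outs[feature] += 1
    return {f: opt_outs[f] / totals[f] for f in totals}


events = [("ai_assist", True), ("ai_assist", True), ("ai_assist", False),
          ("dark_mode", False), ("dark_mode", False)]

for feature, rate in opt_out_rates(events).items():
    if rate > 0.5:
        print(f"{feature}: {rate:.0%} opted out -- go ask them why")
```

None of this works, of course, if the feature is mandatory: a 0% opt-out rate on something people cannot turn off tells you nothing.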

Some of you are out there wondering about sunsetting. How do you avoid having to maintain software for 15 years? Some people never want to move off what they’re using. This is a negotiation between how much they want to stay and how much you want them to move. It could be that they’re willing to pay seven figures to stay on that old version of Java, for reasons having to do with the rest of their stack. It could be that you just stop supporting something at end-of-life and they deal with the consequences. It could be that your new version is a problem for some reason you can’t see.

I’m not saying you have to maintain everything you ever build forever, but you can at least let people choose what they do or don’t want to use. If your new feature is not attractive, maybe go ask the people who turned it off why they don’t want it, instead of assuming they don’t know their own lives.

Conclusion

I growl at the quantified health device that I bought and wear on purpose when it reminds me to stretch and go to bed earlier. It’s possible that I’m an unusually oppositional person. But I did buy it, and I do wear it, because it gives me data I value, and it’s worth the tradeoff. I consent to being nagged about my bedtime.

If you’re making software, does it work to get consent from users, or does it assume it knows best?