5. Conclusion

This brings us to the conclusion of this thesis, where we return to the question posed in the introduction: How can we ensure interface designers have the means to create interfaces for smart artefacts and services that better reflect the use of personal data, and thereby become more trustworthy and transparent?

5.1 Aiming at a broader view

As we have observed in the design process of the participatory design toolkit, designing to improve trust in the interface of a smart artefact or service means that the design process itself should become more trustworthy. All elements concerning trust in that smart artefact should be observed and assessed. We cannot simply focus on the design of the interface alone, as an interface is too easily manipulated into communicating one thing while doing another. Enabling end users to participate in the design process while being informed and educated about all important elements and actors concerning that smart artefact will, however, provide transparency and thus a more trustworthy design process.

5.2 A shared mental model

For this to succeed, it is paramount to provide everyone involved with the means to create a visual shared mental model. Only if a concrete, visual shared mental model is present can a balanced exchange on the possible implications of that model take place between participants. In addition, based on suggestions made at NEDAP, recordings of the provided scenarios can afterwards be used as a form of visual accountability, which in turn provides transparency about the design process.

Participatory design through a visual shared mental model enables all relevant stakeholders to form roughly the same interpretation of what a certain product or service should and should not do. However, when designing with relevant experts and end users, facilitating such a design process is not easy. Limitations and rules are needed, both in the creation of such a shared mental model and in the way a participatory design process is led. These can take the form of design principles, time limits, or project scope. It turns out that a discussion on privacy issues easily gets out of hand, which makes the design process drag on far too long.

5.3 Designers only?

Uneducated and uninformed users can only put trust in smart artefacts and services if they are provided with the means to gain insight into the Context, Construction, Codification and Curation, through a model that they can understand: a visual, shared mental model. This mental model can be recorded and used afterwards to provide accountability for the design process. If that accountability is provided in a transparent way, other end users can put trust in the product.

The toolkit in its present form is not yet ready for use, and it remains to be seen which approach is best to pursue next. One of the plans is to team up with legal experts to explore the possibility of converting the tool into a Privacy Impact Assessment kit for designers. With the GDPR coming into effect a few months from now, we can only provide a more trustworthy design process for interfaces of smart artefacts and services if we accept that end user representatives need to participate in such a process and that all C's (Keymolen, 2016) are taken into account. This would be useful not only for interface designers, but for business analysts and digital strategists as well. For designing towards data privacy, it is paramount that the discussion on privacy and trust takes place as early in the design process as possible.

5.4 Not a silver bullet

Apart from that, it is very easy to lose sight of one of the four elements Keymolen proposed. Designers especially tend to focus on the element that interests them most, Context, while the other elements remain largely untouched. The same goes for specialists who focus on Construction or Curation. A balanced and transparent analysis of all elements of the smart artefact is required in order to say anything about the trustworthiness of that artefact.

When it comes to legislation: we also cannot expect Codification in the form of national or European law to provide all the solutions. We do, of course, need to comply with the GDPR, but 'passing' a Privacy Impact Assessment offers no guarantee of trustworthiness. The Volkswagen diesel scandal has shown us just that: Volkswagen complied with the tests, but not with the regulations. When it comes to trust, this did Volkswagen no good.

It needs to be stressed that this toolkit and the reasoning behind it should by no means be seen as a definitive solution to possible (data) privacy issues concerning smart artefacts and services. What it does address, however, is the narrow scope interface designers usually have in the design process of a smart artefact or service: they tend to focus on the Context, the interface and its functionality. When designing such an artefact or service, a more holistic approach is necessary in order to address possible problems concerning the contextual integrity of data subjects. It is exactly this holistic approach that the toolkit facilitates. We can therefore conclude that participatory design, combined with a clear, well-constructed shared mental model, can help bring all relevant stakeholders to the table in order to start a trustworthy design process, which in turn will yield trustworthy products with trustworthy interfaces.

Leeuwarden, April 2018