Smart artefacts and services, such as smartphones, wearables, social media platforms and messaging apps, are primarily designed to be user-friendly. The design of their user interfaces ensures that human interaction with these devices and apps largely takes place on a subconscious level: sound interaction design lowers the user's cognitive effort and aims at intuitive interaction. As a consequence, it becomes increasingly difficult for an average user to stay informed about the personal data they share through these smart artefacts and services with commercial and governmental organizations.
This poses a dilemma: by creating user-friendly interfaces, interface designers put their audience at risk in terms of data privacy. As recent developments have shown, such as the scandal involving the abuse of Facebook data by Cambridge Analytica, the level of trust users place in smart artefacts and services will increasingly be influenced by the way these artefacts and services treat their users' data.
In other words, user-friendly design no longer necessarily helps to gain trust in a smart artefact or service. On the contrary: intentionally or unintentionally, user-friendly interfaces sometimes hide the true intentions of the artefact's maker. This forces a moral choice upon interface designers: do we need to become more transparent about the inner workings of smart artefacts and services? And if so, when a designer seeks to create a more transparent, trustworthy interface for a smart artefact or service, how can we help that designer achieve this?
Through research using cultural probes on the degree of trust humans place in smart artefacts and services, and through validation of the subsequent findings by an expert in the field of online trust, it has been established that the relationship humans have with smart artefacts and services can grow to be very personal and intimate. Humans experience these devices and the data they hold as part of their personal, familiar world, and barely take into account that these devices and services are in fact still under the control of the companies that made them. The outcome of the probes showed that awareness of personal data privacy can be improved through the use of concrete, analogue visualization techniques. However, merely heightening awareness of the possible risks of using a smart artefact or service may in turn lead to distrust. Therefore, the design process of a smart artefact or service should itself take place in a similarly transparent way.
By introducing a physical, analogue toolkit to visualize relevant scenarios concerning personal data, a way to open up the design process of such an artefact or service is proposed. By visualizing shared mental models of scenarios concerning the user's data, this toolkit offers a way to show, discuss, and debate the challenges to the end user's privacy with relevant experts. In this process, end-user representatives are given the task of assessing the trustworthiness of these scenarios and of providing proposals for interface designs that would give them more insight into, and control over, their personal data. This makes the design process considerably more transparent, trustworthy and accountable.