Yes, I would say that is in the problem domain. Establishing trust & ensuring safety are definite challenges for community-based applications - especially communities where people don't already know each other socially.
Exploring the concept of trust/safety would be interesting - though you would want to focus on a particular sector/area (i.e. communities connecting strangers) to help constrain it.
From this, the following requirements of the initial idea were developed:
- A way to develop trust between strangers
- A way to facilitate this development of trust
The group also decided that the security of the application was a critical feature that required exploration, and that the security aspects should be investigated in parallel with the research component. The first stage of development required determining a set of features that would provide "safety" in the context of the application's usage. After studying research by information security experts on the abuse of systems, the identity component of the application was found to be key to its safety.
If all users of the application (not merely their accounts, but the users themselves) could be uniquely identified, then any offence would be tied to them as a person, removing their anonymity. Research revealed that this anonymity was a key cause of abuse. Further research was then conducted to determine the best way to achieve this identification.
It was found that, as part of a 100-points-of-identification scheme introduced to prevent money laundering in Australia, many financial institutions are required to collect a government ID. This includes many fully online services, such as those trading in currencies like Bitcoin, and the requirement did not deter usage. Other peer-to-peer applications, such as Uber, also require a government ID. This supported the inclusion of ID collection as a feature for preventing anonymity. Interviews conducted with users during the term showed that users would be willing to upload a government ID, provided the application was secure and would not disclose it. Users compared this process to Uber, showing that people are already open to this kind of authentication.

This feature was then accepted. The signup process would require that, before an account could perform any functions involving interaction with other users, an ID would need to be uploaded. This could easily be done from any mobile device with a reasonable-resolution camera, and existing solutions would allow the ID to be verified automatically. Once identity is positively established, an objective rating system can be applied to ensure no abuse goes unpunished. Due to the nature of the application and the dangers associated with it, it was decided that any abuse could not be forgiven and would need to result in a permanent ban.
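The gating described above can be sketched as follows. This is a minimal illustration, not the project's implementation; the class and function names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical account state for sketching the signup gate."""
    id_uploaded: bool = False   # government ID has been uploaded
    id_verified: bool = False   # ID passed automatic verification
    banned: bool = False        # any abuse results in a permanent ban


def can_interact(account: Account) -> bool:
    """Interaction features stay locked until the ID is verified,
    and a permanent ban can never be lifted."""
    return account.id_verified and not account.banned
```

A new account therefore cannot contact other users until verification completes, and a banned account is locked out regardless of its ID status.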
This resulted in the concept of a TrustMark. A TrustMark is associated with each user and signifies that the user has uploaded a government ID and has not committed any offence. It would be revoked if any abuse was committed.
The idea of the TrustMark was further developed by creating a set of criteria that must be met for a TrustMark to be retained. These were made as objective as possible to reduce the scope for abuse. The initial criteria decided on were:
- My helper openly offered their TrustMark
- My helper tried to help me
- My helper was polite
- My helper did not make any unwanted requests or advances
- My helper respected me and my beliefs
- My helper requested consent before engaging in any activity that involved physical interaction
- My helper did not request I connect with them later
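The retention rule implied by these criteria can be sketched as below. This is an illustrative sketch only; the criterion keys and function name are hypothetical, but the logic follows the text: every criterion must be affirmed by the helpee, and any recorded abuse revokes the mark permanently.

```python
# Hypothetical criterion keys corresponding to the checklist above.
TRUSTMARK_CRITERIA = [
    "openly_offered_trustmark",
    "tried_to_help",
    "was_polite",
    "no_unwanted_requests_or_advances",
    "respected_me_and_my_beliefs",
    "requested_consent_for_physical_interaction",
    "did_not_request_later_connection",
]


def retains_trustmark(feedback: dict, permanently_banned: bool) -> bool:
    """A helper keeps their TrustMark only if every criterion is affirmed
    and no abuse has ever been recorded (bans are permanent)."""
    if permanently_banned:
        return False
    return all(feedback.get(criterion, False) for criterion in TRUSTMARK_CRITERIA)
```

Because the criteria are objective yes/no questions, the check reduces to a conjunction, leaving no room for discretionary interpretation.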
In addition, before a helper and helpee could connect, the helpee would be required to scan the helper's barcode. This would prove the responder was the correct one, preventing malicious actors from targeting people through the application. A photo of the helper would also be displayed during this process, to guard against a stolen phone being used to impersonate them.
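The verification handshake above can be sketched as a one-time code issued when a helper is assigned to a request; the helpee scans the code rendered on the helper's screen, and a match proves the person present is the assigned responder. The function names and storage are hypothetical, assumed for illustration.

```python
import secrets


def issue_verification_code(assignments: dict, request_id: str, helper_id: str) -> str:
    """When a helper is assigned, issue a one-time code to be
    rendered as the barcode on the helper's device."""
    code = secrets.token_urlsafe(8)
    assignments[request_id] = (helper_id, code)
    return code


def verify_scanned_code(assignments: dict, request_id: str, scanned: str) -> bool:
    """The helpee scans the helper's barcode; a match confirms the
    person present is the responder assigned to this request."""
    helper_id, expected = assignments.get(request_id, (None, None))
    return expected is not None and secrets.compare_digest(scanned, expected)
```

Using a per-request one-time code (rather than a static user code) means an old screenshot of a barcode cannot be reused to impersonate a helper on a later request.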
It was found that people implicitly accepted the security of the application without being aware of the presence of its transparent security features. Where these manifested in a user-facing form, such as the "Scan code to verify your helper" interface, people did not find that this reduced accessibility, and they understood the security premise.
People also indicated that, in general, there was nothing other than getting to know someone that would promote trust. However, it is sometimes possible to determine from a distance that someone is untrustworthy, through body language.
When probed about trusted people in their lives and what made them trustworthy, participants mentioned qualities such as kindness and caring.
Finally, people indicated that it would be unlikely they would trust a stranger for an interaction such as walking them to their car.
Initial interviews indicated strongly that developing an application such as Moonlight to facilitate trust would not be trivial. From the interviews, it was clear that one of the only factors that would actually develop trust was getting to know a person over a period longer than that expected during an app-managed interaction. It was decided that the best approach was to perform in-depth research into the literature on trust, to determine what factors could be changed to encourage people to trust strangers. As a result of the first round of interviews and discussion, the intent of the application shifted. Initially, the application catered specifically to safety in situations where people felt unsafe. However, it was discovered that the need for assistance was more general than safety alone. It was decided that a "general assistance" feature should be trialled, allowing people to request assistance in a 1:1 context, with one helper and one receiver. This was a large shift, but it would extend the use case of the application beyond the very narrow 'emergency' use case.
The motivation behind this was to ensure there would be a reason for 'helpers' to use the application. Helpees have an incentive to use the application because they receive help. However, a helper may never need to request help, and because of the situations in which people feel unsafe (mostly late at night, in empty areas), it is likely many helpers would rarely have the chance to help others. This would prevent the application from gaining traction (reducing its usefulness), as helpers would not be motivated to sign up.
Introducing a general assistance section resolves this issue: all users are exposed to help requests and can request help in a wider context, which promotes general usage. A pre-release could also be conducted to ensure there would be enough supply to meet demand.
Following the interviews, extensive research into trust was conducted (see also: building and establishing trust). This research revealed that the key components of trust are that the trustor is vulnerable to the trustee, that the trustor thinks well of the trustee in some domain, that the trustor is optimistic the trustee is competent in some respect, and that the trustor is optimistic the trustee will have a certain motive for acting.
This was discussed in the context of Moonlight and key areas for improvement were identified.
- Trust is a byproduct of vulnerability; as such, the application cannot completely remove vulnerability if trust is to be developed.
- The motivation (or expected motivation) of the trustee has to be in line with what the trustor wants it to be.
Although trust is difficult to define, it can be described as a firm reliance on the integrity, ability, or character of a person or thing, whereas safety is the condition of being safe: freedom from danger, risk, or injury. By ensuring people are physically safe, the application can promote the natural formation of trust.
It was also realized that a key motivation for using the application, for many people, would be social connection. A helper may help someone in order to connect socially, just as a helpee may request help for the same reason. Because of this, it was identified that the final condition of trust, that a trustee's motivation is in line with the trustor's expectation, may not always be satisfied. This caused a significant design change in two areas of the application. The first was the connections interface, a feature that allows helper and helpee to reconnect at a later time if they felt a connection.
Further research revealed that, in a broad sense, connections could be broken into four key categories:
- Friendship - for example a friend group
- Romantic Relationships - for example a married couple
- Professional Relationships - for example a mentor and mentee / professional networks
- Intellectual / Activity Partners - for example study mates / sports teams / research teams
Although intellectual relationships and activity partners are more situational than dispositional, they would allow reconnection on the grounds of similar interests, where friendship would not necessarily need to be considered - for example, reconnecting with a helper who offered to help with learning a certain skill.
The idea behind the connections interface is that if two people share a mutual type of connection they can reconnect, and each will receive a notification of the connection. If they do not share one, neither is notified. This allows people to extend an offer for a connection in the area in which they truly want to reconnect, without the risk of judgement if this is not reciprocated. There was a debate about the importance of this for the Moonlight application, which was resolved by breaking down the use cases of the application with respect to people using it as a social outlet.
Both users are not using the application as a social outlet
Both users expect that the other is using the application in the same context as them, satisfying the final condition of trust. The connections interface will not pose an issue: if they want to reconnect they can, but they are under no pressure to do so.
One user is using the application for social connection, the other is not
Both users expect that the other is using the application in the same context as them; however, each is truly using it in a different context. The application policy mandates that one user cannot request another user 'reconnect' with them (justified on safety grounds; see the feature on the TrustMark mockup). So although there is a different intent behind the usage, this will not become apparent to the non-social user, and the fourth condition of trust is still satisfied. Finally, on reaching the connections interface, the social user can still use this outlet, and the non-social user can make an unpressured decision about how they feel, potentially satisfying the usage intent of the social user.
Both users are using the application as a social outlet
Both users expect that the other is using the application in the same context as them, satisfying the final condition of trust. The connections interface will allow both users to reconnect if desired, potentially satisfying both their usage intents.
In addition, it is possible that two social users will be looking for different types of connection. Offering the connection categories discussed above prevents two users from reconnecting with different intents, which could create negative feelings that would become associated with the application.
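The mutual-match rule behind the connections interface can be sketched as a simple set intersection: each user privately selects the categories under which they would reconnect, and a notification is sent only when the two selections overlap. The category names and function names here are illustrative assumptions, not the project's actual identifiers.

```python
# Hypothetical category identifiers for the four connection types.
CATEGORIES = {"friendship", "romantic", "professional", "activity"}


def mutual_connection(offer_a: set, offer_b: set) -> set:
    """Return the categories both users offered; an empty set means
    neither user is notified (and neither learns what the other offered)."""
    return (offer_a & CATEGORIES) & (offer_b & CATEGORIES)


def should_notify(offer_a: set, offer_b: set) -> bool:
    """Notify both users only when at least one category is mutual."""
    return bool(mutual_connection(offer_a, offer_b))
```

Because an unreciprocated offer is never revealed, a user can safely express interest in, say, an activity-partner connection without risking the judgement the text describes.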
In user testing conducted with mockups, users responded that they liked the idea of the feature but felt uncomfortable with the terminology used for 'romantic relationships'. Female users expressed that this made the application feel 'sleazy' and made them uncomfortable. It was identified that the cause was the terminology itself. A variety of different terms were trialled, and it was finally settled that 'Felt a deeper connection' carried no negative connotations and did not elicit a negative response.
High-fidelity testing with this change implemented showed that the reception of the feature was very positive, and the previous reservations about the terminology had been resolved.
However, due to the social stigma associated with starting conversations with strangers, this is not always possible. The application could provide a judgement-free environment to facilitate this, with that positive, judgement-free environment enforced through the TrustMark. This would provide a unique opportunity to bundle the utility of general assistance with the potential benefits of social assistance (feeling that there is always someone there, no matter what type of help is required).
This manifested in the 'Hello' feature set, which would function in a similar way to help requests, although it would only match a user with other people in the area who were also using the Hello feature. It would be opt-in, however, with the possibility of subscribing to these events even when not using the feature directly.
At this stage, many of the earlier worries and pain points had already been removed, but there was still refinement to do. One particular area of concern expressed by many users was the "same sex / same nationality" option on assistance requests, which would allow users to request help from someone of the same gender or nationality. Users stated that they felt uncomfortable with this on the assistance screen; however, they did state that for the 'Hello' / social request, it would be a useful feature.
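The Hello matching described above, with the optional same-gender / same-nationality filter retained for social requests only, can be sketched as follows. The data model and function names are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class HelloUser:
    """Hypothetical user record for sketching Hello matching."""
    user_id: str
    gender: str
    nationality: str
    opted_in: bool  # Hello is opt-in; non-users are never matched


def match_hello(requester: HelloUser, nearby: list,
                same_gender: bool = False,
                same_nationality: bool = False) -> list:
    """Return IDs of nearby users eligible to match with the requester.
    Only opted-in users are considered, and the optional filters
    (offered on social requests only) narrow the candidates further."""
    matches = []
    for user in nearby:
        if not user.opted_in or user.user_id == requester.user_id:
            continue
        if same_gender and user.gender != requester.gender:
            continue
        if same_nationality and user.nationality != requester.nationality:
            continue
        matches.append(user.user_id)
    return matches
```

Keeping the filter parameters off the assistance path and exposing them only for 'Hello' requests reflects the preference users expressed in testing.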