Talk Tap Talk is a prototype made by Team Houdini for the Tap device using Unity3D and Tap’s SDK. This Android app augments Tap’s existing typing function by adding audio output for every character the user taps, which expands the current uses of the Tap device.
We discovered the problem space while testing the typing function of the Tap device. We used the default map that Tap provides for typing English. The typing function worked; however, we quickly found two fundamental issues with it:
- When you type, you cannot know which character you just typed without looking at the screen.
- It is hard to delete what has already been tapped.
The first issue is extremely important, especially for people with visual impairments. Imagine that you cannot see clearly, or cannot see at all: you wear this device and type a character, and there is no way to know whether you typed the character you intended. That leaves great room for improving the typing function of the Tap device.
The target user group for this prototype app is people with visual impairments: people who cannot see at all, people who cannot see things far from the eye, people who cannot see things close to the eye clearly, and people who cannot see what is on the screen for other reasons. These people are the main target user group for this prototype.
Our vision for this prototype is to solve the problem stated above: to give people who type with the Tap device another way to know whether they typed the character they intended. Audio output is the method we are strongly considering, since humans have used the voice to pass messages throughout history.
For the prototype, we designed and developed an Android app using Unity3D and Tap’s SDK. With the problem space and possible solutions in mind, we provided the following features to improve the experience of typing with the Tap device.
To address the lack of feedback for people who cannot see the content on the screen, we provide two ways to communicate the typed input back to the user:
- After the user taps each character with the Tap device, audio feedback states the character that was just typed.
- When the user types whitespace or punctuation, which marks the end of a word, the app plays audio stating the word the user has just typed.
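The two feedback modes above can be sketched as follows. This is a minimal Python sketch of the buffering logic only, not our Unity/C# code; the `play_audio` callback and the `VoiceFeedback` class name are assumptions for illustration and are not part of Tap’s SDK.

```python
# Characters that end a word and trigger the whole-word announcement.
WORD_BREAKS = set(" .,!?;:")

class VoiceFeedback:
    """Buffers typed characters and announces them in the two modes above."""

    def __init__(self, play_audio):
        # play_audio is a hypothetical callback that speaks a string aloud.
        self.play_audio = play_audio
        self.word = []

    def on_character(self, ch):
        if ch in WORD_BREAKS:
            # Mode 2: whitespace/punctuation ends the word, so announce it.
            if self.word:
                self.play_audio("".join(self.word))
                self.word.clear()
        else:
            # Mode 1: announce every character as it is typed.
            self.play_audio(ch)
            self.word.append(ch)
```

Feeding the characters of "hi " through `on_character` would speak "h", "i", and then the whole word "hi".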
Tap provides open-source SDKs for multiple platforms, including iOS, Android, Windows, and Unity. With the SDKs, we found it very easy to map finger combinations to the characters users are typing.
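As an illustration of that mapping, the sketch below decodes one tap into a character, assuming each tap arrives as a small integer bitmask with one bit per finger (with the thumb in the lowest bit, an assumption about the bit order). Only the five single-finger vowels of Tap’s default map are shown; the full default map covers all letters via multi-finger combinations.

```python
# Partial keymap: Tap's default map assigns the five vowels to single
# fingers (thumb=a, index=e, middle=i, ring=o, pinky=u). The bit order
# (bit 0 = thumb) is an assumption for this sketch.
KEYMAP = {
    0b00001: "a",  # thumb
    0b00010: "e",  # index
    0b00100: "i",  # middle
    0b01000: "o",  # ring
    0b10000: "u",  # pinky
}

def decode_tap(bitmask):
    """Translate a 5-bit finger bitmask into a character, or None if unmapped."""
    return KEYMAP.get(bitmask)
```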
We gathered voice recordings for every English letter and mapped them to the finger combinations using this map, so that when the user taps a certain finger combination, the matching audio plays on the device.
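The per-character lookup can be as simple as a table from decoded characters to pre-recorded clips. The file-naming scheme and `clip_for` helper below are assumptions for illustration; in a Unity app the clips would be AudioClip assets played through an AudioSource rather than file paths.

```python
# Hypothetical clip table: one pre-recorded voice file per English letter.
AUDIO_CLIPS = {chr(c): f"letter_{chr(c)}.wav" for c in range(ord("a"), ord("z") + 1)}

def clip_for(character):
    """Return the voice clip recorded for the typed character, if any."""
    return AUDIO_CLIPS.get(character.lower())
```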
We found that adding voice feedback for typing improves the typing experience, especially for people who cannot see clearly. It works exactly as we expected. The audio output augments the visual feedback from the screen, which makes it easier to type correctly with the Tap device.
We also found that Tap already provides a solution for typing with voice feedback on both the iOS and Android platforms.
Voice feedback can help Tap reach users who cannot type with common input devices for reasons including visual impairment. It can also be a good solution when there is no screen to provide visual feedback.