A modality represents a single sensory input or output channel between a human and a computer. Modern digital devices use various modalities to offer interaction to their users. For instance, a computer utilizes a mouse, a keyboard, and a graphical user interface to facilitate interaction with humans. More advanced devices use additional input modalities such as sound, gestures, and eye gaze. This paper analyzes the modalities and interaction principles of a screen-based physical product. The product of choice is a smartphone, specifically the Google Pixel 4L, which was released in the last quarter of 2019. The phone presents multiple modalities of interaction, including audio, vision, and touch.
At the outset, the Pixel 4L contains multiple modalities to afford the user as much interaction as possible. The primary interaction between the device and its users is via touch. The touch modality is executed via the phone's touchscreen. Hence, according to the principles of design, the following apply:
The screen serves as both an input and output device and is layered directly atop an electronic visual display of the phone's information processing system. Hence, the phone relies on a blend of touch and visual modalities to deliver services to its users. The touch aspect allows the user to enter commands and instruct the device to perform a specified function. The user gives input by touching the screen's surface at the point where the link to the service or application is displayed. Touch inputs are entered through simple or multi-touch gestures made with one or more fingers. The Pixel 4L also allows users to issue voice commands. For voice commands, the phone has a microphone that collects acoustic data and feeds it to the appropriate application. For instance, when the user wants to initiate communication with another user through telecommunication protocols, they must start the 'phone call' application, dial the ID of the device to be called using the touchscreen interface, and then respond through the microphone. The microphone therefore acts as a data collection device.
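The distinction between simple touch gestures can be sketched in code. The following is a minimal, illustrative classifier, not the Pixel's actual implementation; the function name and the pixel and millisecond thresholds are assumptions. It labels a single-finger touch sequence from its start point, end point, and duration:

```python
# Hypothetical sketch: classify a single-finger touch sequence as a tap,
# swipe, or long-press. Thresholds are illustrative, not Pixel firmware values.

def classify_gesture(x0, y0, x1, y1, duration_ms,
                     move_threshold_px=20, tap_max_ms=300):
    """Return 'tap', 'swipe', or 'long-press' for one touch sequence."""
    # Distance the finger traveled between touch-down and touch-up.
    distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if distance >= move_threshold_px:
        return "swipe"
    if duration_ms <= tap_max_ms:
        return "tap"
    return "long-press"
```

A real gesture recognizer would also track multi-finger contacts, velocity, and intermediate samples; this sketch only shows the basic thresholding idea.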
The Google Pixel 4L comes with other sensors that enhance interaction, in addition to the microphone, touchscreen display, and speaker. They include a gyroscope, accelerometer, proximity sensor, compass, barometer, and camera. The gyroscope works hand in hand with the accelerometer to detect the orientation of the phone. The gyroscope adds dimension to the data that the accelerometer supplies by tracking twist or rotation. The proximity sensor, on the other hand, detects how close an object is to the device. It is very convenient when the user holds the phone next to the ear during a voice call. By detecting how far the user is from the phone, the information processing system can switch off the touchscreen to avoid accidental touches. The digital compass works by tracking orientation in relation to the earth's magnetic field. Digital compasses like the one installed in the Google Pixel 4L have an onboard magnetometer to track direction. The barometer assists the location tracking chip in delivering altitude data, while the camera records visual data and gestures.
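The proximity-driven screen switch-off described above amounts to a simple rule. The sketch below is a hedged illustration of that logic; the function name and the 5 cm threshold are assumptions, not Pixel firmware:

```python
# Illustrative sketch of proximity-based screen blanking during a call.
# The threshold value is assumed for demonstration purposes.

def screen_should_stay_on(proximity_cm, in_call, near_threshold_cm=5.0):
    """During a voice call, turn the screen off when an object (the ear)
    is within the near threshold, to avoid accidental touches."""
    if in_call and proximity_cm <= near_threshold_cm:
        return False
    return True
```

Outside a call the screen stays on regardless of proximity, which mirrors the behavior the paragraph describes.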
The screen has an "ON" state and an "OFF" state. In the "ON" state, the screen displays the current state of the device, including current application links, battery level, and notifications. The state of the phone can be accessed through the device's settings or by pressing the volume and power buttons on the side.
Feedback is delivered through the display of appropriate messages or the execution of the specified commands. It is also delivered through haptics and through audio from the phone's speaker. The touch modality of the Pixel 4L is combined with a haptic response system to provide multimodal capabilities. Like other smartphones, the Pixel 4L utilizes an eccentric rotating mass motor to send vibrotactile feedback from the information processing system. Basically, the device provides vibratory feedback when the user taps a button on the touchscreen. Haptics enhance the user's experience by delivering simulated tactile feedback and are designed to react immediately, partly countering the latency of the display system. Haptics are best known for reinforcing interaction between the user and the device, leading to more immersive experiences for users. Research shows that the combination of the touchscreen and haptics helps users reduce errors, increase input speed, and reduce their cognitive load (Brasel & Gips, 2014). Hence, the inclusion of haptics in the Google Pixel 4L is meant to enhance the interface with the user. Since haptics engage the sense of touch, they are closely related to the touch modality. Haptic feedback is advantageous in that it can give stimuli to visually impaired users, especially in situations necessitating navigation. Various vibration stimuli can be used depending on the instance in question. Most of the vibrotactile information transmitted encompasses basic information such as alerts.
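Vibrotactile signals of the kind described above are commonly encoded as alternating on/off durations. The sketch below illustrates that idea; the pattern names and timings are hypothetical, not values from Google's actual haptics engine:

```python
# Hypothetical mapping of interface events to vibration patterns, expressed
# as alternating on/off durations in milliseconds. Names and timings are
# assumptions for illustration only.
HAPTIC_PATTERNS = {
    "button_tap": [10],             # one short pulse
    "notification": [50, 100, 50],  # pulse, pause, pulse
    "error": [100, 50, 100, 50, 100],
}

def total_vibration_ms(event):
    """Sum only the 'on' segments (even indices) of an event's pattern."""
    pattern = HAPTIC_PATTERNS.get(event, [])
    return sum(pattern[::2])
```

Distinct patterns let a user tell an alert from an error without looking at the screen, which is precisely the benefit to visually impaired users noted above.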
Apart from the visual and tactile modalities, the Pixel 4L incorporates audio to enable voice commands as input and spoken responses as feedback. The phone's speaker transmits audio data that is perceptible to the user. A common application on the Pixel 4L that relies on the audio modality is the Google Assistant 'voice robot.' The application records voice commands from the user and provides real-time feedback. Indeed, the user can navigate the internet and have articles read aloud without using the screen at all. The Google Assistant's algorithms are powerful enough to process voice inputs and carry context throughout the interaction session. The audio modality also encompasses voice recognition protocols, which can be used to lock or unlock the phone. Natural modalities of interaction involving speech depend on recognition-based technologies that are characteristically error-prone. For instance, speech recognition is sensitive to audio signal quality, vocabulary size, and variability of voice parameters.
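At its simplest, a voice-command pipeline reduces to mapping a recognized transcript to a device action. The toy dispatcher below is illustrative only; Google Assistant's real processing is far more sophisticated, and all names and phrases here are assumptions:

```python
# Toy sketch of transcript-to-action dispatch. The command phrases and
# action names are invented for illustration.

def dispatch_voice_command(transcript):
    """Map a recognized transcript to a (action, argument) pair,
    or None when the command is not recognized."""
    text = transcript.lower().strip()
    if text.startswith("call "):
        return ("start_call", text[len("call "):])
    if "unlock" in text:
        return ("unlock_phone", None)
    if text.startswith("read "):
        return ("read_aloud", text[len("read "):])
    return None
```

The `None` branch matters: as the paragraph notes, speech recognition is error-prone, so a real system must handle unrecognized or misheard input gracefully.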
The user interface of the Pixel 4L's touchscreen presents multiple buttons and functions depending on the particular application in use. For the interaction to succeed, the user must accurately select targets and avoid selecting adjacent ones. For instance, when opening the internet browser, the user must deliberately point and touch the browser icon with a finger; otherwise, they may end up selecting the neighboring icons of other applications. The volume and power buttons, the screen, and the camera also act as signifiers of the phone's multiple computing functions when the device is in its off state.
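Accurate target selection comes down to hit-testing: checking which icon's bounding box contains the touch point. A minimal sketch, with hypothetical icon names and coordinates, follows:

```python
# Minimal hit-testing sketch. Targets map an icon name to its bounding
# box as (left, top, width, height) in screen pixels; values are invented.

def hit_test(touch_x, touch_y, targets):
    """Return the name of the first icon whose box contains the touch,
    or None when the touch lands on empty screen."""
    for name, (left, top, width, height) in targets.items():
        if left <= touch_x <= left + width and top <= touch_y <= top + height:
            return name
    return None
```

A touch that lands between two adjacent boxes, or barely inside the wrong one, produces exactly the mis-selection the paragraph warns about, which is why icon spacing and target size matter.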
Most applications display the primary content at the center of the screen and secondary content along the top and bottom edges. This is most likely because mobile phone users commonly focus their attention on the center of the screen. By placing primary content in the center, the designers of the Google Pixel 4L also took ease of use into account, as the center of the screen is usually simpler to reach and manipulate with the thumb. It is common for smartphone users to grip the phone with the weaker fingers and manipulate content with the thumb. The display area of the screen is 98.0 cm², which is sufficient for the touch modality and considerably larger than what similar products offer.
The Google Pixel 4L comes with a relatively small screen, and its battery capacity means users can only interact with the device for limited sessions. Additionally, only a single application is viewable at a time. Sometimes, connectivity to other devices and networks varies depending on hardware limitations and network conditions.
In conclusion, the Google Pixel 4L is a multimodal device that relies on touch, visual, and audio modalities to enable interaction and functionality.