#Chapter - 7 @HCI (Important Points)
INTERFACE
Multimedia :
~ Combines different media within a single interface with various forms of interactivity
- Graphics, text, video, sound, and animation
~ Users click on links in an image or text
- Another part of the program
- An animation or a video clip is played
- Users can return to where they were or move on to another place
~ Can provide better ways of presenting information than a single medium can
~ How to design multimedia to help users explore, keep track of, and integrate the multiple
representations
- Provide hands-on interactivities and simulations that the user has to complete to
solve a task
- Provide quizzes, electronic notebooks, and games
~ Multimedia is good for supporting certain activities, such as browsing, but is less suited to reading at length
Virtual Reality :
~ Computer-generated graphical simulations providing:
- “the illusion of participation in a synthetic environment rather than external
observation of such an environment” (Gigante, 1993)
~ Provide new kinds of experience, enabling users to interact with objects and navigate in
3D space
~ Create highly-engaging user experiences
~ Much research on how to design safe and realistic VRs to facilitate training
- For example, flying simulators
- Help people overcome phobias (for example, spiders or talking in public)
~ Design issues
- How best to navigate through them (for instance, first versus third person)
- How to control interactions and movements (for example, by using head and
body movements)
- How best to interact with information (for instance by using keypads, pointing,
and joystick buttons)
- Level of realism to aim for to engender a sense of presence
Website Design :
~ Early websites were largely text-based, providing hyperlinks
~ Concern was with how best to structure information to enable users to navigate and access it easily and quickly
~ Nowadays, more emphasis is on making pages distinctive, striking, and aesthetically
pleasing
~ Need to think about how to design information for multiple platforms and input styles (keyboard or touch?)
- For example, smartphones, tablets, and PCs
~ Many books and guidelines on website design
~ Veen’s (2001) three core questions to consider when designing any website:
1) Where am I?
2) Where can I go?
3) What’s here?
Mobile Interfaces :
~ Handheld devices intended to be used while on the move
~ Have become pervasive, increasingly used in all aspects of everyday and working life
- For example, phones, fitness trackers, and smartwatches
~ Larger-sized tablets used in mobile settings
- Including those used by flight attendants, marketing professionals, and at car
rental returns
~ Mobile interfaces can be cumbersome to use for those with poor manual dexterity or ‘fat’
fingers
~ Key concern is hit area:
- Area on the phone display that the user touches to make something happen,
such as a key, an icon, a button, or an app
- Space needs to be big enough for all fingers to press accurately
- If too small, the user may accidentally press the wrong key
- Fitts’ law can be used to help choose the right target sizes and spacing (see the sketch below)
- Minimum tappable areas should be 44 x 44 points for all controls
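Below is a minimal sketch of how Fitts’ law and the 44-point guideline can be applied when sizing hit areas. The Shannon formulation MT = a + b·log2(D/W + 1) is standard, but the coefficient values, function names, and example numbers here are illustrative assumptions rather than measured data or any platform’s API.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to hit a target of the given width at the
    given distance, using the Shannon formulation of Fitts' law.
    The coefficients a and b are illustrative; real values are fitted
    from user data for a particular device and population."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

def is_tappable(width_pt, height_pt, minimum_pt=44):
    """Check a control against the commonly cited 44 x 44 point minimum."""
    return width_pt >= minimum_pt and height_pt >= minimum_pt

# A small, distant key is harder (slower) to hit than a large, nearby one.
print(fitts_movement_time(distance=200, width=20))  # high index of difficulty
print(fitts_movement_time(distance=50, width=60))   # low index of difficulty
print(is_tappable(40, 40))                          # False: below the minimum
```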
Appliances :
~ Everyday devices in home, public places, or car
- For example, washing machines, remotes, toasters, printers, and navigation systems
~ And personal devices
- For instance, digital clock and digital camera
~ Used for short periods
- For example, starting the washing machine, watching a program, buying a ticket,
changing the time, or taking a snapshot
~ Need to be usable with minimal, if any, learning
Touchscreen :
~ Provides fluid and direct styles of interaction involving freehand and pen-based gestures
for certain tasks
~ Core design concerns include whether the size, orientation, and shape of touch displays affect collaboration
~ Much faster to scroll through wheels, carousels, and bars of thumbnail images or lists of
options by finger flicking
~ Gestures need to be learned for multi-touch, so a small set of gestures for common commands is preferable (see the sketch after this list)
~ More cumbersome, error-prone, and slower to type using a virtual keyboard on a touch
display than using a physical keyboard
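As noted above, multi-touch gestures have to be inferred from raw touch points, which is one reason a small, learnable gesture set is preferable. Here is a minimal sketch of classifying a two-finger pinch versus a pan; the thresholds and function names are illustrative assumptions, not any toolkit’s API.

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_two_finger_gesture(start_points, end_points, threshold=20.0):
    """Classify a two-finger gesture from the first and last positions of two
    touch points: 'pinch out', 'pinch in', or 'pan' (fingers moving together)."""
    spread_change = distance(*end_points) - distance(*start_points)
    if spread_change > threshold:
        return "pinch out (zoom in)"
    if spread_change < -threshold:
        return "pinch in (zoom out)"
    return "pan"

# Fingers start close together and end far apart: a zoom-in pinch.
print(classify_two_finger_gesture(((100, 100), (120, 100)),
                                  ((60, 100), (180, 100))))
```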
Gesture-based System :
~ Gestures involve moving arms and hands to communicate
~ Uses camera recognition, sensor, and computer vision techniques
- Recognize people’s arm and hand gestures in a room
- Gestures need to be presented sequentially to be understood (compare with the
way sentences are constructed)
~ How does the computer recognize and delineate the user’s gestures? (see the sketch after this list)
- Start and end points?
- Difference between a deictic (pointing) gesture and hand waving
~ How realistic must the mirrored graphical representation of the user be in order for it to be believable?
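A minimal sketch of one way to delineate gesture start and end points: segment the tracked hand speed with a hysteresis threshold, so that a pause marks the boundary between gestures. The thresholds and input format are illustrative assumptions, not a description of any particular tracking system.

```python
def segment_gestures(speeds, start_threshold=0.5, stop_threshold=0.2):
    """Split a stream of hand-speed samples (one per video frame) into
    (start_index, end_index) gesture segments using hysteresis: a gesture
    begins when speed rises above start_threshold and ends when it falls
    below stop_threshold."""
    segments = []
    start = None
    for i, speed in enumerate(speeds):
        if start is None and speed > start_threshold:
            start = i                      # gesture onset
        elif start is not None and speed < stop_threshold:
            segments.append((start, i))    # gesture offset
            start = None
    if start is not None:                  # gesture still in progress at the end
        segments.append((start, len(speeds) - 1))
    return segments

# Two bursts of movement separated by a pause -> two candidate gestures.
print(segment_gestures([0.0, 0.1, 0.8, 1.2, 0.9, 0.1, 0.0, 0.7, 1.0, 0.1]))
```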
Haptic Interfaces :
~ Provide tactile feedback
- By applying vibration and forces to a person’s body, using actuators that are
embedded in their clothing or a device they are carrying, such as a smartphone
~ Vibrotactile feedback can be used to simulate the sense of touch between remote people
who want to communicate
~ Ultrahaptics uses ultrasound to create the illusion of touching 3D shapes in midair
Shareable Interfaces :
~ Designed for more than one person to use:
- Provide multiple inputs and sometimes allow simultaneous input by co-located
groups
- Large wall displays where people use their own pens or gestures
- Interactive tabletops where small groups interact with information using their
fingertips
For example, DiamondTouch, Smart Table, and Surface
~ Core design concerns include whether size, orientation, and shape of the display have an
effect on collaboration
~ Horizontal surfaces compared with vertical ones support more turn-taking and
collaborative working in co-located groups
~ Providing larger-sized tabletops does not improve group working but encourages more
division of labor
~ Having both personal and shared spaces enables groups to work on their own and in a
group
- Cross-device systems have been developed to support seamless switching
between these, for example, SurfaceConstellations
Tangible Interfaces :
~ Type of sensor-based interaction, where physical objects, for example, bricks, are coupled
with digital representations
~ When a person manipulates the physical object/s, it causes a digital effect to occur, for
example, an animation
~ Digital effects can take place in a number of media and places, or they can be embedded
in the physical object
~ What kinds of conceptual frameworks to use to help identify novel and specific features
~ What kind of coupling to use between the physical action and digital effect (see the sketch after this list)
- If it is to support learning, then an explicit mapping between action and effect is
critical
- If it is for entertainment, then it can be better to design it to be more implicit and
unexpected
~ What kind of physical artifact to use
- Bricks, cubes, and other component sets are most commonly used because of
flexibility and simplicity
- Stickies and cardboard tokens can also be used for placing material onto a
surface
~ With what kinds of digital outputs should tangible interfaces be combined?
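A minimal sketch of the explicit versus implicit coupling choice described above: the physical events, effect names, and use of randomness are illustrative assumptions, not part of any particular tangible toolkit.

```python
import random

# Explicit coupling: each physical action maps to one predictable digital
# effect, which suits learning applications where cause and effect must be clear.
EXPLICIT_MAPPING = {
    "brick_placed_on_surface": "show_building_animation",
    "bricks_stacked": "play_growth_animation",
    "brick_removed": "hide_animation",
}

# Implicit coupling: the effect is deliberately surprising, which can work
# better for entertainment.
SURPRISE_EFFECTS = ["burst_of_confetti", "random_sound", "color_wash"]

def digital_effect(physical_action, mode="learning"):
    if mode == "learning":
        return EXPLICIT_MAPPING.get(physical_action, "no_effect")
    return random.choice(SURPRISE_EFFECTS)

print(digital_effect("bricks_stacked", mode="learning"))       # predictable
print(digital_effect("bricks_stacked", mode="entertainment"))  # surprising
```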
Augmented Reality :
~ Augmented reality: Virtual representations are superimposed on physical devices and
objects
~ Pokémon Go made it a household game
- Used the smartphone camera and GPS to place virtual characters onto objects in the environment as if they were really there (see the sketch after this list)
~ Many other applications including medicine, navigation, air traffic control, games, and
everyday exploring
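A minimal sketch of the GPS side of this kind of placement: compute the distance and bearing from the user to a virtual object and check whether it falls within the camera’s horizontal field of view. The haversine and forward-azimuth formulas are standard; the coordinates, field-of-view value, and function names are illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6371000

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees)
    from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def object_in_view(user_heading, bearing_to_object, fov_degrees=60):
    """True if the object's bearing falls inside the camera's field of view."""
    offset = (bearing_to_object - user_heading + 180) % 360 - 180
    return abs(offset) <= fov_degrees / 2

# Two nearby points: how far away is the virtual object, and is it on screen?
dist, bearing = distance_and_bearing(51.5007, -0.1246, 51.5014, -0.1419)
print(round(dist), "m away, bearing", round(bearing), "degrees")
print(object_in_view(user_heading=270, bearing_to_object=bearing))
```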
Wearables :
~ Comfort
- Needs to be light, small, not get in the way, fashionable, and preferably hidden in
the clothing
~ Hygiene
- Is it possible to wash or clean the clothing once worn?
~ Ease of wear
- How easy is it to remove the electronic gadgetry and replace it?
~ Usability
- How does the user control the devices that are embedded in the clothing?
Robots and Drones :
~ Main types
~ Remote robots used in hazardous settings
- Can be controlled to investigate bombs and other dangerous materials
~ Domestic robots helping around the house
- Can pick up objects and do daily chores like vacuuming
~ Pet robots as human companions
- Have therapeutic qualities, helping to reduce stress and loneliness
~ Sociable robots that work collaboratively with humans
- Encourage social behaviors
~ How do humans react to physical robots designed to exhibit behaviors (for example,
making facial expressions) compared with virtual ones?
~ Should robots be designed to be human-like or look like and behave like robots that serve
a clearly-defined purpose?
~ Should the interaction be designed to enable people to interact with the robot as if it were another human being, or in a more human-computer-like way (for example, pressing buttons to issue commands)?
~ Is it acceptable to use unmanned drones to take a series of images or videos of fields,
towns, and private property without permission or people knowing what is happening?
Brain-computer Interfaces :
~ Brain-computer interfaces (BCI) provide a communication pathway between a person’s
brain waves and an external device, such as a cursor on a screen
~ Person is trained to concentrate on the task, for example, moving the cursor
~ BCIs work by detecting changes in neural functioning in the brain (see the sketch after this list)
~ BCI applications:
- Games (for example, Brain Ball)
- Enable people who are paralyzed to control robots
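A minimal sketch of the detection step, assuming the common band-power approach: estimate the power of an EEG window in a frequency band (for example, the 8-12 Hz mu band, which drops during imagined movement) and move the cursor when it falls below a threshold. The sampling rate, band, and threshold are illustrative; real BCIs rely on per-user calibration and much more robust signal processing.

```python
import numpy as np

def band_power(signal, sample_rate, low_hz, high_hz):
    """Mean power of the signal within [low_hz, high_hz], estimated via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

def cursor_step(eeg_window, sample_rate=256, threshold=1000.0):
    """Move the cursor one step when 8-12 Hz band power drops below the
    threshold (a crude stand-in for detecting imagined movement), else stay."""
    power = band_power(eeg_window, sample_rate, 8, 12)
    return 1 if power < threshold else 0

# One second of synthetic EEG: a strong 10 Hz rhythm plus noise -> no movement.
t = np.arange(256) / 256
resting = 20 * np.sin(2 * np.pi * 10 * t) + np.random.randn(256)
print(cursor_step(resting))
```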
Smart Interfaces :
~ Smart: phones, speakers, watches, cars, buildings, cities
~ Smart refers to having some intelligence and being connected to the internet and to other devices
~ Context-aware
- Understand what is happening around them and execute appropriate actions, for example, a Nest thermostat (a rule sketch follows at the end of this section)
~ Human-building interaction
- Buildings are designed to sense and act on behalf of the inhabitants but also
allow them to have some control and interaction with the automated systems
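A minimal sketch of the sense-then-act rule behind a context-aware device such as a Nest-style thermostat; the sensed inputs, setpoints, and deadband are illustrative assumptions, not Nest’s actual algorithm (which also learns schedules over time).

```python
def thermostat_action(room_temp_c, occupied, target_c=21.0, away_c=16.0, deadband=0.5):
    """Choose a heating action from sensed context: the current temperature
    and whether anyone is home (for example, from motion sensors or phone
    location). Setpoints and deadband are illustrative."""
    setpoint = target_c if occupied else away_c
    if room_temp_c < setpoint - deadband:
        return "heat_on"
    if room_temp_c > setpoint + deadband:
        return "heat_off"
    return "hold"

print(thermostat_action(18.0, occupied=True))   # heat_on: home and too cold
print(thermostat_action(18.0, occupied=False))  # heat_off: away setpoint is lower
```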