HCI CSC414


1.0 Types of Human Computer Interaction (HCI)


i. Graphical User Interface (GUI): A type of interface that allows users to
interact with a computer system through graphical icons.
ii. Menu Driven Interface: A type of interface that allows people to
interact with a computer system by presenting a series of menus that the
user works through to reach the desired operation (a minimal sketch
follows this list).
iii. Voice Driven Interface: A type of interface that allows the user to
communicate with the system through spoken commands, which the
system recognises and executes.
iv. Command Line Interface (CLI): A Command Line Interface is an
entirely text based interface that allows the user to communicate with the
system by typing in a command. However, the computer will only execute
specific predefined commands. Before GUIs were developed, CLIs were
the most widely used form of interface.
v. Touch Sensitive Interface: More widely known as touchscreens, touch
sensitive interfaces are popular and are used extensively in mobile devices.
Commands are issued and data is input via a finger or stylus pen. As well
as tapping, other actions with the finger are recognised by touch sensitive
interfaces, such as pinching and swiping.
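
To make the contrast between these interface styles concrete, here is a minimal
Python sketch of a menu driven interface (the menu options and function names
are invented purely for illustration):

    # Minimal sketch of a menu driven interface (illustrative only).
    # The user works through a numbered menu instead of typing free-form commands.
    def show_balance():
        print("Balance: 100.00")

    def make_deposit():
        print("Deposit recorded.")

    MENU = {
        "1": ("Show balance", show_balance),
        "2": ("Make a deposit", make_deposit),
    }

    def main():
        while True:
            print("\nMain menu")
            for key, (label, _) in MENU.items():
                print(f"  {key}. {label}")
            print("  q. Quit")
            choice = input("Select an option: ").strip()
            if choice == "q":
                break
            entry = MENU.get(choice)
            if entry is None:
                # Input is constrained to the listed options, unlike a CLI.
                print("Unrecognised option, please try again.")
            else:
                entry[1]()  # run the handler for the chosen menu item

    if __name__ == "__main__":
        main()

Because the loop only accepts the listed options, the user never needs to recall
command syntax; this is the central trade-off between menu driven and command
line interaction.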

2.0 Interaction in HCI


Interaction refers to the communication between at least two participants: the
user and the system. The purpose of an interactive system is to aid a user in
achieving goals from some application domain. Human beings interact with
countless devices and systems every day, and designers are constantly trying to
understand what makes the difference between devices that are easy to use and
those that are not. How should one design systems that support easy user
operation?
2.1 Terms of Interaction
i. Domain: An area of expertise and knowledge in some real-world activity.
Example: graphic design, authoring and process control in a factory.
ii. Goal: The desired output from a performed task
iii. Task: Operation to manipulate the concepts of a domain
iv. Intention: A specific action required to meet the goal

2.2 Models of Interaction in HCI


Interaction models help us understand what is going on in the interaction
between user and system. Human-computer interaction places great importance
on these intersecting features: how can we map human abilities onto a
computer's abilities, and how do humans and computers interact? The
interaction takes place within a social and organisational context that affects
both user and system. There are several models that researchers have applied.

(i) Norman’s Model of Interaction


The Donald Norman model of interaction, also called the execution-evaluation
cycle, is the most prominent model in Human-Computer Interaction. It was first
introduced in 1988. This model provides a framework for examining the
interaction. Norman's focal idea is that devices, things, computers, and
interfaces should be functional, easy to use, and intuitive (some systems are
harder to use than others). His notion is that there are two gulfs to avoid: the
gulf of execution and the gulf of evaluation. In this model the user chooses a
goal, formulates a plan of action and then executes it at the computer interface.
When the plan has been executed, the user observes the computer interface to
evaluate the results of the execution and then determines what further actions
to take. The model is influenced by the ergonomics of the interface and has two
major phases.

The Execution phase focuses on the difference between the user's formulation
of the actions and the actions that the system actually allows. This is the action
state that the system would be in; it determines the actions that are actually
available and the type of information that the system needs in order to operate.
The last stage is the Evaluation of the system state with respect to the goal.
This means that the actual state of the system is used to evaluate the actions
available to users and how people are allowed to interact with the system
through the interface. This is then used as a basis to determine whether the
goals established in the first stage were met. Norman's model of interaction is
divided into seven primary stages:
i. Forming the goal
ii. Forming the intention
iii. Specifying an action
iv. Executing the action
v. Perceiving the state of the world
vi. Interpreting the state of the world
vii. Evaluating the outcome
Norman defines two issues with these seven stages: the gulf of execution and
the gulf of evaluation.

i. Gulf of Execution: There is a difference between user actions and those
that the system can perform. An effective interface allows a user to perform
an action without system limitations.
ii. Gulf of Evaluation: There is a difference between the presentation of an
output and the user’s expectations. An effective interface can be easily
evaluated by a user.
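
Read as a loop, the seven stages give the model an almost algorithmic shape.
The Python sketch below is purely illustrative; the user and system objects and
their method names are invented and are not part of Norman's model:

    # Illustrative sketch of Norman's execution-evaluation cycle.
    # The 'user' and 'system' objects and their methods are hypothetical.
    def interaction_cycle(user, system):
        goal = user.form_goal()                      # 1. forming the goal
        while True:
            intention = user.form_intention(goal)    # 2. forming the intention
            action = user.specify_action(intention)  # 3. specifying an action
            system.execute(action)                   # 4. executing the action
            state = user.perceive(system.display)    # 5. perceiving the state of the world
            meaning = user.interpret(state)          # 6. interpreting the state of the world
            if user.evaluate(meaning, goal):         # 7. evaluating the outcome
                return                               # goal achieved; the cycle ends
            # otherwise the user forms a new intention and the cycle repeats

A wide gulf of execution shows up in steps 2-4 (the user cannot express the
intended action), while a wide gulf of evaluation shows up in steps 5-7 (the user
cannot tell what the system actually did).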

(ii) Abowd and Beale's Interaction Framework


The Abowd and Beale model of interaction is built upon the Norman model but
is designed to be more realistic. It extends Norman's model by including the
system's communication explicitly. This model focuses on four major
components:
Input (I), with the input language
Output (O), with the output language
System (S), with the core language
User (U), with the task language

According to this model, the user is not solely responsible for the interaction in
the Human-Computer Interaction framework. According to this model, the
system will also have a role to play. Abowd and Beale's interaction framework
identifies system and user components which communicate via the input and
output components of an interface. This communication follows a similar cyclic
sequence of steps from the user’s articulation of a task, through the system’s
performance and presentation of the task, to the user’s observation of this task’s
results, upon which the user can formulate further tasks. The framework
introduces languages for input and output in addition to the core and task
languages. By concentrating on the language translations, the interaction
framework allows us to determine if the concepts are being communicated
correctly.

Interaction Steps: The interaction framework identifies four steps in the
interaction cycle:
i. the user formulates the goal and a task to achieve that goal
ii. the interface translates the input language into the core language
iii. the system transforms itself into a new state
iv. the system renders the new state in the output language and sends it to the
user

Translations: The four translations involved in the interaction framework are:
 Articulation - the user articulates the task in the input language
 Performance - the interface translates the input language into stimuli for
the system
 Presentation - the system presents the results in the output language
 Observation - the user translates the output language into personal
understanding
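
The four translations can be sketched as one pass around this cycle. As with the
Norman sketch above, the objects and method names are invented for
illustration only:

    # Illustrative sketch of Abowd and Beale's interaction framework.
    def interaction_cycle(user, interface, system):
        # Articulation: the user expresses the task in the input language.
        task = user.articulate_task()
        # Performance: the interface turns the input language into stimuli
        # for the system, expressed in the core language.
        stimuli = interface.translate_input(task)
        # The system transforms itself into a new state.
        new_state = system.transform(stimuli)
        # Presentation: the system renders the new state in the output language.
        output = interface.render(new_state)
        # Observation: the user translates the output into personal
        # understanding, from which further tasks can be formulated.
        return user.observe(output)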

3.0 Ergonomics (Human Factors) of Interaction


Ergonomics is the scientific study of work: the people who do it and the ways in
which it is done. Ergonomics involves the study of the physical characteristics
of interaction. It is also known as human factors, though that term can also be
used to mean much of HCI. Ergonomics is about designing a job to fit the worker
so the work is safer and more efficient; implementing ergonomic solutions can
make employees more comfortable and increase productivity. Ergonomics is good
at defining standards and guidelines for constraining the way we design certain
aspects of a system. It is concerned (for example) with the tools people use, the
places that they work in and the procedures and practices that they follow. In
other words, ergonomics is concerned with the design of working systems.
Ergonomics is therefore a key component of HCI.

Human Computer Interaction is ergonomics by definition. Ergonomics is about
designing artefacts in such a way that usability is optimised for the capabilities
and limitations of humans. Ergonomics is certainly not just about physical
aspects, but also about perception and cognition; hence Human Computer
Interaction is mainly the domain of what we call 'cognitive ergonomics'. Once
we understand how a computer and user interact via an interface, there is a need
to better understand how to enhance user performance. Other terms for
ergonomics are functional design, user-friendly systems, human factors,
workplace efficiency and comfort design. There are many ergonomic factors
that come into play when designing a system:

i. Controls and display: Display sections and controls should be grouped
logically according to human perception. The logic of arrangement
depends on the application and the domain, such as by sequence, function,
or frequency (see the sketch after this list).
ii. Colours: Since humans are limited by visual perception, it’s key to design
colour properly. Colours should always be distinct, and the distinction of
colours should remain unaffected by changing contrast. Common colour
conventions should also be used (for example, red for a warning and green
for success).
iii. Storage: Although the modern business world is largely digital, the person
using the workstation may still need places to store items such as office
supplies, documents and mail. You should design the workstation with
these needs in mind, allowing room to store items and minimize clutter.
iv. Equipment Weight: The workstation's desk must be strong enough to
hold a monitor securely. If you plan to use a computer with a horizontal
chassis, the desk will have to hold it as well. Allowing for these items and
other peripherals such as a printer, the desk should be rated for a load of
100 pounds or more. You may need to purchase a desk with an increased
weight capacity if you plan to use multiple large monitors.
v. Networking: Most people require an Internet connection or access to
network resources when they work. If your office has Ethernet wiring, you
can facilitate this by placing the workstation near an Ethernet port.
Otherwise, you may need to consider a location where you can drill a hole
through the wall or floor to avoid cable clutter. Wireless networking is also
an option; in this case, the workstation should be placed as far as possible
from metal or brick walls that can degrade the signal or cause interference.
vi. Lighting and Windows: Using a computer in a poorly lit room can lead
to eyestrain and headaches. When designing a computer workstation,
consider its position. Windows can allow sufficient light into a room for
computer use during the day, but the light should be indirect; direct light
from the sun can cause glare when reflecting off a monitor. At night, a good
overhead light is vital. If the user will need to read physical documents at
the workstation, consider adding a desk light.
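
The first two factors translate directly into interface code. The tkinter sketch
below is a minimal, assumed example (the widget labels and colours are
invented): related controls are grouped in a labelled frame, and the common
red/green colour convention is applied to status messages.

    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("Grouped controls")

    status = tk.Label(root, text="Ready")

    def report_success():
        # Common colour convention: green for success.
        status.config(text="Saved successfully", fg="dark green")

    def report_warning():
        # Common colour convention: red for a warning.
        status.config(text="Unsaved changes!", fg="red")

    # Group related controls logically, per the controls-and-display guideline.
    file_group = ttk.LabelFrame(root, text="File")
    file_group.pack(fill="x", padx=8, pady=4)
    ttk.Button(file_group, text="Save",
               command=report_success).pack(side="left", padx=4, pady=4)
    ttk.Button(file_group, text="Discard",
               command=report_warning).pack(side="left", padx=4, pady=4)

    status.pack(fill="x", padx=8, pady=4)
    root.mainloop()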

4.0 Design Principles of HCI


The user interacts directly with hardware for human input and output, such as
displays, through a software interface, e.g. a graphical user interface, using the
given input and output (I/O) hardware. Software and hardware are matched so
that the processing of the user input is fast enough, and the latency of the
computer output is not disruptive to the workflow. The following experimental
design principles are considered when evaluating a current user interface or
designing a new one:
i. Early focus is placed on the user(s) and task(s): How many users are
needed to perform the task(s) is established and who the appropriate users
should be is determined (someone who has never used the interface, and
will not use the interface in the future, is most likely not a valid user). In
addition, the task(s) the users will be performing and how often the task(s)
need to be performed is defined.
ii. Empirical Measurement: The interface is tested with real users who come
in contact with the interface daily. The results can vary with the
performance level of the user, and the typical human-computer interaction
may not always be represented. Quantitative usability specifics, such as the
number of users performing the task(s), the time to complete the task(s),
and the number of errors made during the task(s), are determined (a short
computational sketch follows this list).

iii. Iterative Design: After determining what users, tasks, and empirical
measurements to include, the following iterative design steps are
performed:
(a) Design the user interface
(b) Test
(c) Analyse results
(d) Repeat
The iterative design process is repeated until a sensible, user-friendly interface is
created.
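
As an illustration of the empirical measurement step, the snippet below computes
the quantitative usability specifics mentioned above from a set of hypothetical
test observations (the data values are invented):

    # Hypothetical results: (seconds to complete the task, errors made).
    observations = [(42.0, 1), (35.5, 0), (58.2, 3), (40.1, 0), (47.8, 2)]

    num_users = len(observations)
    mean_time = sum(t for t, _ in observations) / num_users
    total_errors = sum(e for _, e in observations)

    print(f"Users tested:    {num_users}")
    print(f"Mean task time:  {mean_time:.1f} s")
    print(f"Errors per user: {total_errors / num_users:.1f}")

These numbers feed the iterative design loop: after each redesign, the test is
repeated and the measurements are compared against the previous iteration.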

5.0 Graphical User Interface Design


A Graphical User Interface (GUI) enables a person to communicate with a
computer through the use of symbols, visual metaphors, and pointing devices.
GUIs help your users do things within your device, platform, program, or app
without needing to type commands or know the coding behind the action. Some
specific examples are:

 Moving a document into the “Trash” folder on your desktop
 Clicking on an icon to launch an application
 Moving files from one folder to another

Since the dawn of computers, developers and designers have dreamed of creating
friendly human-computer interaction (HCI). These HCIs make for computer
operations that are intuitive and easy to learn without prior practice or knowledge
of specific computer languages. Creating a graphical user interface (GUI), which
allows users to directly interact with their devices and complete certain tasks by
manipulating elements like icons and scroll bars, is one way designers make their
digital devices more efficient and usable. Best known for its implementation in
Apple Inc.’s Macintosh and Microsoft Corporation’s Windows operating system,
the GUI has replaced the arcane and difficult textual interfaces of earlier
computing with a relatively intuitive system that has made computer operation
not only easier to learn but more pleasant and natural. The GUI is now the
standard computer interface, and its components have themselves become
unmistakable cultural artefacts.

The Graphical User Interface (GUI) and other forms of user interface design
(e.g., voice-controlled interfaces) are referred to collectively as UI design. A
GUI is a graphics-based operating system interface that uses icons, menus and a
mouse (to click on an icon or pull down a menu) to manage interaction with the
system. Developed by Xerox, the GUI was popularized by the Apple Macintosh
in the 1980s. At the
time, Microsoft’s operating system, MS-DOS, required the user to type specific
commands, but the company’s GUI, Microsoft Windows, is now the dominant
user interface for personal computers (PCs). A comprehensive GUI environment
includes four components: a graphics library, a user interface toolkit, a user
interface style guide and consistent applications.

The graphics library provides a high-level graphics programming interface. The
user interface toolkit, built on top of the graphics library, provides application
programs with mechanisms for creating and managing the dialogue elements of
the windows, icons, menus, pointers and scroll bars (WIMPS) interface. The user
interface style guide specifies how applications should employ the dialogue
elements to present a consistent, easy-to-use environment (i.e., “look and feel”)
to the user. Application program conformance with a single user interface style
is the primary determinant of ease of learning and use, and thus, of application
effectiveness and user productivity.

5.1 Fundamental Principles of Graphical User Interface Design


Generally accepted principles for Graphical User Interface design are:
i. Aesthetically Pleasing: Provide visual appeal by following these
presentation and graphic design principles:
 Provide meaningful contrast between screen elements.
 Create groupings.
 Align screen elements and groups.
 Provide three-dimensional representation.
 Use colours and graphics effectively and simply.
ii. Clarity: The interface should be visually, conceptually and linguistically
clear, including
 Visual elements
 Function
 Metaphors
 Words and text
iii. Compatibility: Provide compatibility with the following:
 The user
 The task and job
 The product
 Adopt the user’s perspective
iv. Comprehensibility: A system should be easily understood and learned. A
user should know the following
 What to do, What to look at, When to do it, Where to do it, Why to do it,
How to do it
 The flow of actions, responses, visual preparations and information should
be in a sensible order that is easy to recollect and place in context.
v. Configurability: Permit easy personalization, configuration and
reconfiguration of settings.
 Enhances a sense of control
 Encourages an active role in understanding
vi. Consistency: A system should look, act, and operate the same throughout.
Similar components should:
 Have a similar look
 Have similar uses.
 Operate similarly
 The same action should always yield the same result.
 The function of the elements should not change
 The position of standard elements should not change.
vii. Control: The user must control the interaction.
 Actions should result from explicit user requests
 Actions should be performed quickly
 Actions should be capable of interruption or termination
 The user should never be interrupted for errors
 The context maintained must be from the perspective of the user.
 The means to achieve goals should be flexible and compatible with the
user’s skills, experiences, habits and preferences.
 Avoid modes since they constrain the actions available to the user.
 Permit the user to customize aspects of the interface, while always
providing a proper set of defaults.
viii. Directness: Provide direct ways to accomplish tasks
 Available alternatives should be visible,
 The effect of actions on objects should be visible.
ix. Efficiency
 Minimize eye and hand movements, and other control actions.
 Transitions between various system controls should flow easily and freely.
 Navigation paths should be as short as possible.
 Eye movement through a screen should be obvious and sequential.
 Anticipate the user’s wants and needs whenever possible.
x. Familiarity: Employ familiar concepts and use a language that is familiar
to the user.
 Keep the interface natural, mimicking the user’s behaviour patterns.
 Use real world metaphors.
xi. Flexibility: A system must be flexible to the different needs of its users,
enabling a level and type of performance based upon:
 Each user’s knowledge and skills.
 Each user’s experience.
 Each user’s personal preference
 Each user’s habits
 The conditions at that moment
xii. Forgiveness
 Tolerate and forgive common and unavoidable human errors
 Prevent errors from occurring whenever possible.
 Protect against possible catastrophic errors.
 When an error does occur, provide constructive messages (see the sketch
after this list).
xiii. Predictability: Users should be able to anticipate the natural progression
of the task.
 Provide distinct and recognizable screen elements
 Provide cues to the result of an action to be performed
 All expectations should be fulfilled uniformly and completely.
xiv. Recovery: A system should permit:
 Commands or actions to be abolished or reversed.
 Immediate return to a certain point if difficulties arise.
Ensure that users never lose their work as a result of
 An error on their part
 H/W, S/W or communication problems.
xv. Responsiveness: The system must rapidly respond to the user’s requests.
 Provide immediate acknowledgement for all user actions
 Visual
 Textual
 Auditory
xvi. Simplicity
 Provide as simple an interface as possible
 Provide defaults
 Minimize screen alignment points.
 Make common actions simple at the expense of uncommon actions being
made harder.
 Provide uniformity and consistency
Three ways to provide simplicity:
 Present common and necessary functions first.
 Prominently feature important functions,
 Hide more sophisticated and less frequently used functions
xvii. Transparency
 Permit the user to focus on the task or job, without concern for the
mechanics of the interface.
 Workings and reminders of workings inside the computer should be
invisible to the user.
xviii. Trade-offs
 Final design will be based on a series of trade-offs balancing often-
conflicting design principles
 People’s requirements always take precedence over technical requirements
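
Several of these principles map directly onto everyday GUI code. The tkinter
sketch below is a minimal, assumed example (the file names and layout are
invented) of forgiveness and recovery: a destructive action asks for
confirmation, and the last deletion can be undone.

    import tkinter as tk
    from tkinter import messagebox

    root = tk.Tk()
    root.title("Forgiveness and recovery")

    items = tk.Listbox(root)
    for name in ("report.txt", "draft.txt", "notes.txt"):
        items.insert(tk.END, name)
    items.pack(padx=8, pady=8)

    undo_stack = []  # (index, value) pairs so deletions can be reversed

    def delete_selected():
        selection = items.curselection()
        if not selection:
            return
        index = selection[0]
        # Forgiveness: protect against a possibly catastrophic error.
        if messagebox.askyesno("Confirm", f"Delete {items.get(index)!r}?"):
            undo_stack.append((index, items.get(index)))
            items.delete(index)

    def undo_delete():
        # Recovery: permit the action to be reversed.
        if undo_stack:
            index, value = undo_stack.pop()
            items.insert(index, value)

    tk.Button(root, text="Delete",
              command=delete_selected).pack(side="left", padx=8, pady=4)
    tk.Button(root, text="Undo",
              command=undo_delete).pack(side="left", padx=8, pady=4)
    root.mainloop()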

Advantages of Graphical User Interfaces


(i) Lower learning curve. With a GUI, users don’t need to learn specific
commands or have expert computer skills.
(ii) Lower interaction cost. The user doesn’t have to type commands using a
keyboard; they can navigate to the graphical object and click or tap on it to
perform the action.
(iii) Immediate feedback. Users manipulate objects in real-time and can see
the results of their actions.

Disadvantages of Graphical User Interfaces


(i) Easier to make errors. To make an error in a command-line interface, you
need to type a command and execute it. To make an error in the GUI, all
you need to do is to click the wrong button.

(ii) Built-in limitations. Unlike GUIs, the command-line interface offers
more freedom and flexibility for experienced users, allowing them to
execute complex operations or tweak the system configuration.

5.2 How does a Graphical User Interface work?


GUIs consist of graphical elements that users interact with. The most common
paradigm of GUIs is the windows, icons, menus, and pointer (WIMP). The WIMP
paradigm refers to virtual input devices controlled by a physical pointing device
(a mouse), the content containers (windows), and graphical objects used to initiate
some actions (icons and menus). Most graphical user interfaces follow the
model-view-controller (MVC) pattern. This pattern separates the internal
representation of information (the model) from the manner in which users
receive it (the view). The controller acts as a mediator between the two, as
shown below:

[Figure: The model-view-controller (MVC) pattern in GUI design]


MVC allows for flexible structures, so you can redesign elements without any
changes to the model. The view then becomes almost like a visual skin that
designers can apply to the same business logic of the application.
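
A minimal Python sketch of this separation (the class and method names are
illustrative, not the API of any particular toolkit):

    # Minimal MVC sketch: the view can be swapped without touching the model.
    class CounterModel:                 # model: internal representation
        def __init__(self):
            self.value = 0

    class CounterView:                  # view: how the user receives the state
        def render(self, value):
            print(f"Count: {value}")

    class CounterController:            # controller: mediates between the two
        def __init__(self, model, view):
            self.model, self.view = model, view

        def increment(self):            # a user action arrives here...
            self.model.value += 1       # ...updates the model...
            self.view.render(self.model.value)  # ...and refreshes the view

    controller = CounterController(CounterModel(), CounterView())
    controller.increment()  # e.g. the user clicks "+" -> prints "Count: 1"
    controller.increment()  # -> prints "Count: 2"

Replacing CounterView with a graphical view changes nothing in the model,
which is exactly the "visual skin" idea described above.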

5.3 Design Process of GUI

It’s impossible to think about GUI design in isolation from the product that will
use it. Thus, there are five general stages of the design thinking process
(Empathize, Define, Ideate, Prototype, and Test) that product design teams use.

I. Empathize: Research Users' Needs


The team aims to understand the problem, typically through user research.
Empathy is crucial to design thinking because it allows designers to set aside
their assumptions about the world and gain insight into users and their needs.

II. Define: State Users' Needs and Problems


Once the team accumulates the information, they analyze the observations and
synthesize them to define the core problems. These definitions are
called problem statements. The team may create personas to help keep efforts
human-centered.

III. Ideate: Challenge Assumptions and Create Ideas


With the foundation ready, teams gear up to “think outside the box.” They
brainstorm alternative ways to view the problem and identify innovative solutions
to the problem statement.

IV. Prototype: Start to Create Solutions


This is an experimental phase. The aim is to identify the best possible solution
for each problem. The team produces inexpensive, scaled-down versions of the
product (or specific features found within the product) to investigate the ideas.
This may be as simple as paper prototypes.

V. Test: Try the Solutions Out
The team tests these prototypes with real users to evaluate if they solve the
problem. The test might throw up new insights, based on which the team might
refine the prototype or even go back to the Define stage to revisit the problem.

5.4 Graphical User Interface (GUI) Toolkits


A widget is an element of a graphical user interface (GUI) that displays
information or provides a specific way for a user to interact with the operating
system or an application. Widgets include icons, pull-down menus, buttons,
selection boxes, progress indicators, on-off checkmarks, scroll bars, windows,
window edges (that let you resize the window), toggle buttons, forms, and many
other devices for displaying information and for inviting, accepting, and
responding to user actions. A widget toolkit, widget library, GUI toolkit, or UX
library is a library or a collection of libraries containing a set of graphical
control elements used to construct the graphical user interface of programs.
Most widget toolkits additionally include their own rendering engine. This
engine can be specific to a certain operating system or windowing system, or
contain back-ends to interface with multiple ones and also with rendering APIs
such as OpenGL, OpenVG,
or EGL. The look and feel of the graphical control elements can be hard-coded
or decoupled, allowing the graphical control elements to be themed/skinned.

Some toolkits may be used from other languages by employing language


bindings. Graphical user interface builders such as Glade Interface Designer
facilitate the authoring of GUIs in a WYSIWYG manner, employing a user
interface markup language such as (in this case) GtkBuilder. The GUI of a
program is commonly constructed in a cascading manner, with graphical control
elements added directly on top of one another. Most widget toolkits use event-
driven programming as a model for interaction. The toolkit handles user events,
for example when the user clicks on a button. When an event is detected, it is
passed on to the application where it is dealt with. The design of those toolkits
has been criticized for promoting an oversimplified model of event-action,
leading programmers to create error-prone, difficult to extend and excessively
complex application code. Finite state machines and hierarchical state machines
have been proposed as high-level models to represent the interactive state changes
for reactive programs.
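
As a concrete sketch of this event-action model, the tkinter example below only
supplies handlers; the toolkit's event loop detects the events and dispatches
them to the application (the handler name and bindings are illustrative):

    import tkinter as tk

    root = tk.Tk()
    root.title("Event-driven example")

    # The toolkit watches for user events; the application supplies handlers.
    def on_click(event=None):
        print("Button clicked - event passed on to the application")

    tk.Button(root, text="Click me", command=on_click).pack(padx=20, pady=20)

    # Handlers can also be attached to lower-level events such as key presses.
    root.bind("<Return>", on_click)

    root.mainloop()  # the event loop dispatches events to the handlers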

Types of Toolkits (Widgets)


There are, in general, four types of widgets:
 Information widgets
 Collection widgets
 Control widgets
 Hybrid widgets

6.0 Human-Centred Software Evaluation and Development


The term user-centered evaluation refers to evaluating the utility and value of
software to the intended end-users. It is defined as an empirical evaluation
obtained by assessing user performance and user attitudes toward a system, by
gathering subjective user feedback on effectiveness and satisfaction, quality of
work, support and training costs, or user health and well-being. While usability
(ease of use of software) is certainly a necessary condition, it is not sufficient.
User-centered evaluation is understood as evaluation conducted with methods
suited to the framework of user-centered design. Within that framework, it is
typically focused on evaluating the usability of the system, possibly along with
additional evaluations of the users' subjective experiences, including factors
like enjoyment and trust.
User-centered evaluation may also focus on the user experience of a company or
service through all channels of communication.

The goal of human centred software development is to produce software products
that are designed and developed around the users’ needs and requirements from
the very beginning of the development process. “Human-centred design is a
creative approach to interactive systems development that aims to make systems
usable and useful by focusing on the users, designing around their needs and
requirements at all stages, and by applying human factors/ergonomics, usability
knowledge, and techniques. This approach enhances effectiveness and efficiency,
improves human well-being, user satisfaction, accessibility and sustainability;
and counteracts possible adverse effects of use on human health, safety and
performance.”

Human-Centered Software Development


Creating software is a human-centered endeavour. The software that we build is
used directly by human users. Whether we’re building a mobile application,
service, or tool, the cumulative moments of delight (or abhorrence) that our
human users experience determine the efficacy of the software we build. For this
reason, software organizations invest in user experience design, in addition to the
nuts-and-bolts of the technology under the hood. Product designers leverage their
expertise in interaction psychology to create user flows that are useful, familiar,
and discoverable. Designers understand that application interfaces are only as
effective as their users find them to be useful. Therefore, the best user experiences
incorporate visual components that facilitate task completion while working with
the psychological heuristics that humans intuitively employ. Humans seek
sensory cues and constraints to navigate a deluge of stimuli and process large
amounts of information, and we do so while interacting with computing interfaces
as well as the real world.

In essence, software organizations are constantly trying to optimize how humans
interact with computers. Today, the practice of using design to hone user
experience is predominantly applied to visual interfaces. These are commonly the
front ends of applications, where humans interact with the visually rendered
software (in a browser or mobile app). What about the act of software
development itself?
