Cognitive Aspects

23 07 2022

Interaction Design

Introduction

Imagine that you are writing a report that is due the next morning and keep being interrupted by messages, phone calls, and social media notifications. Your attention is diverted without you noticing, and when you realise how much time has passed, you panic about the report.

It has become increasingly common for people to switch their attention among multiple tasks. The study of human cognition can help us understand the impact of multitasking and other digital behaviours. This chapter covers these topics by examining the cognitive aspects of interaction design and then describing relevant cognitive theories that inform the design of technologies.

What is Cognition

There are many kinds of cognition, e.g., thinking, remembering, learning…

One way to distinguish between them is in terms of whether cognition is experiential or reflective.

  • Experiential cognition is a state of mind where people perceive, act, and react to events around them intuitively and effortlessly, e.g., driving a car, reading a book and watching a film.
  • Reflective cognition involves mental effort, attention, judgement, and decision-making, which can lead to new ideas and creativity, e.g., designing, learning, and writing a report.

Another way to distinguish them is in terms of fast and slow thinking.

  • Fast thinking is similar to Don Norman’s experiential mode insofar as it is instinctive, reflexive, and effortless, with no sense of voluntary effort, e.g., when asked 2 + 2, most adults can answer without thinking.
  • Slow thinking takes more time, is considered more logical and demanding, and requires concentration, e.g., when asked 21 × 19, most people need deliberate mental effort, as the worked example below shows.
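
One deliberate, slow-thinking way to work out the harder product is to decompose it into easier steps, for instance:

$$21 \times 19 = 21 \times (20 - 1) = 420 - 21 = 399$$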

Cognition can also be described in terms of context, tools, and interfaces: depending on when, where, and how it happens, cognition can be distributed, situated, extended, or embodied. It has also been described in terms of specific kinds of processes, including attention, perception, memory, learning, language processing (reading, speaking, and listening), and higher-level processes (problem-solving, planning, reasoning, and decision-making).

Many of these cognitive processes are interdependent: several may be involved in a single activity, and it is rare for one to occur in isolation. For example, reading a book requires attention, perception and recognition of the text, and making sense of the sentences read. Details of these processes are described below; for interaction design, attention and memory are the most relevant.

Attention

Attention is central to everyday life. It involves selecting things on which to concentrate, allowing us to focus on relevant information. How easy or difficult this is depends on (1) whether someone has a clear goal and (2) whether the information they need is salient in the environment.

Clear Goal

If someone knows exactly what they want to find out, they try to match this with the available information.

For example, when someone has just landed at an airport with no onboard WiFi, and they want to know who won the World Cup, they may scan the headlines on their phone or look at breaking news on a public TV display.

When someone is not sure exactly what they want, they might browse through the information, allowing it to guide their attention to interesting or salient items.

For example, someone with only a vague idea of what to eat in a restaurant will peruse the menu, letting their attention be drawn to the imaginative descriptions of various dishes. After that, other factors may also be considered, e.g., cost and recommendations.

Information Presentation

The way information is displayed can also affect how easy or difficult it is to comprehend the appropriate pieces of information.

[Figure: the same holiday accommodation details (place, type of accommodation, phone number, and rates) shown (a) grouped into spaced columns and (b) bunched together]

The primary reason for the disparity is the way that the characters are grouped in the display. In (a), they are grouped into vertical categories of information (that is, place, type of accommodation, phone number, and rates), and this screen has space in between the columns of information. In (b), the information is bunched together, making it much more difficult to search.
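
A quick way to feel the difference is to print the same records both ways. Here is a minimal Python sketch; the records and column widths are made up for illustration:

```python
records = [
    ("Paris",  "Hotel",     "+33 1 40 00 00 00", "€150"),
    ("Rome",   "Hostel",    "+39 06 000 0000",   "€40"),
    ("Berlin", "Apartment", "+49 30 000000",     "€95"),
]

# (b) bunched together: hard to scan for, say, the rates
for place, kind, phone, rate in records:
    print(place, kind, phone, rate)

print()

# (a) grouped into aligned columns with space between them: easy to scan
for place, kind, phone, rate in records:
    print(f"{place:<10}{kind:<12}{phone:<20}{rate:>6}")
```

The content is identical in both printouts; only the grouping and spacing change, and with them the ease of searching.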

Multitasking and Attention

Many people now multitask, frequently switching their attention among different tasks, e.g., watching TV, using a smartphone, or reading…

One study on the effects of multitasking on memory and attention shows that the detrimental effect of multitasking depends on the nature of the tasks and how much attention each demands. For example, listening to gentle music while working can help people focus on what they are doing by tuning out background noise, but loud music, such as heavy metal, can be distracting.

Another study suggests that individual differences also play a role: heavy media multitaskers (those who frequently switch among many media) are more prone to distraction and find it harder to filter out irrelevant information, whereas infrequent multitaskers are better at allocating their attention when facing competing distractions.

However, more recent research shows that the source of distraction matters: if the distracting source is relevant to the task at hand, heavy multitaskers can put it to good use, even though they are easily distracted.

In summary, multitasking can be good or bad; it depends on the distracting source and its relevance to the task.

Multitasking was long thought to be detrimental to human performance because it overloads people’s capacity to focus, requiring time and effort to switch between tasks. It can also result in people losing their train of thought, making errors, and needing to start over. Nevertheless, many people are expected to multitask, e.g., clinicians checking information on different displays.

In general, multitasking can be detrimental to performance, and the cost of switching attention varies across people and resources; technologies should therefore be designed to help people switch their attention easily in their work settings.

Perception

Perception refers to how information is acquired from the environment via the five senses (vision, hearing, taste, smell, and touch) and transformed into experiences of objects, events, sounds, and tastes. There is also the additional sense of kinaesthesia. Vision is the dominant sense for sighted individuals, followed by hearing and touch, so for interaction design it is important to present information in ways that can be readily perceived.

Grouping items together and leaving space (blank space, or white space) between them can aid attention, as it breaks up the information. A study also shows that using a border around grouped items is more effective for helping people locate them than using contrasting colours.

[Figure: the same information grouped using a border versus using contrasting colours]

Memory

Memory involves recalling various kinds of knowledge that allow people to act appropriately, e.g., recognising someone’s face, their name, or the time you last met. It is not possible to remember everything, since the brain would become overloaded, so a filtering process decides what to process further and memorise. This filtering process can also introduce problems, e.g., we forget things that we would like to remember and, conversely, remember things we would like to forget.

How does the filtering process work? Initially, encoding takes place, determining which information is paid attention to and how it is interpreted. The more attention that is paid to something, and the more it is processed in terms of thinking about it and comparing it with other knowledge, the more likely it is to be remembered. For example, when learning a new topic, it is better to reflect on it, carry out exercises, and discuss it with others.

Another factor is the context in which the information is encoded. It can be hard to recall information that was encoded in a different context from the present one. For example, you may not readily recognise a neighbour you are used to seeing only in the hallway of your apartment building when you encounter them on a train, even though you recognise them easily in that hallway.

Another well-known phenomenon is that people tend to be better at recognising things than at recalling them, and certain kinds of information are easier to recognise than others. For example, people are good at recognising thousands of pictures even if they have seen them only briefly before, but they are not good at remembering details about, say, the photos they took at a museum. People seem to remember fewer details of objects they have photographed than of objects they have observed with the naked eye, because they focus more on framing the photo and less on the details of the object itself, so less information about the object is processed than when they are actually looking at it.

Increasingly, people rely on the Internet to reduce the need to remember the information itself, e.g., using Facebook to remember people’s birthdays.

Personal Information Management

The number of documents written, images created, URLs bookmarked, music files downloaded, and so on increases every day. The practice of storing and organising these files on a phone or computer is called personal information management (PIM).

The design challenge is deciding on the best way of helping users organise their content so that it can be easily accessed. This is difficult when the amount of content is large: it is frustrating when an item cannot be located easily and the user has to spend a long time opening folder after folder because they remember neither where the file is nor what it is called.

One model, proposed by Ofer Bergman and Steve Whittaker, helps people manage their ‘digital stuff’ based on curation, which involves three interdependent processes:

  • How to decide what personal information to keep?
  • How to organise that information when storing it?
  • Which strategies to use to retrieve it later?

The first stage can be assisted by the system being used; for example, emails and photos are stored by default, and users only have to decide where to file the information in folders or whether to delete it. In contrast, when browsing the web, they have to consciously decide whether a site is worth bookmarking for revisiting.

A number of ways of adding metadata to documents have been developed to help with this, e.g., time stamping, colour coding, and categorisation, though many people still prefer the old-fashioned way (folders); one reason is that the folder provides a powerful metaphor. It has also been found that there is a strong preference for scanning across and within folders rather than typing a term into a search engine, partly because people forget file names, and recalling them requires more cognitive effort than navigating through folders.

To help with searching, tools have been developed that let users type a partial name, or even just the first letter, of a file and search for it across the entire system, e.g., Apple’s Spotlight.
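
As a rough illustration of the idea only (not how Spotlight itself works; Spotlight queries a pre-built metadata index rather than walking the file system), here is a minimal Python sketch of recall-free, partial-name file search; find_files is a hypothetical helper:

```python
from pathlib import Path

def find_files(root: str, partial_name: str) -> list[Path]:
    """Return files under `root` whose names contain `partial_name`
    (case-insensitive), so users need only a fragment, not the full name."""
    needle = partial_name.lower()
    return [p for p in Path(root).rglob("*")
            if p.is_file() and needle in p.name.lower()]

# e.g., find_files("/Users/me/Documents", "rep") could match "report-draft.docx"
```

The point is that recognition does the rest: the user scans the short result list instead of having to recall an exact name and location.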

Memory Load and Passwords

Phone, online, and mobile banking allow customers to carry out financial transactions, but one problem confronting the banks that provide these services is how to manage security. One solution is to ask the user for several pieces of information before granting access to their accounts, which is called multi-factor authentication (MFA), e.g., a ZIP code, their mother’s maiden name, birthplace, last school attended, or even a memorable address or memorable date. People can remember familiar information easily, e.g., the first few items, but the last two answers are relatively difficult to come up with and recall readily.

Sometimes a password may be requested; however, to avoid the password being overheard or seen by someone in the vicinity, the customer is usually asked to provide specific letters or numbers from it, e.g., the 7th character of the password. Such an answer cannot be recalled directly; it takes time to count along the characters. To make things harder, banks may randomise which positions they ask for, to prevent anyone from learning a fixed sequence, which also means the customer has to generate different information each time.
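
A minimal sketch of how such a partial-password challenge might be checked, assuming a toy server-side model (illustrative only; a real system would compare against securely stored secrets, not a plaintext password, and make_challenge/verify are hypothetical helpers):

```python
import secrets

def make_challenge(password_length: int, k: int = 3) -> list[int]:
    """Pick k random 1-based character positions to ask the customer for."""
    return sorted(secrets.SystemRandom().sample(range(1, password_length + 1), k))

def verify(password: str, positions: list[int], answers: list[str]) -> bool:
    """Check the customer's answers against the requested positions."""
    return all(password[pos - 1] == ans for pos, ans in zip(positions, answers))

pwd = "correcthorse"
ask = make_challenge(len(pwd))  # e.g., [2, 7, 11] -- different every time
print(verify(pwd, ask, [pwd[i - 1] for i in ask]))  # True
```

Indexing into a memorised string is exactly the counting-along cost described above, which is why this scheme resists eavesdroppers but is heavy on memory.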

The requirement to memorise and recall such information places a big load on memory; some people find it nerve-racking and are prone to forgetting, so they write the details down on paper, which makes it easier to read off the information than to recall it from memory but also makes them vulnerable. One potential solution relies on computer vision and biometrics, using Face ID and Touch ID to enable password-free mobile banking. Other solutions, like SenseCam from Microsoft Research Cambridge, use a wearable camera to help people with memory loss.

Learning

Learning is closely connected with memory: it involves the accumulation of skills and knowledge that would be impossible without memory, and people would not be able to remember things unless they had learned them. Learning can be incidental or intentional. Incidental learning occurs without any intention to learn, e.g., recognising streets. Intentional learning is goal-directed, with the goal of being able to remember something, e.g., studying for an exam. It is much harder and requires much conscious effort, so software developers cannot assume that users will simply learn how to use a product.

It is well known that people find it hard to learn by reading instructions; instead, they prefer to learn through doing. GUIs and direct manipulation are good environments for supporting this active learning, as they provide exploratory interaction and allow actions to be undone. Numerous attempts have been made to harness the capabilities of different technologies to support intentional learning, including online learning, multimedia, and VR; e.g., multimedia and AR have been developed to teach abstract concepts, like mathematical formulae. People can also learn effectively when collaborating with others, and novel technologies have been designed to support sharing and working on the same documents. How to enhance learning is covered in the next chapter.

Reading, Speaking, and Listening

Reading, speaking, and listening are three forms of language processing that have some similar and some different properties. One similarity is that the meaning of sentences remains the same regardless of the mode in which it is conveyed. e.g., the sentence “Computers are a wonderful invention.” has the same meaning whether one reads it, speaks it, or hears it. The differences between the three modes are:

  • Written language is permanent while spoken language is transient. It is possible to reread information that was not understood the first time around; this is not possible with spoken information unless it is recorded
  • Reading can be quicker than speaking or listening, as written text can be rapidly scanned
  • Listening requires less cognitive effort than reading or speaking. e.g., children prefer to listen to narratives rather than read. The popularity of audiobooks suggests adults also enjoy listening to novels, and so forth.
  • Written language tends to be grammatical, while spoken language is often ungrammatical. For example, people often start talking and stop in midsentence, letting someone else start speaking.
  • Dyslexics have difficulties understanding and recognising written words, making it hard for them to write grammatical sentences and spell correctly.

Many applications have been developed to help people who have difficulties with one mode by capitalising on the others, for example:

  • Interactive books and apps for reading or learning foreign languages
  • Speech-recognition systems that allow people to use spoken commands
  • Speech-output systems that use artificially generated speech
  • Natural-language interfaces that enable people to type in questions and get written responses
  • Interactive apps designed to help people who find it hard to read, write, or speak, and customised input and output devices that allow people with disabilities to access content
  • Tactile interfaces that allow visually impaired people to read

Problem-Solving, Planning, Reasoning, and Decision-Making

Problem-solving, planning, reasoning, and decision-making are processes involving reflective cognition. They include thinking about what to do, what the available options are, and what the potential consequences might be. They often involve conscious processes (being aware of what one is thinking about), discussion with others, and the use of artefacts (e.g., books and maps). Reasoning involves working through different scenarios and deciding on the best solution, e.g., deciding where to go on vacation by weighing the pros and cons of different options (e.g., location, cost, accommodation…).

Regarding how people make decisions when confronted with information overload, classical rational theories of decision-making posit that deciding is computationally and informationally costly because it involves exhaustively comparing options and making trade-offs. However, cognitive psychology shows that people tend to use simple heuristics when making decisions. One explanation is that people have evolved to act quickly, making good-enough decisions using fast and frugal heuristics: they ignore most of the available information and rely on a few important cues, e.g., shoppers buy the brands that they recognise, that are low-priced, or that have attractive packaging. This suggests that an effective design strategy is to make key information about a product salient, but what is truly salient varies from person to person. Thus, instead of providing ever more information, a better strategy is to provide just enough information, e.g., using AR and wearable technologies with glanceable displays that present key information in an easy-to-digest form. A sketch of a fast and frugal heuristic follows.
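
To make the contrast concrete, here is a minimal Python sketch of a fast and frugal choice rule in the style of ‘take the best’: walk the cues in order of validity and let the first discriminating cue decide, ignoring everything else. The cue names, their ordering, and the shelf data are invented for illustration:

```python
# Cues ordered from most to least predictive (an assumed ordering).
CUES = ["recognised_brand", "low_price", "attractive_packaging"]

def take_the_best(options: dict[str, dict[str, bool]]) -> str:
    """Pick an option using the first cue that discriminates,
    without weighing (or even looking at) the remaining cues."""
    candidates = list(options)
    for cue in CUES:
        passing = [name for name in candidates if options[name][cue]]
        if 0 < len(passing) < len(candidates):
            candidates = passing  # this cue discriminates: keep only winners
        if len(candidates) == 1:
            break
    return candidates[0]

shelf = {
    "BrandA": {"recognised_brand": True,  "low_price": False, "attractive_packaging": False},
    "BrandB": {"recognised_brand": False, "low_price": True,  "attractive_packaging": True},
}
print(take_the_best(shelf))  # BrandA: recognition alone settles the choice
```

An exhaustive rational chooser would score every option on every cue; the heuristic gets a good-enough answer after examining a single cue, which is the trade-off described above.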

Design Implications

Attention

  • Consider the context, and make information salient when it requires attention
  • Use techniques to achieve this, e.g., animated graphics, colour, ordering, underlining, spacing…
  • Avoid cluttering visual interfaces with too much information. e.g., using lots of colours, graphics
  • Consider designing different ways of supporting effective switching between, and returning to, a particular interface, e.g., a light that gradually gets brighter, voice or sound cues…

Perception

Representations of information need to be designed to be perceptible and recognisable across different media.

  • Design icons and other graphical representations so that users can readily distinguish between them
  • Obvious separators and white space are effective for grouping information, making it easy to perceive and locate items
  • Design audio sounds to be readily distinguishable from one another
  • Choose proper colour contrasts, especially for text
  • Haptic feedback should be used judiciously, and the different kinds of haptics should be easily distinguishable. Overuse of haptics can be confusing; Apple suggests providing haptic feedback in response to user-initiated actions, e.g., unlocking a Mac using an Apple Watch

Memory

  • Reduce cognitive load by avoiding long and complicated procedures for carrying out tasks
  • Design interfaces that promote recognition rather than recall by using familiar interaction patterns, menus, icons, objects…
  • Provide a variety of ways of labelling digital information to easily find items again through folders, categories, colours, tagging, icons…

Learning

  • Design interfaces that encourage exploration
  • Design interfaces that constrain and guide users to select appropriate actions when initially learning

Reading, Speaking, and Listening

  • Keep the length of speech-based menus and instructions to a minimum. Research shows that people find it difficult to follow spoken menus with more than three or four options
  • Accentuate the intonation of artificially generated speech voices, as they are harder to understand than human voices
  • Provide opportunities for making text large on a screen without affecting the formatting

Problem-Solving, Planning, Reasoning, and Decision-Making

  • Provide information and help pages that are easy to access for people who want to understand more about how to carry out an activity more effectively, e.g., web searching
  • Use simple and memorable functions to support rapid decision-making and planning, and enable users to set or save their preferences

Cognitive Frameworks

A number of conceptual frameworks have been developed to explain and predict user behaviour based on theories of cognition. This section outlines three that focus primarily on mental processes and three others that explain how humans interact with and use technologies in context.

Mental Models

People draw on mental models when they need to reason about a technology, especially when something unfamiliar or unexpected happens. The more someone learns about a product, the more their mental model of it develops. In cognitive psychology, mental models are internal constructions of some aspect of the external world that can be manipulated mentally, enabling predictions and inferences to be made. The process involves fleshing out and running a mental model, and it can involve both unconscious and conscious mental processes, in which images and analogies are activated.

Consider the following two scenarios: (1) Your house is centrally heated and does not have a smart thermostat that can be controlled remotely. When you want to heat the house as quickly as possible, do you set the thermostat to the desired temperature or as high as possible? (2) When you want to preheat an electric oven for a pizza, do you set the temperature to the specified one or as high as possible?

For the first scenario, most people set the thermostat as high as possible, because they believe this will heat the house more quickly, though in fact it makes no difference. For the second scenario, most people set the oven to the specified temperature, but some set it higher, which again has no effect on how quickly it heats.

So why do people use erroneous mental models? It seems that people apply a mental model based on a general valve theory, i.e., more is more (as with faucets), but this does not work for thermostats, which function like an on-off switch. What seems to happen is that people develop a core set of abstractions about how things work and apply these to a whole range of devices, regardless of their appropriateness.

Using incorrect mental models is common; e.g., many people press the lift call button at least twice because they think this ensures the lift will arrive, or arrive faster. Many people’s understanding of how technologies and services work is poor, e.g., the Internet, search engines, the cloud; their mental models are incomplete, easily confused, and based on inappropriate analogies and superstition. Consequently, they find it difficult to identify, describe, or solve problems, and they lack the words or concepts to explain what is happening.

How can UX designers help people develop better mental models? A major obstacle is that people are resistant to spending time learning about how things work, especially when it involves reading manuals or documentation. An alternative is to design technologies to be more transparent, making it easier to understand how they work and what to do when they don’t. This includes:

  • Clear and easy-to-follow instructions
  • Appropriate online help, tutorials, and context-sensitive guidance in the form of videos, chat windows…
  • Accessible background information that enables users to understand how something works and how to make the most of its functionality
  • Affordances of what actions an interface allows, e.g., swiping, clicking…

The concept of transparency often refers to making interfaces intuitive, so that people can simply understand how to carry out their tasks.

Gulfs of Execution and Evaluation

The gulf of execution and the gulf of evaluation describe the gaps that exist between the user and the interface (Norman; Hutchins).

  • The gulf of execution: the distance from the user to the physical system
  • The gulf of evaluation: the distance from the physical system to the user

It suggests that designers and users need to concern themselves with how to bridge the gulfs to reduce the cognitive effort required to perform a task. This can be achieved, on the one hand, by designing user interfaces that match the psychological characteristics of the user, e.g., taking account of memory limitations, and, on the other hand, by the user learning to create goals, plans, and action sequences that fit with how the interface works.

[Figure: bridging the gulfs of execution and evaluation between the user and the physical system]

This framework is still considered useful today, as it can help designers think about whether their design increases or decreases cognitive load and whether it is obvious which steps to take for a given task. For example, Kathryn Whitenton describes how the gulfs prevented her from understanding why she could not get a Bluetooth device to connect to her computer, due to the inconsistency between the labels of two similar-looking switches: one showing the current status of the interaction (Off) and the other showing what would happen if it were engaged (Add Bluetooth Or Other Device).

More details:

The Two UX Gulfs: Evaluation and Execution

Information Processing

One prevalent metaphor from cognitive psychology is the idea that the mind is an information processor: information is thought to enter and exit the mind through a series of ordered processing stages.

[Figure: the human mind modelled as an information processor, with information flowing through a series of ordered processing stages]

The information processing model provides a basis from which to make predictions about human performance. Hypotheses can be made about how long someone will take to perceive and respond to a stimulus (reaction time) and about what bottlenecks occur if a person is overloaded with too much information. One of the first HCI models derived from information processing theory was the human processor model, which modelled the cognitive processes of a user interacting with a computer. The model predicts which cognitive processes are involved during an interaction, making it possible to estimate task completion times; a sketch of this style of prediction follows. It is still an HCI classic and was found to be a useful tool for comparing different word processors across a range of tasks, but it is now more common to study cognitive activities in the context where they occur, analysing cognition as it happens in the wild. A central goal of this work was to find out how structures in the environment can both aid human cognition and reduce cognitive load. Three such external approaches are described below: distributed, external, and embodied cognition.
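
As a flavour of this style of prediction, here is a minimal Python sketch in the spirit of the keystroke-level model, a simplified member of the same family as the human processor model. The operator times are the commonly cited approximate values from Card, Moran, and Newell; the task breakdown is invented for illustration:

```python
# Approximate standard operator times in seconds (Card, Moran, and Newell).
OPERATORS = {
    "K": 0.28,  # press a key or button (typical typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next action
}

def completion_time(sequence: str) -> float:
    """Estimate task time by summing the times of its operators."""
    return sum(OPERATORS[op] for op in sequence)

# Deleting a file via a menu: think, reach for the mouse, point at the
# file, click, think again, point at the Delete item, click.
print(f"{completion_time('MHPKMPK'):.2f} s")  # 5.86 s
```

Comparing such estimates for the same task on two interfaces is how this family of models was used to compare word processors.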

Distributed Cognition

Most cognitive activities involve people interacting with external kinds of representations, such as books, computers, and even each other. For example, when people travel home from somewhere, they do not need to remember the details of the route, because they rely on cues in the environment, e.g., turn left at the red house. People are always creating external representations, not only to reduce memory load and the cognitive cost of computation but also to extend what they can do and to allow them to think more powerfully.

The distributed cognition approach was developed to study the nature of cognitive phenomena across individuals, artefacts, and internal and external representations. It involves describing a cognitive system that entails interactions among people, the artefacts they use, and the environment in which they are working. For example, consider an airline cockpit, where the top-level goal is to fly the plane. This involves:

  • The pilot, captain, and air traffic controller interacting with one another
  • The pilot and captain interacting with the instruments in the cockpit
  • The pilot and captain interacting with the environment in which the plane is flying (that is, the sky, runway, and so on)

A primary objective of the distributed cognition approach is to describe how information is represented and re-represented as it moves across individuals and through the artefacts used during activities. These transformations of information are referred to as changes in representational state. Unlike the information processing model, this way of describing and analysing a cognitive system focuses on what happens across the system rather than on what happens inside an individual’s head. Such an analysis can be used to derive design recommendations, suggesting how to change or redesign some aspect of the cognitive system. It can draw attention to the importance of any new design maintaining shared awareness and information flow in the system so that individuals are kept aware of changes. It is also the basis for the DiCoT analytic framework for healthcare settings; see Chapter 9.

External Cognition

People interact with information by using a variety of external representations, including books, multimedia, and web pages; tools such as calculators and pens have also been developed to aid cognition. The combination of external representations and physical tools has greatly extended and supported people’s ability to carry out cognitive activities.

External cognition is concerned with explaining the cognitive processes involved when we interact with different external representations, such as graphical images, multimedia, and VR. A main goal is to explain the cognitive benefits of using different representations for different cognitive activities, including:

  • Externalising to reduce memory load
  • Computational offloading
  • Annotating and cognitive tracing

Externalising to Reduce Memory Load

Common external representations, e.g., diaries, calendars, and reminders, are used to reduce the load of memorising hard-to-remember things, e.g., birthdays and addresses. Notes, e.g., sticky notes and shopping lists, are another frequently used kind of external representation, and where they are placed in the environment can be crucial: people intentionally put notes in prominent positions, e.g., on the wall or on the side of the computer, to make sure they are reminded of what needs to be done. Externalisation, therefore, lets people trust that they will be reminded without having to remember for themselves, reducing the memory burden in the following ways:

  • Reminding them to do something (e.g., remember a birthday)
  • Reminding them of what to do (e.g., buy a card)
  • Reminding them of when to do something (e.g., send it in time)

This is an area where technology can obviously help, and many apps already do so, e.g., to-do and alarm-based lists. These can also help to improve people’s time management and work-life balance.

Computational Offloading

Computational offloading occurs when we use a tool or device in conjunction with an external representation to help carry out a computation, e.g., using pen and paper to solve a maths problem. Try computing 21 × 19, and then try XXI × XIX: the latter is much harder unless you are an expert in using Roman numerals, so the choice of external representation matters. The kind of tool used can also change the nature of the task, as the sketch below illustrates.
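
To illustrate offloading the Roman-numeral version onto a tool entirely, here is a minimal Python sketch that converts the numerals to integers and lets the machine do the arithmetic:

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral to an integer using the subtractive rule."""
    total = 0
    for symbol, next_symbol in zip(numeral, numeral[1:] + " "):
        value = ROMAN[symbol]
        # A smaller value before a larger one (e.g., the I in XIX) is subtracted.
        total += -value if ROMAN.get(next_symbol, 0) > value else value
    return total

print(roman_to_int("XXI") * roman_to_int("XIX"))  # 21 * 19 = 399
```

With the representation changed and the computation offloaded, the ‘hard’ version of the task disappears entirely.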

Annotating and Cognitive Tracing

Another way to externalise cognition is to modify an external representation to reflect changes we want to mark, e.g., crossing things off a to-do list to indicate completion. The two types of modification are called annotating and cognitive tracing:

  • Annotating involves modifying external representations, e.g., crossing off items
  • Cognitive tracing involves externally manipulating items into different orders or structures

An example of annotation is crossing items off a shopping list. People may find it hard to remember what to buy when looking at a shelf or fridge, so they externalise it as a written list, which may also remind them of other things to buy. Many digital annotation tools allow people to use pens, styluses, or fingers to annotate documents, which can be stored and revisited later.

Cognitive tracing is useful in situations where the current state of play is in a state of flux and the person is trying to optimise their position. Typical examples include:

  • In a card game, continually rearranging cards into suits, into ascending order, or into groups of the same number to decide what to keep.
  • In Scrabble, where shuffling letters around in the tray helps a person work out the best word given the set of letters.

Cognitive tracing has also been built into interactive functions, e.g., highlighting all of the nodes a student has revisited or the exercises they have completed, to help them keep track of what they have studied in online learning; a minimal sketch follows.
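
A minimal Python sketch of this kind of interactive trace, assuming a toy online course; the lesson names and rendering are invented for illustration:

```python
LESSONS = ["intro", "attention", "memory", "learning", "frameworks"]

class ProgressTrace:
    """Record which lessons a learner has visited and render the trace."""

    def __init__(self) -> None:
        self.visited: set[str] = set()

    def visit(self, lesson: str) -> None:
        self.visited.add(lesson)

    def render(self) -> str:
        # Brackets act as the 'highlight', externalising progress at a glance.
        return "  ".join(f"[{name}]" if name in self.visited else name
                         for name in LESSONS)

trace = ProgressTrace()
trace.visit("intro")
trace.visit("memory")
print(trace.render())  # [intro]  attention  [memory]  learning  frameworks
```

The interface, not the learner’s memory, carries the record of what has been covered.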

A general external cognition approach for interaction design is to provide external representations at the interface that reduce memory load, support activity, and facilitate computational offloading. Different kinds of information visualisation can be developed to reduce the effort required to make inferences about a given topic, e.g., financial forecasting or identifying programming bugs. In doing so, they can extend or amplify cognition, allowing people to perceive and do things they otherwise could not. For example, in Chapter 10, information visualisations are used to represent big data in ways that make it easier to make cross-comparisons across dimensions and to identify patterns and anomalies. Another example is a pop-up window that guides users through an interaction, especially when many options are available; this reduces memory load and frees up cognitive capacity, enabling people to complete their desired tasks.

Embodied Interaction

Embodied interaction means practical engagement with the social and physical environment. It involves creating, manipulating, and making meaning through our engaged interaction with physical things, including mundane objects such as cups and phones. Artefacts and technologies that indicate how they are coupled to the world make it clear how they should be used; e.g., a book left open on a desk can remind its owner to finish reading it.

Eva Hornecker et al. further explain embodied interaction in terms of how our bodies and active experiences shape how we perceive, feel, and think. The experience of moving through and manipulating the world from birth onwards is what enables us to develop a sense of the world at both concrete and abstract levels, and to learn how to think and talk using abstract concepts.

Within HCI, embodied interaction refers to how the body mediates our various interactions with technology, including our emotional interactions. It helps researchers uncover problems that arise from existing technologies and also informs new technology designs.

David Kirsh suggests that a theory of embodiment can provide HCI practitioners and theorists with new ideas and better designs by explaining how interacting with tools changes the way people think about and perceive their environments. For example, dancers often partially model a dance (known as marking) through abbreviated moves and small gestures, rather than performing the full routine. The reason for doing this is not to save energy or avoid exhaustion, but to allow them to review and explore particular aspects of a movement without the full mental complexity. This suggests that people may be taught better through a process like marking, where learners create models of things or use their bodies to act ideas out; e.g., rather than developing a full-fledged virtual environment for learning golf, it might be better to teach sets of abbreviated actions using AR, as a form of embodied marking.

Summary

This chapter explained the importance of understanding the cognitive aspects of interaction. It described relevant findings and theories about how people carry out their everyday activities and how to learn from these to help in designing interactive products. It provided illustrations of what happens when you design systems with the user in mind and what happens when you don’t. It also presented a number of conceptual frameworks that allow ideas about cognition to be generalised across different situations.

Key points

  • Cognition comprises many processes, including thinking, attention, memory, perception, learning, decision-making, planning, reading, speaking, and listening.
  • The way in which an interface is designed can greatly affect how well people can perceive, attend, learn, and remember how to carry out their tasks.
  • The main benefits of conceptual frameworks based on theories of cognition are that they can explain user interaction, inform design, and predict user performance.