Temporal Contiguity
Sensemaking


Principle Definition
Synchronize complementary audio and visual elements to help learners form a connection between the two stimuli. When this principle is applied, the learner is more likely to build mental connections between verbal and visual representations.
Principle in Action
Word to World
Word to World is a voice-based digital storytelling experience. It applies the temporal contiguity principle by synchronizing audio explanations with the corresponding visual representations on the screen. For example, as learners explore the word tyrannosaurus, the app displays the animal while simultaneously playing the word’s pronunciation and the sound of a tyrannosaurus. This setup minimizes the cognitive effort needed to connect the audio and visual elements, helping learners hold both in working memory and build stronger associations between words and their meanings.


As learners explore the word tyrannosaurus, the app displays the animal while simultaneously playing the word’s pronunciation and the sound of a tyrannosaurus.
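The synchronization described above can be sketched in code as a cue-pairing check. This is a minimal illustration of the design intent, not Word to World’s actual implementation; the `Cue` structure, the tolerance value, and the asset names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    kind: str     # "visual" or "audio"
    start: float  # seconds into the experience
    asset: str    # hypothetical asset file name

def synchronized_pairs(visuals, audios, tolerance=0.1):
    """Pair each visual cue with every audio cue that starts within
    `tolerance` seconds of it -- i.e., the cues a learner would
    perceive as simultaneous under temporal contiguity."""
    return [(v.asset, a.asset)
            for v in visuals
            for a in audios
            if abs(v.start - a.start) <= tolerance]

# Hypothetical cues for the tyrannosaurus example: the image appears
# as the pronunciation and the animal sound begin to play.
visuals = [Cue("visual", 0.0, "tyrannosaurus.png")]
audios = [Cue("audio", 0.05, "pronunciation.mp3"),
          Cue("audio", 0.05, "roar.mp3")]
print(synchronized_pairs(visuals, audios))
```

A real app would drive both media channels from one shared clock so the cues cannot drift apart; the pairing check above only expresses the timing relationship the principle asks for.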
Discuss in your team
In the learning experience you are designing, are there situations where learners need to understand a concept through both visual and audio elements?
If so, are the audio and visual elements currently synchronized in timing?
If not, what challenges or limitations prevent synchronization, and what are some possible solutions?


