Basic Principles Of Multimedia Learning (EDCI 570/71 Assignment) – Sean, Jeremy, Clay

In the introductory article, multimedia is defined as presenting both words (spoken or printed text) and pictures (illustrations, photos, or animation). Multimedia teaching presents words and pictures together so that learners build mental representations from both, promoting a deeper understanding of concepts than words alone can provide. There are over 20 principles of multimedia instructional design.


Split-Attention Principle

The split-attention effect occurs when the learner must attend to two or more sources of instructional information simultaneously to understand the material. This adds to the learner's cognitive load, which slows the learning process. For example, a worksheet with instructions on one side of the page and a diagram on the other requires the learner to read the instructions, hold that information in memory, scan the diagram while still holding it, and then attempt to apply the instructions to the diagram. If the concept is too complicated for the learner, their attempt at understanding may be slowed or halted.

Integrating separate instructional materials into a single form lessens cognitive load, and learning and performance generally increase. To build on the previous example, an educator could break the instructions into parts and place each one directly on the diagram where it applies. The learner can then focus in one place and see exactly where each piece of information is used.

Some surprising research into the split-attention principle by Sweller and Chandler (1994) and Chandler and Sweller (1996) suggested that students learning to use a computer program achieved better outcomes when they initially studied integrated materials alone, without a computer, than students who learned from both computer-based information and a manual. This sounds counterintuitive at first, but it demonstrates the effect split-attention instruction can have on a learner.


Modality Principle 

The modality effect, or principle, occurs when learning improves through a mixed-mode presentation (partly visual and partly auditory) rather than a single mode of presentation. Splitting information across the visual and auditory pathways balances the load between them and avoids creating an extraneous cognitive load.

  • According to Mousavi, Low, and Sweller (1995), the modality principle works under the same conditions as the split-attention principle.
  • It is essential that the auditory information is necessary for understanding the visual information and is not redundant (Kalyuga, Chandler, and Sweller, 2000).
  • Like the split-attention effect, the modality effect depends on a logical relationship between the sources of information; the sources must be connected.
  • Mayer and Moreno (1998) found that modes which over-stimulate the visual or auditory pathway inhibit the learning process.
  • Using graphics with narration in lessons establishes a balance between the visual and verbal channels, allowing the information to be processed in working memory.
  • For example, showing a screenshot (graphic) while narrating it (auditory) balances the visual and verbal processing channels, so essential information can be processed without creating an extraneous cognitive load.
  • Alternatively, animations (graphic) paired only with on-screen text (visual) can overload the visual channel, making it much more difficult for the learner to process the information.

As teachers use more technology in the classroom, it is important to keep the modality principle active in practice. Children can easily become overwhelmed by stimulation from media; by controlling the modes through which information is communicated, teachers increase the opportunities for successful learning.


Redundancy Principle

The redundancy effect occurs when the same information is presented in multiple forms simultaneously, such as a picture accompanied by words that describe it, or a summary added to an already complete body of information. The extra material can confuse learners, and excluding it, thereby eliminating the redundancy, is often better for learning.

Much like the split-attention effect, coordinating multiple sources (e.g., visual and auditory) imposes a heavy cognitive load and may prevent learning success. To avoid redundancy, any repeated information should be removed.

Often, educators feel that presenting information in several forms is more advantageous than, or at least neutral compared with, presenting it in only one. This assumption has been contradicted by current research on the cognitive load students can handle during the learning process.


Signalling Principle

The signalling principle describes how a signal or cue can direct a learner to fixate on information deemed important or crucial to a topic. Fixation here means focusing on one particular component of the material being learned, which lessens the learner's cognitive load during a lesson.

When a lesson is delivered or supplemented with multimedia, a teacher can use the technological properties of the medium to cue fixation. For example, a student could be presented with a diagram of an internal combustion engine. If the first part of the lesson is learning what a piston is, an animation or slideshow could grey out or fog the main engine block so that only the details of the piston remain visible.

Research has been conducted to ascertain which method of multimedia signalling is the most beneficial to learning. Examples include paragraphs with colored key words, paragraphs with narration, pictures with colored portions, and pictures with on-screen text bubbles.

The researchers found no single ‘perfect’ method of multimedia delivery and suggested that each delivery method likely has its own best fit, depending on the content.

In relation to the theme of the chapter, I found it quite interesting that this principle and its research offered some answers to questions about multimedia learning, such as:

  • What are the consequences of adding pictures to words? 
  • What happens when instructional messages involve both verbal and visual modes of learning? 
  • What affects the way that people learn from words and pictures? 

However, I believe the most important part of the signalling principle research was the use of eye-tracking. Where possible, researchers tracked the physical position and fixation of participants’ eyes to ascertain which cues they responded to. Mayer favors learner-centered approaches to multimedia learning; perhaps, in subsequent research, the development and improvement of eye-tracking software could be used to answer his question, “How can we adapt multimedia to enhance human learning?”


Click here to watch explanatory video
