Effective Web Instruction (Chapters 5 and 6)

The final two chapters of Effective Web Instruction deal with actually creating and testing the computer prototype for online instructional products, and also touch on finalizing and maintaining the product. First, the authors provide some practical advice about what to consider when creating the prototype. For instance:

  • Consider the technology users have available (word processors, Internet connections, etc.).
  • Weigh the various Web scripting languages that can be used to create the prototype, along with the advantages and disadvantages of each.
  • Decide how the instructor will communicate with students, and students with each other; consider email, instant messaging, and discussion forums. Keep in mind, however, that chats with more than eight participants are not effective.
  • Choose the technology you will use to deliver the course. There are commercial options available (such as Blackboard), or you could build your own from a combination of existing Web tools.
  • Consider how you would create games and interactions, and what type of technology they might require.

The authors then provide practical advice for actually “getting down to work.” They suggest using Web page templates for rapid prototyping: once you have selected or created a template, the work goes much more quickly, and future changes become easier. The authors stress that the prototype and templates should not be “final” at this point, just as the paper prototype was not final. The computer prototype should, however, incorporate all of the changes identified during paper testing.

In fact, the computer prototype is the chance to test the revisions made in the paper prototype. Gather 4-5 new test subjects and have them go through the computer prototype, following the same testing procedure that was used for the paper testing. It is recommended that you do several rounds of testing. Finally, when all user testing is complete, bug testing and fixing can begin, and the project is on its way to being finished.

My Thoughts

It seemed that Chapter 6 was not completely finished (this is a draft version, after all), but these chapters were still useful. I appreciated that the authors walked through all of the technical considerations for creating the computer prototype; I was actually surprised by how much detail they went into. Although some of the examples are now outdated (AOL, Netscape), the basic principles still hold. I think the most useful aspect of these two chapters is the overall framework they give for creating the computer prototype. Computer technologies will change, but the considerations and general steps given here remain valid.


Making a Paper Prototype

Snyder’s Making a Paper Prototype is an interesting look into the “nitty-gritty” of making a paper-based prototype for software applications. This chapter of his book covers the supplies you need, creating computer screen backgrounds, creating interface widgets, remembering user input, simulating interaction, simulating help functionality, and even tips for trainers and technical writers on how the process can help them. Overall, Snyder’s approach agrees with other readings about creating paper prototypes, but he goes into greater detail and shares his experience of which techniques have worked and which have not.

Some highlights that particularly caught my attention include:

  • Don’t use rulers when creating paper prototypes. While it’s not necessary to make a prototype messy, you also don’t want to waste time worrying about perfectly straight lines!
  • Don’t use fine-tip pens in the prototype, as they are difficult for users to read.
  • Use a big poster board for a computer screen background, which helps users feel more familiar with the environment.
  • Creating a paper prototype is actually quite easy. During testing with users, have someone very familiar with the software or product act as the “computer.”
  • It’s ok to use screen shots in paper prototypes.
  • Some interactions are difficult to simulate, including roll-over menus, right-clicks, and system noises. The best solution is to let users know these options exist and have them tell you when they select one. For sounds, make them yourself!
  • Use removable tape to allow users to write their input on the paper interface.
  • Make use of “incredible intelligent help” to see what users have questions about. This simply involves having a knowledgeable member of the design team available to act as a human “help” system if the user runs into problems.

My Thoughts

I found this article very enjoyable; I appreciated how detailed it was about exactly how to do a paper prototype. The idea of a paper prototype of something on the computer strikes people as odd at first, and raises many questions about what exactly a paper prototype entails. This article was a great insight into one expert’s experience with paper prototyping.

That said, I found myself thinking that this sounded like a lot more work than simply using a modern wireframing application to design prototypes on the fly, on the computer instead. It seems quite tedious to prepare all of the different physical elements, arrange them, review them, and then know when to use each one based on the user’s actions. The human acting as the “computer” essentially has to have the program’s business logic memorized. While this is certainly possible, it seems like less work simply to do it on the computer.

Nevertheless, I do see the advantages of paper prototypes. Hopefully computer-based prototyping applications will continue to improve and will eventually include all of the advantages of paper, with half the work.


What Makes e3 (effective, efficient, engaging) Instruction?

David Merrill’s What Makes e3 (effective, efficient, engaging) Instruction? is an interesting look at how the first principles of instruction can bring together what are sometimes seen as separate instructional methods: problem-centered instruction, peer-interactive instruction, and online instruction. Far from treating these as mutually exclusive, Merrill explains how the first principles can be used to create a seamless course that incorporates all of these elements.

Problem-Centered

Here Merrill explains why e3 instruction is necessarily problem-centered, as required by his first principles of instruction. Problem-centered instruction should not be confused with problem-based learning. Problem-centered learning begins with the demonstration of a problem, followed by instruction on each of the steps needed to solve it, after which students try a new problem on their own. Problem-based instruction, on the other hand, is a less structured method in which learners attempt to figure out a problem using available resources; Merrill argues that research has shown this method to be ineffective (p. 3).

Effective Peer-Interaction

Merrill strongly endorses peer interaction, explaining that effective peer interaction incorporates the remaining four first principles (the fifth being, of course, the problem-centered basis):

  • Activation through peer sharing
  • Demonstration through peer demonstration
  • Application through collaboration with peers
  • Integration through peer critiques

Online Instruction (Technology-Enhanced Interface)

Finally, Merrill explains how courses based on the first principles of instruction can be successfully enhanced via distance learning. Merrill believes that all courses can and should be available online as well as in the classroom, with no distinction necessary in course materials or interaction. As an example, Merrill suggests that effective online courses do the following:

  • Provide a link to the problem being discussed
  • Provide a host of resources students can use to learn the component parts of the problem
  • Present students with a completed problem (demonstration)
  • Present learners with a new problem to solve
  • Have individual students prepare their own solutions to the problem
  • Form groups via discussion boards in which the individual solutions are discussed, with each group selecting one solution
  • Have groups critique each other’s solutions
  • Have groups revise their solutions based on feedback and discussion from the other groups

My Thoughts

I always enjoy reading Merrill’s articles, and this one was no exception. I really appreciate his embrace of distance technology, and I completely share his vision of courses that are available both online and “traditionally,” with no real difference between the two other than the method of delivery. The paper was somewhat short and left me wanting more examples of how this has been implemented elsewhere. It also left me wondering whether an online course or module must include peer interaction in order to incorporate all of the first principles. Based on other readings in this class, I don’t think it must; nevertheless, it left me wondering how to fit everything together.


Mager’s Tips on Instructional Objectives

In this summarized version of Mager’s Preparing Instructional Objectives (1984), we learn about the importance of clearly written learning objectives, and how to create those objectives.

What do we mean by objectives? According to the summary:

A learning objective is a description of a performance you want learners to be able to exhibit in order to consider them competent. An objective describes an intended result of instruction, rather than the process of instruction itself. (p. 1)

In short, objectives are goals; they are the raison d’être of the instructional product. They are necessary not only for defining the instruction, but also for evaluating its effectiveness. Furthermore, students benefit from clearly presented objectives, which give them the tools they need to understand and achieve what they are required to learn.

Good objectives have four qualities: audience, behavior, condition, and degree.

Audience

This is self-explanatory: For whom is the instruction intended?

Behavior

The key take-away here is that an objective must contain an observable behavior. You should be able to see or hear this behavior; it is not something you can merely assume. This does not exclude mental processes; it simply means that the instructor must ask the right questions to elicit an observable behavior, and thereby assess whether the objective has been met. The bottom line is that objectives always describe “what the learner will be DOING” (p. 4).

Condition 

What are the conditions under which the behavior specified in the objective should be observed? For example, you might say, “Given a set of five choices, the learner should…” The “given” clause is the condition.

Degree

The degree specifies the extent to which the learner is expected to master the material in order to meet the objective. For example, does the student have to answer 9 out of 10 questions correctly? Or only 5?

Finally, the summary presents common mistakes to avoid when creating instructional objectives: describing “false” performances, in which no observable behavior is defined; describing how instruction will be given rather than what its objective is; and writing objectives for the instructor rather than for the students.

My Reflection

Overall I found this reading helpful. After reading it the first time, I immediately created the objectives for my individual project based on what I thought I understood; now that I’m reading through it again for my blog post, I think I missed some key ingredients in those objectives. I’m thinking here of the degree aspect, which is one area of the summary that confused me a bit. Is Mager suggesting that each objective must contain a specific degree (such as a quiz percentage)? I can certainly see the value in this; my only thought is that it strikes me as particularly detailed for what should be a relatively early stage in the design process.


Effective Web Instruction: Chapters 3 & 4

Summary

Chapters 3 and 4 of Effective Web Instruction: Handbook for an Inquiry-Based Process take us through the prototyping process. First, the authors give us a detailed overview of the process of creating a prototype. Then, we learn about testing the prototype with actual subjects.

Creating a Prototype and Preparing for Testing

What exactly is a prototype? According to the authors, “Your prototype is an approximation of what the instruction might eventually be” (p. 19). Basically, the prototype is a rough sketch of your final product, but it contains enough essential material to test the effectiveness of the planned product. Interestingly, though the instructional product is intended for the Web, the authors strongly recommend that the initial prototype be done on paper. The goal of this initial prototype is to see how users understand and interact with your content. Do they understand it? This core goal can easily be lost if a computer prototype is used at this early stage. Additionally, the authors assert that paper prototypes have more of a “draft” feel, which encourages candid responses from test subjects, since they know little effort is wasted if major revisions are recommended. In addition to creating the paper prototype itself, you should take steps at this point to plan the assessments and the methods of gauging learner satisfaction that you will use in the testing phase (p. 35).

Testing a Prototype

In this chapter, the authors take the reader step by step through the prototype testing process, with helpful illustrations to bring it to life. Everything is included, from creating a test plan, to selecting subjects, to detailed instructions on actually carrying out the test (including greeting the subjects!), as well as an overview of the data analysis phase. As usual, the authors’ advice is practical and helpful, down to tips such as keeping a normal distance from the subjects you are observing so as not to make them anxious. The chapter is also full of advice on how to elicit subjects’ real responses: don’t just wait for the subject to ask a question; be prepared to ask questions yourself to understand his or her thought process. This information is invaluable for a well-designed instructional product.

My Analysis

As with the first two chapters of this book, I thoroughly enjoyed Chapters 3 and 4. As someone who has never created and tested a prototype before (at least not this formally), I found that these chapters went into enough detail that I feel as though I’ve had some experience doing it! Although it may sound silly, I really appreciated the illustrations of the actual testing process. If you’ve never done it before, it’s difficult to picture what it entails, even with written descriptions. For instance, even after the authors described the process, I had a mental image of the observer standing up and walking around the room. In the illustration, however, the observer sits next to the subject. Seeing this setup made the reading make even more sense to me.

Despite the excellent writing and explanations, I did have a few questions while reading. Regarding the paper prototype, I wondered whether the authors would object to some of the newer computer prototyping applications that require truly minimal work. As I thought more about it, though, I decided that I can still see the value in doing a paper prototype first: paper lets you focus on the core (the content) and worry about the “look and feel” later.

One other question I had was about the test subjects, and I’m wondering if someone might know the answer: the authors give fairly detailed advice on selecting subjects, but it was unclear to me who these subjects should be. Family members? Coworkers? The general public? I was hoping the authors would go into a bit more detail here and address some additional practical questions: for instance, are the subjects paid? What is considered a standard payment?


Nine Ways to Reduce Cognitive Load in Multimedia Learning

ARTICLE SUMMARY

Richard Mayer and Roxana Moreno present an interesting set of nine specific techniques for improving multimedia learning by reducing learners’ cognitive load. These techniques are the result of 12 years of research and are grounded in theories of cognitive processing (p. 50). The authors note that their research, and the theories and prescriptions that follow from it, assume three important ideas about how the mind operates:

  • We have two separate processing channels: one for visual information and one for verbal information.
  • We have a “limited capacity” for processing each type of information.
  • We must actively combine the verbal and visual channels to process information, which can consume substantial cognitive resources.

The authors also identify three types of “cognitive demands” (p. 45):

  • Essential processing – as the name implies, this processing is essential for understanding the information being presented.
  • Incidental processing – this type of processing occurs in relation to non-essential information that we may incidentally take in, such as background music (p. 45).
  • Representational holding – this type of processing occurs when we are required to “hold” information in our minds until we can link it with subsequent information.

With this basic framework established, the authors go on to explain five common “types” of cognitive overload that occur within multimedia learning presentations. The authors give one or two solutions for each of the five types of cognitive overload.

Type 1 Overload – This occurs when one of the two processing “channels” (verbal or visual) is “overloaded with essential processing demands” (p. 45). The authors recommend that some of the processing demand be “offloaded” to the other channel. For example, if the visual channel is overloaded, find a way to move some of the visual information to the verbal channel.

Type 2 Overload – This occurs when both channels “are overloaded with essential processing demands” (p. 47). Two solutions are recommended for this type of overload: Break the presentation into learner-controlled manageable segments, or, extract some of the presentation information and move it into its own “pre-training” session, ensuring that learners are already familiar with some of the information when the presentation is given.

Type 3 Overload – This occurs when incidental information is mixed with essential information, creating too much demand on one or both channels. One solution is to remove information that is interesting but not essential. Another solution, called “signaling,” is to use cues that direct the learner’s focus to the appropriate areas, ensuring that the learner does not “waste” cognitive capacity on non-essential information (p. 48).

Type 4 Overload – This occurs when the brain must engage in incidental and essential processing, not because incidental information is included, but rather because essential information is presented poorly, thus forcing the learner to “waste” processing capacity to make sense of it. The authors recommend two possible solutions for this problem. First, they recommend that words and images appear together in an understandable way, thus eliminating the need for the learner to connect the two. Another solution is to streamline information by eliminating redundancy; for example, the authors advise that if a presentation is narrated, a transcript should not also appear on the screen.

Type 5 Overload – This type of overload is the result of essential processing combined with what the authors call “representational holding” (p. 49). Representational holding occurs when a learner must “hold” a piece of information in mind until it can be paired with a subsequent piece of information. This often happens, for example, when an image is presented on one screen and its explanation appears on the following screen. The best solution is to synchronize paired information, eliminating the need to “hold” information across screens. When this is not possible, the authors note that high-spatial learners generally perform well even without synchronization, but low-spatial learners do not, so it is important to understand one’s audience.

In summary, the authors conclude that “multimedia instruction should be designed in ways that minimize any unnecessary cognitive load” (p. 50).

MY ANALYSIS

As someone who creates self-paced e-learning tutorials from time to time at work, I found this article very interesting and definitely helpful. I was tempted at first to say that many (or most) of the findings presented here are “common sense.” However, I don’t think that would be entirely accurate. First, many people don’t consider the cognitive aspect of how people take in information; one only has to attend virtually any training PowerPoint presentation at any company to see that this is true! Second, even if some of these findings are potentially “common sense,” that common sense is now backed by scientific research. This research, in my opinion, is what makes the findings so valuable. I don’t doubt that instructional designers know many of these things intuitively, but research gives us the confidence to move forward with proven techniques, greatly reducing our need for trial and error.


Book Review – Effective Web Instruction: Handbook for an Inquiry-Based Process (Chapters 1 and 2)

by Ted Frick and Elizabeth Boling, 2002

Summary of Chapters 1 and 2

Frick and Boling’s Effective Web Instruction: Handbook for an Inquiry-Based Process presents a simple yet powerful “process of design and development for web-based instruction” (p. 2).

Although the book is about designing instruction that is firmly web-based, Frick and Boling inform the reader up front that the focus is very much on creating a product that is effective and useful for learning; for the details of authoring HTML and the like, the authors recommend outside resources—or hiring some help!

Frick and Boling begin by covering the basics in a conversational (and oftentimes humorous) tone, as they guide the reader in laying the groundwork for developing a web-based instructional product. They provide a helpful diagram of the entire process up front, and even lay out all the different areas of skill required to create the product—complete with a description of which types of people would be best suited for each role! The first two chapters then go on to address:

  • Developing the instructional goals (p. 7)
    • The reader is encouraged to work with the project stakeholders to develop the instructional goals. Most importantly, the authors emphasize that learners are certainly stakeholders—perhaps the most important!
  • Forming the instructional goals and identifying their indicators (p. 10)
    • The authors give the refreshingly practical advice of thinking about the end assessment to help form the instructional goals. Thinking about the assessment obviously helps when determining the indicators as well.
  • Conducting the learner analysis (p. 14)
    • This section gives the reader practical ways to discover what the learner needs to know, what his or her questions will be, where more help is needed, and so on. One fantastic recommendation is to first teach the learners without the web, to see where the difficulties lie and what questions the learners have.
  • Conducting the context analysis (p. 16)
    • Here, the reader is challenged to think about why he or she is building a web course. The authors stress that learning is more important than the medium; we should have clear, valid reasons for using the web as an instructional tool.

My Review

Although I am only two chapters into this book, I must say—this is one of my favorite readings so far. I appreciate everything about it: the conversational writing style, the fact that nothing is taken for granted and all the basics are explained, the helpful diagrams, and the extra advice on the side, which sounds like it’s coming from an experienced pro who’s taken you under his wing and is giving you the inside scoop!

I particularly liked the section in which the authors challenge the reader to consider why he or she is developing an instructional product on the web in the first place. And, if the reader has no good reasons, the authors recommend using another medium. Whoa! That’s not something you would expect to find in a book dedicated to explaining how to create web instruction! But it’s just the kind of advice that is needed to encourage critical thinking and self-examination that ultimately leads to a better web product. The authors certainly are not questioning the legitimacy of web instruction—not at all. Instead, they are turning the focus of the reader (the instructional designer) exactly where it needs to be—on designing effective instruction. The web is a helpful means, but it is not the end itself.

Although I am just beginning this book, I have a sense that the focus throughout will continue to be on what really matters: designing effective instruction using the web as a means, with the spotlight squarely on effective learning design, not web coding. With the plethora of HTML and web design books on the market today, the world is not missing another one. But a book written by educational experts explaining effective web instruction from a solid foundation in learning theory? Bring it on!


Merrill’s 5 Star Instructional Design Rating

M. David Merrill’s 5 Star Instructional Design Rating presents an easy (and fun) way to gauge the extent to which various instructional products meet (or fail to meet) Merrill’s “first principles” of design. As a fan of Merrill’s first principles, I found this paper extremely helpful for understanding how Merrill himself believes the first principles should be concretely implemented in instructional products.

The rating system itself is simple to use, yet provides a detailed set of criteria by which to critique a product’s incorporation of the first principles. I did find myself wishing that a few of the criteria were more clearly explained, but I’m sure I’ll better understand as I continue our excellent readings in this course!

Instructional Product Reviews

I used Merrill’s 5 Star Instructional Design Rating to rate two real instructional products. Merrill provides five main criteria for evaluating instructional design, and each of the five contains three specific sub-criteria. A gold star is given when all three sub-criteria are met; a silver star when two are met; and a bronze star when only one is met.
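For my own reference, the three-tier rule described above boils down to a simple lookup. Here is a minimal sketch in Python; the criterion labels are my own shorthand for Merrill’s five main criteria, not his exact wording:

```python
def star_for(sub_criteria_met):
    """Map the number of sub-criteria met (0-3) to a star rating.

    Per the rule above: 3 met -> gold, 2 -> silver, 1 -> bronze,
    and 0 met earns no star at all.
    """
    return {3: "gold", 2: "silver", 1: "bronze"}.get(sub_criteria_met)

# Example: my sub-criteria counts for Review 1 below (labels are mine)
review_1 = {
    "problem-centered": 3,
    "activation": 2,
    "demonstration": 3,
    "application": 2,
    "integration": 1,
}
stars = {criterion: star_for(n) for criterion, n in review_1.items()}
```

With those counts, `stars` maps the first and third criteria to gold, the second and fourth to silver, and the last to bronze, matching the ratings I give in the review that follows.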

Review 1: Understanding Creditor Statements

Is the courseware presented in the context of real-world problems? 

I gave this a gold star for achieving all three criteria. The entire course is presented in the context of helping customers (of a credit relief company) understand their credit card statements. The focus is kept on the overall problem, while taking the learner through the various sub-tasks that must be learned. Real examples and scenarios are used throughout.

Does the course attempt to activate relevant prior knowledge or experience?

This section receives a silver star. While it could be stronger in this area, the course does make an effort to have learners think about their own finances and credit card statements as a springboard into some of the learning modules.

Does the courseware demonstrate (show examples) of what is to be learned?

I gave this a gold star for achieving all three criteria. The course provides numerous demonstrations of actual credit card statements and smartly uses technology to allow the learner to interact with the statements based on the learning goals. Moreover, different types of demonstrations / samples are used.

Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?

This course receives a silver star here. Learners are given multiple opportunities to practice via quizzes, samples, and even a game; however, I could not find any type of context-sensitive help or other “coaching” mechanism.

Does the courseware provide techniques that encourage learners to integrate the new knowledge into their everyday life?

I gave this section a bronze star, as I believe it only meets one out of the three total criteria in this category. At least one section of the course asks users to reflect on their response before checking the answer. I did not, however, find any opportunities for “publicly” demonstrating the knowledge or finding creative ways to use the knowledge (Merrill p. 2).

Review 2: Tulane’s Entrepreneur CBT

Is the courseware presented in the context of real-world problems?

I gave this a silver star for achieving two of the three criteria. I believe the introductory story of the pig farm shows learners what they will be able to achieve (their own business) in the context of a real situation (a pig farm in Cambodia). The course also seemed to satisfy the criterion of involving a “progression of problems rather than a single problem,” since it deals with many aspects (and mini-problems) of starting a business (Merrill).

Does the course attempt to activate relevant prior knowledge or experience?

This section receives a silver star. In the last section of the course, learners are asked to reflect on their past work experiences and how those experiences may help them start their new business; a textbox is provided for their responses. I believe this fulfills two of Merrill’s three criteria for this category.

Does the courseware demonstrate (show examples) of what is to be learned?

I gave this a silver star for achieving two of the three criteria. Merrill’s criteria require that demonstrations be consistent with the content being taught; this course uses real-world success stories to accompany the general information being presented. The second criterion this course fulfills is related to the first: the media is “relevant to the content and used to enhance learning” (Merrill).

Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?

This course receives a bronze star here. While this course does require “learners to recall or recognize information,” it does not engage learners in any sequence of problems, nor does it provide any coaching or context-sensitive help (Merrill).

Does the courseware provide techniques that encourage learners to integrate the new knowledge into their everyday life?

I gave this section a bronze star, as I believe it only meets one out of the three total criteria in this category. While I could not find any example of the course providing learners an opportunity to publicly demonstrate their knowledge, nor an opportunity to reflect on the new knowledge, the course does encourage learners to use the knowledge in personal ways. I’m thinking primarily of the “Your Own Business” section, in which learners are asked to think of (and type out) different requirements they need to consider for starting their business.


Summary and Review: Changes in Motivation During Online Learning

Changes in Motivation During Online Learning (Kim and Frick, 2011) presents the background and findings of a study, conducted with 800 adult e-learners, that investigated “learner motivation in self-directed e-learning courses (SDEL)” (pp. 7-8).

The authors begin by pointing out that there has been an “e-learning boom” in recent years, but that it has been accompanied by an “understandable concern about the quality of e-learning” (p. 2). Kim and Frick go on to briefly survey some of the literature on the subject, which attempts to explain how student motivation is affected by such factors as “cognitive overload,” relevance, familiarity with the technology, and the external environment, among others (p. 3).

The remainder of the article is devoted to presenting their study of learner motivation, which provides an interesting look at the specific reasons behind why learners succeed or fail in self-directed online learning environments. The study itself was very focused and sampled 800 adult learners who had taken stand-alone, self-paced e-learning courses delivered online. The major findings as reported by Kim and Frick can be summarized as follows:

  • The vast majority of adult learners (94%) chose SDEL courses because of the relative availability of online courses versus traditional courses, and because of the time flexibility offered by the online format. (p. 10)
  • Most learners reported that their motivation at the beginning of the course was high, and this motivation did not significantly change throughout the course. (p. 11)
  • Learners are likely to be motivated to begin a course if they perceive the course as being relevant to their needs or wants. (p. 13)
  • Learners’ age and technological expertise also positively impacted learning motivation in SDEL courses. (p. 13)

The authors conclude their study by recommending eight specific design principles for maintaining or improving learner motivation.

My Analysis and Review

I found this study fascinating and personally relevant, as I sometimes have to create short stand-alone e-learning products at work. The eight specific design principles provided as a result of the study findings will no doubt prove very helpful in future projects.

The findings the authors report are consistent with my own experience as a learner; I definitely agree with their statement that learner motivation is highly dependent on perceived relevance. Perceived relevance was my primary reason for choosing the IST program at IU, rather than instructional design programs from other schools. Because I believe IU’s courses are relevant to my professional goals, I am definitely highly motivated to achieve and continue through the program.

In some ways, I found myself disappointed that perceived relevance was one of the primary factors in learner motivation, because it seemed to me (at first) that it’s a factor outside of our control as designers. I would love to be able to create a course (or stand-alone product) that I could make relevant to anyone! Yet, now that I’ve reflected on this a little further, I think this finding is actually quite helpful—it means that we, as instructional designers, should put more effort into determining exactly what our learners need within a particular field. In this way, we will be sure to make products that are more relevant to learners within an already-identified area.


Article Summary and Review: Prescriptive Principles for Instructional Design

In the article Prescriptive Principles for Instructional Design, David Merrill, Matthew Barclay, and Andrew van Schaak present an overview of the First Principles of Instruction (Merrill, 2002) and subsequently correlate these principles to more recent and more specific instructional principles presented by various experts in the instructional design field.

Article Part I: Review / Overview of Merrill’s First Principles of Instruction

The first half of the article presents a concise overview of Merrill’s first principles of instruction. The first principles have the following five key characteristics:

  • Task-centered. Good instruction should be based on tasks that fit together to solve real-world problems.
  • Activation principle. Learners should be able to call on previous related knowledge to help them understand and organize the new information they are being asked to learn.
  • Demonstration principle. Learners do better when they can first observe a demonstration of the “final product” or skills that they have been asked to learn.
  • Application principle. Good instruction allows learners to apply their new knowledge in a coached setting that provides immediate feedback, with coaching that decreases as learners become more confident.
  • Integration principle. The newly acquired knowledge or skills will better “stick” with the learners if they are asked to integrate the knowledge or skills into their everyday lives; or, if they are asked to discuss, defend, or demonstrate the skills, particularly in a public setting.

The five “first principles” of instruction are prescriptive; that is, they are always true regardless of the type of instructional program, practice, or medium. Learning will always be enhanced when these first principles are used. Conversely, learning will be less effective whenever these principles are not included in instruction. Finally, the first principles are design-oriented; that is, they are prescriptions for instructional designers to use when creating instruction.

The authors readily point out that the first principles are widely known and accepted, and have been around in some form for at least the past 200 years. Nevertheless, the authors point out that surprisingly few instructional products incorporate all—or even some—of these principles. The authors give examples of real-world studies and applications that prove the effectiveness of the first principles when incorporated into various instructional products.

Article Part II: Overview of Recent Instructional Principles and How the First Principles Correlate

The second half of the article is devoted to briefly touching on more recent instructional strategies or principles that have been put forth by various experts in the instructional design field. In most of the examples reviewed by the authors, these more recent instructional principles are specific strategies for particular methods of instruction. The authors, through use of comparison tables, attempt to show how the first principles correlate to each aspect of all of these discussed instructional strategies. Some of the discussed principles include:

  • Principles for Multimedia Learning
  • Principles for e-Learning
  • Cognitive Training Model
  • 4C/ID Instructional Design

The authors readily admit that some of these principles go into much more detail than the first principles; they point out, in particular, that some of the discussed principles include specific principles regarding the implementation of instruction. The authors candidly note that the first principles do not address implementation, but then suggest that perhaps they should.

The authors go on to note that, of all the “other” instructional principles discussed, the 4C/ID model incorporates the first principles most completely and correlates with them most closely. The authors contend that this is because the 4C/ID model is whole-task-centered and focuses on progressing from one task to an increasingly complex task until the whole task is mastered, just as prescribed by the first principles.

Finally, the article briefly discusses the “Pebble in the Pond” approach to instructional design/development, as it allows instructional designers to easily incorporate the first principles into their designs.

Article Part III: Authors’ Conclusion

The article concludes that “considerable agreement exists” among the various instructional principles, and that most of them (that is, the various other principles) agree with the first principles of instruction. The authors go on to note that the research indicates that the first principles result in improved learning, but that too many instructional products ignore these principles. The authors conclude by calling for more research into the first principles to test their effectiveness in new settings; but in the meantime, they state, these principles should be widely implemented in instructional products.

My Analysis and Conclusion

Overall, I enjoyed the article and found it to be immensely helpful in understanding the basic foundations of good instructional design, as well as a good introduction to the various trends in instructional design principles and research. I found that the first principles are broad, but purposefully so, and I do indeed agree with the conclusion that incorporating the first principles into any instructional product would enhance its effectiveness.

I agree with the authors that many (or most) of the other instructional principles they reviewed in the article agree fundamentally with the first principles; however, the other principles they reviewed were much more detailed and focused on specific instructional genres (e.g., Principles for e-learning). The authors attempt to make a correlation between the very specific principles in these strategies and the very broad first principles. While I generally found this helpful and agreed with most of the correlations, there were several that I found somewhat problematic. For example, in Table 14.2 (p. 179, Allen’s e-Learning Principles Aligned with Merrill’s First Principles), the authors attribute the first principle of Demonstration to the following of Allen’s principles:

  • “Use an appealing context; novelty, suspense, fascinating graphics, humor, sound, music, and animation all draw learners in when done well.”

In my opinion, Allen’s prescription is not so much a principle as a general technique, and it is difficult to correlate it to one of the first principles. It seems that this technique could be effectively used within any of the first principles to enhance an e-learning product, but it does not necessarily correspond to a specific first principle.

My Conclusion

The first principles are absolutely indispensable foundations for effective, efficient, and engaging instruction, but they will not necessarily correlate exactly to specific principles contained in specific instructional strategies. Other specific instructional strategies should use the first principles as the foundation, and then build more specific protocols and technique suggestions on top of the firm foundation of the first principles.