EDTECH 503 – Module 7


One-to-One Evaluation

One-to-One Evaluation is conducted by the designer to validate the instructional material on a small scale.  Rather than testing the material on a larger sample of the target audience, selecting three or fewer members allows for a more detailed review of the content and instruction.  In the one-to-one evaluation conducted for my project, two learners will be selected to participate in the evaluation and validation.  Each learner will be addressed individually and given the instruction and related support material for input.  The goal is to clear up any administrative errors that I, and general computer review tools (such as spell-check), failed to catch, as well as to identify any areas of confusion or concern.

Questions to ask One-to-One Evaluators:

  • Are the objectives and purpose clear, and do they set meaningful expectations for your post-class work environment?
  • Does the general structure of the content “make sense”?  Does it flow logically?
  • Are there any terms that need to be defined or further elaborated on?
  • Are there areas where the instructional effort is too “little” or too “much”? Will users need or want more or less?
  • Are graphics easily understandable? If not, any suggestions for improvement?
  • Are textual elements easy to read and understand?  Any suggested wording changes?
  • Do activity instructions make sense? Will learners be able to understand and follow them with little assistance from the instructor?

Small Group Evaluation

Once the One-to-One Evaluation is complete, a small group evaluation will be conducted with a larger sample of the learner population.  In my case, I will conduct the evaluation with four learners.  Although not required for an effective small group evaluation, I will assemble the evaluators in one setting so that the course’s collaboration opportunities can also be evaluated.  The small group evaluation is intended to put the instruction to the test against a real-world sample case.  I will not interfere with the evaluation unless a question or concern arises.

Questions to ask:

  • Were you prepared with enough prior knowledge for the course?  If not, what is the depth and breadth of content that should be incorporated as pre-work or worked into the instruction?
  • If any, what areas of instruction were a “struggle” to complete successfully?  Any suggestions for resolution?
  • Were the motivational factors effective in piquing your interest and keeping you on task? If not, what could help?
  • Were the instructional activities adequate?  Any suggestions?
    • Too frequent? Not frequent enough?
    • Activities too detailed? Not detailed enough?
  • Upon completion, do you feel you met each objective? If no, which objectives and why?
  • Overall, how do you feel about the instruction?  If “bad” or “poor,” why?  What could improve your satisfaction with the course?

Field Trial

After revisions are made based upon the small-group findings, a field trial should be conducted.  The field trial uses a larger selection of the target audience and applies the instruction in a realistic setting.  In my work group, we typically call this a “pilot group” or “beta test.”  The field trial helps determine whether the instruction can be applied effectively across a larger sample as intended.  A successful field trial provides confidence that the instruction will be generally well-received and successful for the learner population.

Questions to ask the Instructor and Learners:

  • Does the instruction flow logically and seamlessly throughout the process?
  • Do any instruction or activities feel “out of place” or redundant?
  • Does the instruction adequately fit within the allotted time limit without jeopardizing learning?
  • Are there any administrative or technical issues that arise? How could they be mitigated?
  • Is there any issue with the instructor’s presentation of the material? Suggestions on modifications?
  • Upon completion, do you feel you (each learner) met each objective? If no, which objectives and why?
  • How do you feel about the training?

Additional Questions for Instructor:

  • Was the instruction implemented as designed? What deviations were required?
  • Do you feel the learners had the proper amount of prior knowledge? If not, what did they lack?

Expert Review

The expert review will be conducted by a corporate Learning Management System subject-matter expert who has extensive experience with the LMS and eCompass and with supporting the local business unit needs of each system.  I expect to submit my design documents for review on or around 5/1/2013 and receive feedback no later than 5/6/2013.

Questions I will ask:

  • Are the objectives and purpose clear, and do they convey meaningful expectations of the learners after the course is complete?
  • Are there any flaws or misconstrued interpretations concerning the content?
  • Do the activities accurately reflect “real-world” circumstances?
  • Does the instruction flow logically?
  • Has the instruction focused on the key elements?  If not, what changes should be made?
  • Is the timeframe realistic for thorough completion of tasks?
  • Will the target audience have the required prerequisite skills? If not, what pre-class work should be required?

School Evaluation Summary


I enjoyed completing this artifact, even though it was quite intensive. It gave me the opportunity to critically evaluate my organization’s technological capabilities and approach to learning. I was able to identify areas where we currently excel and others where we struggle. See the Maturity Model Benchmark Google spreadsheet and the written summary embedded below.

Completing this assignment meets AECT standards 4.2, 5.1, 5.2, 5.3, and 5.4. In this assignment, we had to critically evaluate an organization’s technological maturity using a defined set of criteria. I had to identify issues and solutions based upon best practices. This information can be used to form a technology use plan. This task can be performed again at a later date to adapt the technology use plan as needed.

Click here to view the Maturity Model Benchmark Google Spreadsheet