In medicine, it's easy to understand the difference between treating the symptoms and curing the condition. A broken wrist, for example, really hurts! But painkillers will only take away the symptoms; you'll need a different treatment to help your bones heal properly.

But what do you do when you have a problem at work? Do you jump straight in and treat the symptoms, or do you stop to consider whether there's actually a deeper problem that needs your attention? If you only fix the symptoms – what you see on the surface – the problem will almost certainly return and need fixing over and over again.


However, if you look deeper to figure out what's causing the problem, you can fix the underlying systems and processes so that it goes away for good.

Root Cause Analysis (RCA) is a widely used technique that helps you answer the question of why a problem occurred in the first place. It seeks to identify the origin of a problem using a specific set of steps, with associated tools, to find the primary cause, so that you can:

  1. Determine what happened.
  2. Determine why it happened.
  3. Figure out what to do to reduce the likelihood that it will happen again.

RCA assumes that systems and events are interrelated. An action in one area triggers an action in another, and another, and so on. By tracing back these actions, you can discover where the problem started and how it grew into the symptom you're now facing.

You'll usually find three basic types of causes:

  1. Physical causes – Tangible, material items failed in some way (for example, a car's brakes stopped working).
  2. Human causes – People did something wrong, or did not do something that was needed. Human causes typically lead to physical causes (for example, no one filled the brake fluid, which led to the brakes failing).
  3. Organizational causes – A system, process, or policy that people use to make decisions or do their work is faulty (for example, no one person was responsible for vehicle maintenance, and everyone assumed someone else had filled the brake fluid).

RCA looks at all three types of causes. It involves investigating the patterns of negative effects, finding hidden flaws in the system, and discovering specific actions that contributed to the problem. This often means that RCA reveals more than one root cause.

You can apply RCA to almost any situation. Determining how far to go in your investigation requires good judgment and common sense. Theoretically, you could continue to trace the root causes back to the Stone Age, but the effort would serve no useful purpose. Be careful to understand when you've found a significant cause that can, in fact, be changed.

The Root Cause Analysis Process

RCA has five identifiable steps.

Step One: Define the Problem

  • What do you see happening?
  • What are the specific symptoms?

Step Two: Collect Data

  • What proof do you have that the problem exists?
  • How long has the problem existed?
  • What is the impact of the problem?

You need to analyze a situation fully before you can move on to look at the factors that contributed to the problem. To maximize the effectiveness of your RCA, get together everyone – experts and front-line staff – who understands the situation. The people who are most familiar with the problem can help lead you to a better understanding of the issues.

A helpful tool at this stage is CATWOE. With this process, you look at the same situation from different perspectives: the Customers, the people (Actors) who implement the solutions, the Transformation process that's affected, the World view, the process Owner, and Environmental constraints.

Step Three: Identify Possible Causal Factors

  • What sequence of events leads to the problem?
  • What conditions allow the problem to occur?
  • What other problems surround the occurrence of the central problem?

During this stage, identify as many causal factors as possible. Too often, people identify one or two factors and then stop, but that's not sufficient. With RCA, you don't want to simply treat the most obvious causes – you want to dig deeper.

Use these tools to help identify causal factors:

  • Appreciation – Use the facts and ask "So what?" to determine all the possible consequences of a fact.
  • 5 Whys – Ask "Why?" until you get to the root of the problem.
  • Drill Down – Break down a problem into small, detailed parts to better understand the big picture.
  • Cause and Effect Diagrams – Create a chart of all of the possible causal factors, to see where the trouble may have begun.
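
The 5 Whys drill-down above can be sketched as a short script. This is a minimal illustration, not part of any formal RCA tool: the `ask` callback, the depth limit, and the brake-failure chain (taken from the earlier example) are all illustrative.

```python
def five_whys(problem, ask, max_depth=5):
    """Drill from a surface problem toward a root cause by repeatedly
    asking "Why?". `ask` maps a statement to its known cause, or
    returns None when no deeper cause can be identified."""
    chain = [problem]
    for _ in range(max_depth):
        cause = ask(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain  # chain[-1] is the candidate root cause

# The physical -> human -> organizational chain from the article:
causes = {
    "The car's brakes stopped working": "No one filled the brake fluid",
    "No one filled the brake fluid": "No one person was responsible for vehicle maintenance",
}
chain = five_whys("The car's brakes stopped working", causes.get)
for depth, step in enumerate(chain):
    print("  " * depth + ("Why? " if depth else "") + step)
```

Note that the depth limit matters: as the article says, you could trace causes back indefinitely, so stop once you reach a significant cause that can actually be changed.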

Step Four: Identify the Root Cause(s)

  • Why does the causal factor exist?
  • What is the real reason the problem occurred?

Use the same tools you used to identify the causal factors (in Step Three) to look at the roots of each factor. These tools are designed to encourage you to dig deeper at each level of cause and effect.

Step Five: Recommend and Implement Solutions

  • What can you do to prevent the problem from happening again?
  • How will the solution be implemented?
  • Who will be responsible for it?
  • What are the risks of implementing the solution?

Analyze your cause-and-effect process, and identify the changes needed for various systems. It's also important that you plan ahead to predict the effects of your solution. This way, you can spot potential failures before they happen.

One way of doing this is to use Failure Mode and Effects Analysis (FMEA). This tool builds on the idea of risk analysis to identify points where a solution could fail. FMEA is also a great system to implement across your organization; the more systems and processes that use FMEA at the start, the less likely you are to have problems that need RCA in the future.

Impact Analysis is another useful tool here. This helps you explore possible positive and negative consequences of a change on different parts of a system or organization.

Another great strategy to adopt is Kaizen, or continuous improvement. This is the idea that continual small changes create better systems overall. Kaizen also emphasizes that the people closest to a process should identify places for improvement. Again, with Kaizen alive and well in your company, the root causes of problems can be identified and resolved quickly and effectively.

Root Cause Analysis is a useful process for understanding and solving a problem.

Figure out what negative events are occurring. Then, look at the complex systems around those problems, and identify key points of failure. Finally, determine solutions to address those key points, or root causes.

You can use many tools to support your RCA process. Cause and Effect Diagrams and 5 Whys are integral to the process itself, while FMEA and Kaizen help minimize the need for RCA in the future.

As an analytical tool, RCA is an essential way to perform a comprehensive, system-wide review of significant problems as well as the events and factors leading to them.


This information applies only to the Ultra Course View.

Question analysis provides statistics on overall performance, assessment quality, and individual questions. This data helps you recognize questions that might be poor discriminators of student performance. Question analysis is for assessments with questions. You can run a report before all submissions are in if you want to check the quality of your questions and make changes.

Uses for question analysis:

  • Improve questions for future assessments or to adjust credit on current attempts
  • Discuss assessment results with your class
  • Provide a basis for remedial work
  • Improve classroom instruction

Example:

After the question analysis, you notice that the majority of students answer one question incorrectly. Why the low success rate?

  • Is the wording of the question confusing?
  • Are the answer options unclear?
  • Were students given the right content to learn to successfully answer this question?
  • Was the content to learn easily accessible and clear?

Based on what you discover, you can improve the question to truly assess what students know or don't know.

Access an assessment's analysis

You can run and access a previous question analysis report from these course areas:

  • Course Content page > assessment's menu
  • Course Analytics page > Question Analysis tab—if your institution has enabled analytics
  • Gradebook, list or grid view

On the Course Content page, access an assessment's menu and select Question Analysis. You can also select the Analytics icon on the navigation bar.

You can also run a question analysis report from the gradebook in grid or list view. Access an assessment's menu and select Question Analysis.

Question Analysis page

The Question Analysis page is only accessible from the navigation bar > Analytics > Course Analytics page > Question Analysis tab.

You can run a report on an assessment with submissions and no questions, but you'll receive a report with no usable information.

You'll receive a message that the question analysis report is in process and an email when the report is complete. You can leave the page to work in other areas of your course and return later to see if the report is ready.

Status column

Each assessment in your course appears with one of these statuses:

  • Report in progress
  • Completed on {date}
  • Data no longer up to date: Assessment now has more submissions to analyze.
  • Not enough data: No submissions exist. Run Report is disabled.
  • No questions in the assessment: Assessment has no questions or submissions. Run Report is disabled.
  • No status listed: Assessment has questions and submissions, but you've run no report. Run Report is enabled.
  • Error: The report failed to generate. Run Report is enabled so you can try again.

After you run a report, you can view overall summary information and details about each question. Select the assessment on the Question Analysis page to view the summary.

Only submitted attempts are used in calculations. When attempts are in progress, those attempts are ignored until they're submitted and you run the analysis report again. Automatic zeros assigned for late work aren't included in calculations.

  1. Summary of statistics for the individual assessment:
    • Average score: The score shown is the average score reported for the assessment in the gradebook. The average score can change if more attempts are submitted and graded.
    • Possible questions: The total number of questions in the assessment.
    • Completed attempts: The number of submitted assessments.
    • Average time spent: The average completion time for all submitted attempts.
  2. Rerun a report or edit the assessment to make a change to questions.
  3. Use the graphs to filter the table of questions. Make selections in both graphs to refine your search. If you make no selections, all the questions appear in the table at the bottom of the page.
    • Discrimination: Indicates how well questions differentiate between students who know the subject matter and those who don’t.
      • Shows the number of questions that fall into these categories:
        • Good (greater than 0.3)
        • Fair (between 0.1 and 0.3)
        • Poor (less than 0.1)
        • Can't calculate: A question's difficulty is 100% or all students received the same score on a question.
      • Questions with discrimination values in the Good and Fair categories differentiate between students with higher and lower levels of knowledge.
      • Questions in the Poor category are recommended for review.
    • Difficulty: Percentage of students who answered the question correctly
      • Shows the number of questions that fall into these categories:
        • Easy (greater than 80%)
        • Medium (between 30% and 80%)
        • Hard (less than 30%)
      • Questions in the Easy or Hard categories are recommended for review.
  4. Select a heading to sort the questions. For example, sort the Review column so questions that need review appear first.
  5. Clear Filters: Clear the filters you selected in the graphs and display all questions in the table.
  6. Download the question analysis report

The questions table provides analysis statistics for each question in the assessment. After you use the graphs to filter the questions table, you can view and sort the results.

In general, good questions fall in these categories:

  • Medium (30% to 80%) difficulty
  • Good or Fair (greater than 0.1) discrimination values

In general, questions recommended for review fall in these categories. They may be of low quality or scored incorrectly.

  • Easy ( > 80%) or Hard ( < 30%) difficulty
  • Poor ( < 0.1) discrimination values
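
The thresholds above can be expressed as a small helper function. This is a sketch, not part of the product: the category names mirror the ones used on this page, and because the page says "between 0.1 and 0.3" and "between 30% and 80%" without specifying boundary handling, the use of inclusive bounds for Fair and Medium is an assumption.

```python
def classify_question(difficulty_pct, discrimination):
    """Apply this page's review thresholds to one question's statistics.

    difficulty_pct: percentage of students who answered correctly (0-100).
    discrimination: value in [-1.0, 1.0], or None when it can't be
    calculated (difficulty is 100%, or all students scored the same).
    Boundary handling (>= vs >) is an assumption; the page only says
    "between".
    """
    if difficulty_pct > 80:
        difficulty = "Easy"
    elif difficulty_pct < 30:
        difficulty = "Hard"
    else:
        difficulty = "Medium"

    if discrimination is None:
        disc = "Can't calculate"
    elif discrimination > 0.3:
        disc = "Good"
    elif discrimination >= 0.1:
        disc = "Fair"
    else:
        disc = "Poor"

    # Per this page: review is recommended for Easy or Hard difficulty,
    # or Poor discrimination.
    needs_review = difficulty in ("Easy", "Hard") or disc == "Poor"
    return difficulty, disc, needs_review

print(classify_question(55, 0.4))   # Medium difficulty, Good discrimination
print(classify_question(92, 0.05))  # too easy and a poor discriminator
```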

Reminder: If you make no selections, all the questions appear in the table at the bottom of the page.

To investigate a specific question, select the title and review the question details.

Information for each question appears in the table:

  • Needs review: Triggered when discrimination values are less than 0.1. Also, when difficulty values are either greater than 80% (the question was too easy) or less than 30% (the question was too hard). Review the question to determine if it needs revision.
  • Question Modified: Displays Yes if you run a report, then change part of a question, and rerun the report. Yes also appears if you copied the question from another assessment when you created the assessment.

    If Yes appears in the Question Modified column for a question, the Yes doesn't carry over when you archive and restore the course.

  • Discrimination: Indicates how well a question differentiates between students who know the subject matter and those who don't. A question is a good discriminator when students who answer the question correctly also do well on the assessment. Values can range from -1.0 to +1.0. A question is flagged for review if its discrimination value is less than 0.1 or negative. Discrimination values can't be calculated when the question's difficulty score is 100% or when all students receive the same score on a question.

    Discrimination values are calculated with the Pearson correlation coefficient. X represents the scores of each student on a question and Y represents the scores of each student on the assessment.

    These variables are the standard score, sample mean, and sample standard deviation, respectively. In those terms, the coefficient is:

    r = (1 / (n − 1)) × Σ [ (xᵢ − x̄) / sₓ ] × [ (yᵢ − ȳ) / s_y ]

  • Difficulty: The percentage of students who answered the question correctly. The difficulty percentage is listed along with its category: Easy (greater than 80%), Medium (30% to 80%), and Hard (less than 30%). Difficulty values can range from 0% to 100%. A high percentage indicates the question was easy. Questions in the easy or hard categories are flagged for review.

    Difficulty levels that are slightly higher than midway between chance and perfect scores do a better job differentiating students who know the tested material from those who don't. High difficulty values don't assure high levels of discrimination.

  • Graded Attempts: Number of question attempts where grading is complete. Higher numbers of graded attempts produce more reliable calculated statistics.
  • Average Score: The score that appears is the average score reported for the assessment in the gradebook. The average score might change after all attempts are graded.

You can investigate questions flagged for your review and view student performance. From the Question Analysis questions table, select a linked question title to access the question's summary.

  1. After you access a question, use the question title's menu to access any question in the assessment. You can also navigate sequentially to other questions with the controls on either side of the page.
  2. The summary table displays statistics for the question.
  3. Select Edit Assessment to access the assessment and make changes.
  4. The question text and answer choices appear. You can see how many students chose each answer choice or the percentage answered correctly. For example, for a Matching question, you see what percentage of students matched the pairs correctly. Only the question text appears for Essay questions.

About multiple attempts, question overrides, and question edits

The analysis handles some common scenarios in these ways:

  • When students take an assessment multiple times, the last submitted attempt is used as the input for the analysis. As soon as a student submits another attempt, subsequent analyses will include that newest attempt.
  • Gradebook overrides don't impact the analysis data because the analysis generates statistical data for questions based on completed student attempts.
  • When you make changes to a question or manually grade questions, you must run the analysis again to see if the changes affect the data.
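
The first rule above — only each student's most recent submitted attempt feeds the analysis, and in-progress attempts are ignored — can be sketched like this. The field names are illustrative, not the product's data model.

```python
def latest_submitted(attempts):
    """Keep each student's most recent submitted attempt.

    attempts: dicts with 'student', 'submitted_at' (sortable), and an
    'in_progress' flag. Field names are illustrative assumptions.
    """
    latest = {}
    for attempt in attempts:
        if attempt["in_progress"]:
            continue  # in-progress attempts are ignored until submitted
        prev = latest.get(attempt["student"])
        if prev is None or attempt["submitted_at"] > prev["submitted_at"]:
            latest[attempt["student"]] = attempt
    return list(latest.values())

attempts = [
    {"student": "ann", "submitted_at": 1, "in_progress": False},
    {"student": "ann", "submitted_at": 2, "in_progress": False},
    {"student": "bob", "submitted_at": 1, "in_progress": True},
]
result = latest_submitted(attempts)  # ann's second attempt only
```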

Examples

Question analysis can help you improve questions for future assessment administrations. You can also fix misleading or ambiguous questions in a current assessment.

  • In a Multiple Choice question, an equal number of students chose A, B, and C. Examine the answer choices to determine if they're ambiguous, if the question is too difficult, or if the material wasn't covered.
  • A question is recommended for review because it falls into the hard difficulty category. You determine the question is hard, but you keep it to adequately evaluate your course objectives.
  • A Multiple Choice question is flagged for your review. More students in the top 25% chose answer B, but A is marked as the correct answer. You realize you didn't select the correct answer when you created the question. You edit the assessment question, and it's automatically regraded.

You may need to download test results for external analysis and evaluation. External analysis is important for supporting course quality and assessment efforts. Furthermore, institutions often want compiled assessment data for accreditation and program review activities.

You can download assessment results from either the gradebook grid or list views. Open the options menu for an assessment and select Download results.

Image 1. Download Assessment Results option from Gradebook grid view

Image 2. Download Assessment Results option from Gradebook list view

When downloading results, the following options are available:

  • File type: Excel spreadsheet (.xls) or Comma Separated Value (.csv); the default is .xls
  • Result format: by student, or by question and student; the default is by student
  • Attempts: download all attempts, or only the attempts included in the grade calculation. The instructor defines which attempts to include in the ‘Grade attempts’ settings. The default is to download only the attempts used for calculation.

Image 3. Download Results peek panel

The downloaded report includes the following information:

  • Student name
  • Username
  • Questions
  • Answers
  • Grading status
  • Any content the student may have included with their submission
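
Once downloaded, a .csv export can be inspected with standard tools. The sketch below tallies rows by grading status; the `"Grading status"` column name is an assumption based on the field list above, so check the header row of your own export and adjust.

```python
import csv
from collections import Counter

def summarize_results(path, status_column="Grading status"):
    """Count downloaded assessment-result rows by grading status.

    `status_column` is an assumed header based on the field list above;
    verify it against the first row of your actual export.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return Counter(row[status_column] for row in reader)

# Usage (assuming an export saved as results.csv):
# print(summarize_results("results.csv"))
```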

Image 4. Sample for the result format “Download by student”

Image 5. Sample for the result format “Download by student and question”

When using anonymous grading, the downloaded results exclude student and score details until grades are posted for all students.

Watch a video about how to download assessment results

The following narrated video provides a visual and auditory representation of some of the information included on this page. For a detailed description of what is portrayed in the video, open the video on YouTube, navigate to More actions, and select Open transcript.

Video: Download assessment results shows how to download test and assessment results data.