
Add score in practice quiz #12564

Open

thesujai wants to merge 7 commits into base: develop

Conversation

@thesujai (Contributor) commented Aug 15, 2024

Summary

  1. Updated the ContentNodeProgressViewset to return num_question_answered, num_question_answered_correctly, and total_questions
  2. Added a new map for quiz-only progress, kept separate from other content progress
  3. Updated ProgressBar to handle practice quizzes separately
  4. Added a new coreString label for questions left
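
For backend reviewers, here is a rough sketch of the kind of annotation the viewset could use to expose these fields. It assumes Django-style aggregation over related attempt logs; the relation and field names (attemptlogs, item, correct) are illustrative and may not match the actual Kolibri models.

```python
# Rough sketch only: the relation/field names below are assumptions,
# not necessarily the real Kolibri model names.
from django.db.models import Count, Q


def annotate_quiz_progress(mastery_logs):
    """Annotate mastery logs with answered / correctly answered question counts."""
    return mastery_logs.annotate(
        num_question_answered=Count("attemptlogs__item", distinct=True),
        num_question_answered_correctly=Count(
            "attemptlogs__item",
            filter=Q(attemptlogs__correct=1),
            distinct=True,
        ),
    )
```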

Screenshot
Screenshot from 2024-09-05 14-29-14

References

Fixes #8643

Reviewer guidance

Please suggest whether the reactivity needs to be improved.


Testing checklist

  • Contributor has fully tested the PR manually
  • If there are any front-end changes, before/after screenshots are included
  • Critical user journeys are covered by Gherkin stories
  • Critical and brittle code paths are covered by unit tests

PR process

  • PR has the correct target branch and milestone
  • PR has 'needs review' or 'work-in-progress' label
  • If PR is ready for review, a reviewer has been added. (Don't use 'Assignees')
  • If this is an important user-facing change, PR or related issue has a 'changelog' label
  • If this includes an internal dependency change, a link to the diff is provided

Reviewer checklist

  • PR is fully functional
  • PR has been tested for accessibility regressions
  • External dependency files were updated if necessary (yarn and pip)
  • Documentation is updated
  • Contributor is in AUTHORS.md

@github-actions bot added the DEV: backend (Python, databases, networking, filesystem...) label Aug 15, 2024
@MisRob (Member) commented Aug 19, 2024

Hey @thesujai, thanks! Regarding frontend parts,

> How do I calculate the score and no of question in the frontend

Was this a question for us? If so, we have some score calculations here

https://github.com/learningequality/kolibri/blob/develop/kolibri/plugins/learn/assets/src/views/cards/QuizCard/index.vue#L59-L87

That said, I'm not entirely sure exactly which cards you will need to work on. Please send me a link to the card component that's used for a practice quiz.

@MisRob (Member) commented Aug 19, 2024

Also, there's some guidance from @rtibbles in the comments on issue #8643, and some of it touches on frontend parts as well.

@MisRob requested a review from rtibbles August 19, 2024 13:55
@thesujai (Contributor, Author) commented
Hi @MisRob
The issue was that a learner could keep attempting a practice quiz even after all questions were complete (not a bug). But this means a new attempt_log is created for every new attempt, so a point can be reached where num_correct_attempts > num_questions, and we cannot calculate the score in that case.
I discussed this with @nucleogenesis and he suggested fetching all of the attempt logs that are correct but unique by AttemptLog.item, so I am working on that approach.
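
As a rough illustration of that approach (assuming AttemptLog lives in kolibri.core.logger.models and has item, correct, and masterylog fields; exact names may differ), the query could look something like this:

```python
# Sketch only: field names are assumed, not verified against the current models.
from kolibri.core.logger.models import AttemptLog


def num_unique_correct_items(masterylog_id):
    """Count questions answered correctly at least once, deduplicated by item."""
    return (
        AttemptLog.objects.filter(masterylog_id=masterylog_id, correct=1)
        .values_list("item", flat=True)
        .distinct()
        .count()
    )
```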

@MisRob (Member) commented Aug 20, 2024

Okay, it sounds like you've got all you need then.

@thesujai (Contributor, Author) commented
Concern:
When a quiz has more questions than the minimum required to complete it (e.g., a quiz has 20 questions, but only 6 need to be correct to mark it as complete), the progress shows as 1 (100%) after answering the required number of questions. However, if the score is calculated using the total number of questions (num_correct_attempts/num_questions), it may appear low even when the quiz is marked as complete, which could be confusing.

One approach could be to calculate num_questions and num_correct_attempts based on the criteria for completing the quiz. For instance, if a quiz is marked complete after 6 correct answers, then:

  • num_questions would be set to 6 (the number required for completion).
  • num_correct_attempts would be calculated up to that point, ensuring the score reflects only those necessary attempts.

This way, the score and progress would be more aligned, avoiding confusion when the progress is 1 but the score seems low.
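
A minimal sketch of that idea, assuming the completion requirement (the m in an m-of-n mastery criterion) is available; the names here are illustrative, not existing fields:

```python
# Sketch of the "score relative to the completion criterion" idea.
def completion_based_score(num_correct_attempts, required_correct):
    """Cap correct answers at the completion requirement so score tracks progress."""
    if not required_correct:
        return 0.0
    return min(num_correct_attempts, required_correct) / required_correct
```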

@MisRob (Member) commented Aug 21, 2024

@thesujai I am looping in @nucleogenesis here, since you mentioned you've already had some discussions regarding this together

@nucleogenesis (Member) left a comment

@thesujai the query code looks good, but I'm honestly not entirely sure how best to display this information to the user.

So I think that what we need is a design decision (@jtamiace @rtibbles) w/ regard to what we actually want to express to the user here.

We have three things:

  • Total number of questions in the Practice quiz: total_questions
  • Total number of times the user has attempted any question at all (correct or incorrect): num_attempts
  • The total number of questions the user has answered correctly: num_correct_attempts (not the number of times questions were answered correctly, but the number of questions that have ever been answered correctly)

Example (not super realistic, but for demonstration/ensuring I'm understanding):

  • Practice quiz with 5 questions: total_questions = 5
  • User answers all of those questions incorrectly 3 times each: num_attempts = 15
  • User then answers the same question correctly 5 times: num_correct_attempts = 1; num_attempts = 20
  • User answers the rest of the questions correctly in a row: num_correct_attempts = 5; num_attempts = 24
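
Purely to make these numbers concrete (arithmetic on the example above, not a proposed implementation):

```python
# Final values from the example above.
total_questions = 5
num_attempts = 24
num_correct_attempts = 5  # questions ever answered correctly, not attempt count

print(num_correct_attempts / total_questions)  # 1.0, i.e. 100%
print(num_correct_attempts / num_attempts)     # ~0.21 if attempts were used instead
```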

So with these values... what do we want to tell the user? Are these the values we expect to use here, or have I been misleading @thesujai in our discussions? 😅

@nucleogenesis (Member) commented

@thesujai after a quick chat w/ Richard I think I see the misunderstanding on my end.

A "Practice Quiz" will not allow multiple attempts at the same question within the same "Quiz Attempt" -- that is to say, when you take a "Practice Quiz" it is treated like a "Quiz" in that you're going straight through it.

Which means:

  • Mastery model can be ignored
  • % is a calculation of num_correct_attempts / total_questions because the user is expected to have attempted all of the questions exactly one time

So if you only get the data for the last "AttemptLog" for the practice quiz, then you should have the correct data there.

I'll give the code another look through w/ this in mind
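
Based on this, a minimal sketch of the agreed calculation (assuming the attempt logs of the learner's most recent try are at hand; the names are illustrative):

```python
# Sketch only: `attempts` stands for the attempt logs of the most recent try.
def latest_try_score(attempts, total_questions):
    """Score = questions answered correctly / total questions in the quiz."""
    if not total_questions:
        return 0.0
    num_correct = sum(1 for a in attempts if a.correct)
    return num_correct / total_questions
```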

@github-actions bot added the APP: Learn (content, quizzes, lessons, etc.) and DEV: frontend labels Sep 5, 2024
@thesujai marked this pull request as ready for review September 5, 2024 09:25
@thesujai (Contributor, Author) commented Sep 5, 2024

I understand why the linting failed here, but not why it passed locally.

@nucleogenesis self-assigned this Sep 11, 2024
Labels: APP: Learn (Re: Learn App: content, quizzes, lessons, etc.), DEV: backend (Python, databases, networking, filesystem...), DEV: frontend
Projects: None yet
Development: Successfully merging this pull request may close these issues: Update score on content card for practice quizzes
3 participants