SCORM Debugging Tips

Mark Statkus edited this page Feb 6, 2016 · 4 revisions

If you've integrated SCOBot into your custom SCO and are running into issues with data persistence or integrity, there are several options for isolating the issue.

Isolate the issue

Cut down all the other chatter so you can test just the specific feature that appears to be failing.

Is the SCO Terminating early?

Something is causing that to happen; often it's a button click that relocates the URL. That triggers an unload event, which in turn Terminates the session.
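One defensive pattern is to guard the Terminate call so that however many unload-like events fire (link click, refresh, window close), the session is only ended once. This is a minimal sketch, not SCOBot's internal implementation; `api` below is a stand-in mock for the LMS-provided SCORM 2004 runtime object.

```javascript
// Mock of the LMS runtime, for illustration only.
var api = {
  terminated: false,
  Terminate: function (param) {
    this.terminated = true;
    return 'true';               // SCORM returns the strings 'true'/'false'
  }
};

var sessionEnded = false;
function safeTerminate() {
  if (sessionEnded) { return 'false'; }  // session already ended; ignore repeats
  sessionEnded = true;
  return api.Terminate('');
}

// In a browser you would wire this to the unload event, e.g.:
// window.addEventListener('unload', safeTerminate);
safeTerminate();   // first call terminates the session
safeTerminate();   // a second unload-like event is ignored
```

If Terminate is firing earlier than you expect, logging a stack trace inside a wrapper like this will usually point straight at the offending click handler.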

Is data not persisting?

All too often the 'cmi.exit' type gets switched to something other than 'suspend'. Step back through the logs to find out whether that is indeed occurring. Generate enough data, and you have something you can present to the platform's support team to troubleshoot further.
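A cheap way to catch this is to read 'cmi.exit' back just before you terminate, so the value lands in your trace. This is a hypothetical sketch; `api` mocks the SCORM 2004 runtime (the real object the LMS exposes is usually `API_1484_11`).

```javascript
// Mock runtime, for illustration only.
var api = {
  data: { 'cmi.exit': '' },
  SetValue: function (key, value) { this.data[key] = String(value); return 'true'; },
  GetValue: function (key) { return this.data[key] || ''; }
};

api.SetValue('cmi.exit', 'suspend');

// ... later, just before Terminate: log the value so the sequence is recorded.
var exitType = api.GetValue('cmi.exit');
if (exitType !== 'suspend') {
  console.log('WARNING: cmi.exit is "' + exitType + '"; resume data may be discarded');
}
```

If the warning fires, something between your SetValue and your Terminate changed the exit type, and the log timestamps will narrow down what.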

Is data being lost?

Sometimes you can forget to commit the student's attempt. A bad network connection can also result in a corrupt POST; there have been a number of instances where this has occurred (hotel, airport, or other open hotspots). These often take deep testing and network logging to diagnose. It could even be caused server-side by headers, security, cache settings, etc.
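When a flaky network is suspected, never assume Commit succeeded: check its return value and the last error code. A minimal sketch of that habit, again against a mock runtime (the real SCORM 2004 API provides `Commit`, `GetLastError`, and `GetErrorString` with these shapes):

```javascript
// Mock runtime, for illustration only.
var api = {
  lastError: '0',
  Commit: function (param) { return 'true'; },   // 'false' when the save fails
  GetLastError: function () { return this.lastError; },
  GetErrorString: function (code) { return code === '0' ? 'No error' : 'General exception'; }
};

function commitWithCheck() {
  var ok = api.Commit('');
  if (ok !== 'true') {
    var code = api.GetLastError();
    console.log('Commit failed: ' + code + ' ' + api.GetErrorString(code));
  }
  return ok;
}

commitWithCheck();
```

Pairing this with the browser's network tab lets you match a failed Commit to the exact request that was dropped or corrupted.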

Approach the issue on several fronts

  1. Consider creating a QUnit test to automate the SCORM calls that mimic your use case.
  2. Use the browser's console to trigger your own SCORM calls or interact with the SCORM Runtime directly.
  3. Place console messages in your code to output the before and after logs.
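For option 2 above, you first need a handle on the runtime object. The standard discovery walk (look in the current window, then ancestor frames) can be pasted straight into the console; SCORM 2004 exposes `API_1484_11` and SCORM 1.2 exposes `API`:

```javascript
// Walk up the frame hierarchy looking for the SCORM runtime object.
function findAPI(win) {
  var tries = 0;
  while (win && tries < 10) {            // cap the walk to avoid infinite loops
    if (win.API_1484_11) { return win.API_1484_11; }  // SCORM 2004
    if (win.API) { return win.API; }                  // SCORM 1.2
    if (win === win.parent) { break; }   // reached the top window
    win = win.parent;
    tries += 1;
  }
  return null;
}

// In the console you can then poke the runtime directly, e.g.:
// var api = findAPI(window);
// api.GetValue('cmi.completion_status');
```

This lets you replay a suspicious SetValue or Commit by hand and watch the LMS's response without redeploying your content.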

SCOBot has a debug (true/false) boolean which can be turned on to increase the logging of what's occurring. This produces a lot of information and can saturate the console with logs.
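The pattern behind such a flag is simple enough to replicate in your own SCO code: gate verbose output behind one boolean so you can flip it without touching call sites. This is an illustrative sketch, not SCOBot's actual logger (its option name and internals may differ); `logged` stands in for the console so the behavior is testable.

```javascript
var settings = { debug: true };   // flip to false for production builds
var logged = [];

function debugLog(msg) {
  if (settings.debug) {
    logged.push(msg);             // stand-in for console.log(msg)
  }
}

debugLog('SetValue cmi.exit -> suspend');
settings.debug = false;
debugLog('this line is suppressed');
```

Keeping your own logging behind the same kind of switch makes it easy to capture a full trace for a support ticket, then silence it again.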

Technical Support

Once you have definitive proof your content is not the issue, you can usually engage the technical staff behind the Learning Management System. Keep in mind SCORM has been around since 2001, and AICC even before that. A large amount of content only supports the "I was here" approach: setting a status, maybe a score, and exiting. As you move into richer interactivity and data tracking, you start stressing portions of the LMS Runtime implementation. Results vary since there are so many platforms, and each has a distinct focus on competitive features that commonly (and unfortunately) are not the SCORM Runtime API. This is also why I've seen some use SCORM as a marketing tactic, while others believe it's purely a buzzword. It doesn't take much to bust an LMS at its own game; just use the QUnit tests present in this project to automate a student session and see how the LMS handles it.
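An automated student session of the kind mentioned above boils down to replaying the Initialize → SetValue → Commit → Terminate sequence and asserting every return value. The sketch below shows that shape against a mock runtime; in a real QUnit test you would load the LMS-provided API instead and wrap each step in `assert.equal(..., 'true')` inside a `QUnit.test` callback.

```javascript
// Mock runtime, for illustration only.
var api = {
  data: {},
  Initialize: function () { return 'true'; },
  SetValue: function (k, v) { this.data[k] = String(v); return 'true'; },
  GetValue: function (k) { return this.data[k] || ''; },
  Commit: function () { return 'true'; },
  Terminate: function () { return 'true'; }
};

// Replay a short student session; collect the pass/fail of every call.
var results = [];
function runSession() {
  results.push(api.Initialize() === 'true');
  results.push(api.SetValue('cmi.score.scaled', '0.85') === 'true');
  results.push(api.SetValue('cmi.success_status', 'passed') === 'true');
  results.push(api.Commit() === 'true');
  results.push(api.Terminate() === 'true');
  return results.every(function (ok) { return ok; });
}

runSession();
```

Run the same sequence against two different LMS platforms and compare which calls come back 'false': that diff is exactly the evidence a support team needs.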

Tip

The SCOverseer bookmarklet is a great way to see the student attempt values. It will actively seek out SCORM 1.2, SCORM 2004, or AICC and obtain several popular namespace values from the student attempt or CMI object, rendering them in an HTML display for your review. This is often far easier than trawling through console logs or diagnostic reports. You can obtain this bookmarklet for free at https://cybercussion.com/bookmarklets/SCORM