Copied from dmonllao/moodleinspire-python-backend#1 before this gets lost:
The current implementation would swallow all system memory if a massive dataset (many GBs) were used; data should be read in batches instead (https://www.tensorflow.org/programmers_guide/reading_data). This is not likely to happen soon, as datasets generated by Moodle will hardly reach 10 MB, but it is still something we should fix.
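For reference, a minimal sketch of what batched reading could look like using TensorFlow's `tf.data` API (the linked guide covers input pipelines more broadly). This assumes a CSV dataset with a header row and all-numeric columns; `DATASET_PATH`, `N_FEATURES`, and `BATCH_SIZE` are hypothetical names, not part of the current code:

```python
import tensorflow as tf

DATASET_PATH = 'dataset.csv'  # hypothetical path, not the real backend input
N_FEATURES = 10               # hypothetical number of feature columns
BATCH_SIZE = 1000

def parse_line(line):
    # Assumes N_FEATURES float feature columns followed by one label column.
    fields = tf.decode_csv(line, record_defaults=[[0.0]] * (N_FEATURES + 1))
    return tf.stack(fields[:-1]), fields[-1]

# Stream the file line by line instead of loading it all into memory.
dataset = (tf.data.TextLineDataset(DATASET_PATH)
           .skip(1)  # skip the header row
           .map(parse_line)
           .batch(BATCH_SIZE))

features, labels = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    while True:
        try:
            x, y = sess.run([features, labels])
            # run the training op on this batch here
        except tf.errors.OutOfRangeError:
            break  # end of file reached
```

Only one batch of parsed examples is materialised at a time, so memory use stays flat regardless of file size.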
The only problem I can think of is model evaluation, because we need to shuffle the dataset to evaluate the Moodle model using different combinations of training and test data. We could use a subset (limited to X MB) of the evaluation dataset instead of shuffling the whole dataset.
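On the evaluation point, `tf.data` can bound memory there too: `shuffle` keeps only a fixed-size buffer of examples in memory, and `take` can cap the evaluation subset, roughly matching the "limited to X MB" idea above. Again just a sketch with hypothetical sizes, reusing `parse_line` from the previous snippet:

```python
SHUFFLE_BUFFER = 10000   # hypothetical: max examples held in memory at once
EVAL_EXAMPLES = 5000     # hypothetical cap on the evaluation subset

# Note: with a buffer smaller than the file, the sample is only approximately
# random and is biased toward the start of the file.
eval_dataset = (tf.data.TextLineDataset(DATASET_PATH)
                .skip(1)
                .shuffle(buffer_size=SHUFFLE_BUFFER)
                .take(EVAL_EXAMPLES)
                .map(parse_line)
                .batch(BATCH_SIZE))
```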