
Journal

Mary Hoekstra edited this page Dec 17, 2017 · 32 revisions

November 8th

  • hand-drew storyboard detailing relationships between views
  • created fresh project and repo
  • added basic UI elements to Home View Controller (VC)
  • pressing "Take Test" brings user to pre-test survey
  • experiencing some issues with RK formatting (see Issue #1)
  • attempted to build memory test using RK (see Issue #2)
  • attempted to add another Task VC for memory test (see Issue #3)

November 11th

  • created Core Data Model with entities
  • pressing "Log Commute" brings user to New Commute VC
  • used MoonRunner tutorial to start fleshing out New Commute VC (starting, stopping, etc.)
  • created segue between New Commute VC to Commute Details VC

November 14th

  • fixed issue with start and stop button
  • integrating CoreLocation component to collect user's GPS coordinates
  • looking into modifying ORKQuestionStep to display an image as a question

November 15th

  • fixed layout issues by adding constraints (enter 5 or 10 in left and right edge boxes so label fills screen)
  • finished implementing CoreLocation functionality with help of MoonRunner tutorial
  • fixed segue...
  • some issues with trying to adapt code from tutorials where they provided mysterious starter code

TODO:

  • figure out Navigation Controllers
  • resolve RK issues
  • incorporate introduction to app, consent step, and demographics survey

November 16th

  • added MapView to Commute Details, but route won't display (Issue #4)

TODO:

  • figure out Navigation Controllers (if even necessary)
  • resolve RK issues (use value picker for answers with many questions?)
  • incorporate introduction to app, consent step, and demographics survey

November 20th

  • attempting to incorporate images into ORKQuestionStep... image still not being seen as a property. Will wait for Apple's reply on the GitHub issue.
  • may use another user's suggestion to present each page as a form with an image answer format and scale answer format, but will need to figure out how to make the image un-selectable
  • added ConsentDocument and ConsentTask

November 21st

  • changes in RK files aren't rendering (tried making labels hidden, changing label colour, etc.)
  • added an onboarding view controller that is only dismissed when user consents. Must figure out way to present it when user first opens app.

November 22nd

  • made "UserConsented" a User Default that is set to true when the user completes the consent task. If this value is false when the user opens their app, the onboarding view controller is presented. If true, the home view controller is presented.
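The launch check could look something like this minimal sketch. The key name `UserConsented` is from the journal; the `InitialScreen` enum and function names are hypothetical stand-ins for the view-controller presentation logic:

```swift
import Foundation

// "UserConsented" is the User Default described above; the enum and
// function names are a hypothetical sketch of the launch check.
let consentKey = "UserConsented"

enum InitialScreen { case onboarding, home }

// On launch: present onboarding until the user has consented, then home.
func initialScreen(defaults: UserDefaults = .standard) -> InitialScreen {
    return defaults.bool(forKey: consentKey) ? .home : .onboarding
}

// Called when the user completes the consent task.
func recordConsent(defaults: UserDefaults = .standard) {
    defaults.set(true, forKey: consentKey)
}
```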

November 25th

  • removed all traces of CocoaPods, downloaded RK, and added it manually to the project
  • changes in RK files now rendering
  • made _placeholderLabel hidden in resetLabelText function for memory test
  • can now present the home screen after the user gives consent (unfavourable delay, but it works for now)

November 26th

  • added transitions from consent task to demographics survey and from pre-test report to memory test
  • commute details screen is followed by navigation report
  • moved location request to new commute VC instead of when app opens (more user-friendly)

TODO:

  • give user notification when location services disabled
  • handle results from surveys
  • if a commute has been recorded, create memory test and notify user -> take subset of gps coordinates and present street view images from these coordinates in test

November 27th

  • trying to convert GMSPanoramaView to UIImage so it can be presented in the memory test
  • got help to do the conversion on StackOverflow, but street view image is just a grey loading screen
  • tried presenting GMSPanoramaView in a smaller subview instead and just got black loading screen
  • might need to try something else? maybe let view be full screen and overlay buttons on top?

November 28th

  • still no luck with conversion of GMSPanoramaView to UIImage...

November 29th

  • given an array of UIImages and arrays of string identifiers, the RK memory test can present the images programmatically
  • got GMSPanoramaViewDelegate working and panoramaDidMove is called, but still getting a grey loading screen as the image

November 30th

  • got Google Street View image to appear in a smaller view by adding the pano view as a subview
  • should I hack RK QuestionStep to display a view?

December 1st

  • can capture GMSPanoramaView as an image by adding it as a subview and then taking a snapshot of that subview after 2-3 seconds of delay so that the view has time to load
  • will need to convert all pano views to images in a "loading screen" before the user takes the test
  • can cover up the subview with another view and activity indicator
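The delayed-capture idea can be factored into a small helper along these lines (a sketch; in the app the closure would snapshot the pano subview into a UIImage, and the 2-3 second delay would be passed for `seconds`):

```swift
import Dispatch

// Sketch of the Dec 1 approach: give the GMSPanoramaView time to load,
// then run a capture closure (in the app, that closure snapshots the
// subview into a UIImage). The queue is injectable so this is testable.
func captureAfterDelay(seconds: Double,
                       on queue: DispatchQueue = .main,
                       capture: @escaping () -> Void) {
    queue.asyncAfter(deadline: .now() + seconds, execute: capture)
}
```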

TODO:

  • given several coordinates, loop through to generate panoviews, convert to images, and save images in array

December 3rd

  • can now loop through coordinates, presenting a panoview for each coordinate and taking a snapshot
  • images are added to array
  • home view controller now grabs user's most recent commute. If there is one, it navigates to the pre-test survey. If not, an alert pops up.
  • most recent run is fetched in memory test view controller and location objects are extracted for their coordinates (not a fan of having to fetch the last commute twice... but not sure how to pass it from home VC to memory test VC?)
  • disabled "continue" button on loading page (memory test VC) until all images have been added to array
  • made a "Test" struct with an images property. This way, the images can be added in memory test VC and used in the memory test task. This will only work for storing one test at a time however, and will not persist past this use of the app...
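A minimal version of that struct might look like this (a placeholder `Image` type stands in for UIImage, and a shared static value plays the role of the global):

```swift
// Placeholder for UIImage so the sketch stays platform-independent.
struct Image { var name: String }

// Sketch of the "Test" struct: holds the images for the current memory test.
// As noted above, a single shared value can only store one test at a time
// and does not persist between app launches.
struct Test {
    static var current = Test()   // the global instance the journal describes
    var images: [Image] = []
}
```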

TODO:

  • look for any un-tested commutes when creating the test, rather than just the most recent
  • possibly store tests as persistent entities in CoreData, rather than in a global struct
  • fix up the coordinates so the street view images are facing the right way
  • choose a random subset of the coordinates to present (right now we're using all of them)
  • add appropriate content in the consent task and surveys, and fix layout if necessary
  • clean up the UI
  • HANDLE RESULTS!!!

December 9th

  • got geocoding to work for some cases (views that faced the road are now facing the street)
  • if geocoding returns an error, heading of 0 is returned and image is displayed as is
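The fallback reduces to a small function; this sketch uses Swift's `Result` type, which is a hypothetical stand-in for the real geocoding callback:

```swift
// Sketch of the Dec 9 fallback: use the geocoded heading when available,
// otherwise fall back to 0 and display the image as-is.
func heading(from result: Result<Double, Error>) -> Double {
    switch result {
    case .success(let value): return value
    case .failure: return 0
    }
}
```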

TODO:

  • need to generate identifiers for every image used in the test

December 10th

  • added image identifiers to the Test struct, where each identifier is the coordinate of the corresponding image
  • step and response identifiers are dynamically generated during the test to be "step0", "step1", etc.
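The identifier generation is a one-liner (the function name is hypothetical; the "stepN" format is the one described above):

```swift
// Generate the step/response identifiers described above: "step0", "step1", ...
func stepIdentifiers(count: Int) -> [String] {
    return (0..<count).map { "step\($0)" }
}
```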

Questions:

  • How will we randomly choose which subset of images is used for the test?
  • Do we want a set number of images for each test? Some commutes may only have 10 coordinates collected, while others may have 100...
  • How will we generate lure images? Will the lures be based on locations the user has been, or will the same lures be used for every user? One lure per real image?

  • can generate random coordinates within a specified radius of a given coordinate (500m seems good)

  • will use these coordinates to generate lures... not perfect. One ended up inside an opera theatre.

  • should probably enforce that a certain number of coordinates are collected on a commute (10? 15?)

  • if fewer than 10 coordinates are collected in a commute, an error is displayed and the commute is not saved

  • if a commute has more than 10 coordinates, take a random subset (shuffle and take prefix)
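The lure generation and subset selection above could be sketched together like this (pure math, no CoreLocation; a simple `Coordinate` struct stands in for `CLLocationCoordinate2D`, and the 500 m radius and 10-coordinate minimum are the numbers floated above):

```swift
import Foundation

struct Coordinate { var latitude: Double; var longitude: Double }

// Lure generation: a random coordinate within radiusMeters of a centre
// point, using a small-distance flat-earth approximation.
func randomCoordinate(around centre: Coordinate, radiusMeters: Double) -> Coordinate {
    let earthRadius = 6_371_000.0
    // sqrt of a uniform variate spreads points uniformly by area over the disc
    let distance = radiusMeters * Double.random(in: 0..<1).squareRoot()
    let bearing = Double.random(in: 0..<(2 * Double.pi))
    let dLat = distance * cos(bearing) / earthRadius
    let dLon = distance * sin(bearing) / (earthRadius * cos(centre.latitude * .pi / 180))
    return Coordinate(latitude: centre.latitude + dLat * 180 / .pi,
                      longitude: centre.longitude + dLon * 180 / .pi)
}

enum CommuteError: Error { case tooFewCoordinates }

// Validation + subset selection: reject commutes with fewer than 10
// coordinates, otherwise shuffle and take a prefix.
func testCoordinates(from commute: [Coordinate], testSize: Int = 10) throws -> [Coordinate] {
    guard commute.count >= 10 else { throw CommuteError.tooFewCoordinates }
    return Array(commute.shuffled().prefix(testSize))
}
```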

TODO:

  • handle results
  • test wirelessly?

December 11th

  • testing app on phone (connected to computer):
      • scrolling works! but is inconvenient for the memory test. Should try to fix formatting.
      • time stops when screen goes black...
      • can't actually record a commute because phone isn't moving!
      • images still not being captured properly at times because of the heading changes. Need to increase delay.

  • trying to find a way to add a time limit to questions in RK; apparently you can find out how long the user took to answer each step, but that's it so far

  • trying to make UI buttons consistent with blue, rounded outline. May need to create a custom button class that mimics RK buttons, as just using ORKBorderedButton doesn't seem to work.

TODO:

  • need to add an "About" page to describe purpose of study

  • need to finalize app name, logo, and UI scheme

  • need to finalize content for consent step

  • need to handle results from surveys and memory test (store in Core Data, or send away?)

Stretch goals:

  • get map working on commute details page

Progress:

  • added About page

  • Chelsea working on logo

  • adding another form item to the memory test that lets the user rate the quality of the image. Options will be "Image didn't load", "Image not of building", and "Clear image of building" or something along those lines.

  • trying to handle the results from the memory test

  • if the user said that 2 or more images didn't load properly, want to increase the delay between presenting the view and taking the snapshot (will need to make delay a user default)
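Making the delay a User Default could look like this sketch (the key name `SnapshotDelay`, the 2.5 s starting value, and the 0.5 s increment are all hypothetical):

```swift
import Foundation

let delayKey = "SnapshotDelay"      // hypothetical key name
let defaultSnapshotDelay = 2.5      // seconds; roughly the Dec 1 delay

// Current delay between presenting the pano view and taking the snapshot.
func snapshotDelay(defaults: UserDefaults = .standard) -> Double {
    let stored = defaults.double(forKey: delayKey)
    return stored > 0 ? stored : defaultSnapshotDelay
}

// If the user reported 2+ images that didn't load, bump the delay.
func adjustSnapshotDelay(failedImages: Int, defaults: UserDefaults = .standard) {
    guard failedImages >= 2 else { return }
    defaults.set(snapshotDelay(defaults: defaults) + 0.5, forKey: delayKey)
}
```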

TODO:

  • finish handling results
  • adjust lure generation method to ensure that none of the random coordinates are actually on the user's route
  • need to fix up map and add it back to commute details page
  • add constraints such that pages render properly on different phone sizes

December 16th

  • Commute details now shows map with route!

December 17th

  • made BorderedButton class that mimics RK buttons (blue, round borders, width = 130, height = 36)
  • attempting to email results to myself so I can test a route on my phone and see the results