AR Velocity Measurement on Science Journal
2018 - 2019
In Summer 2018, the AR Boost team conducted a sprint to explore possibilities for incorporating Augmented Reality into the Science Journal app. This opened up many interesting directions, including measuring changes in height over time and measuring the volume of objects. However, the most appealing use case was tracking motion to measure phenomena such as speed and acceleration. These measures are especially appealing to science teachers who have to teach velocity, a core concept in most middle and high school physics curricula, but who do not have a good, cheap way to measure it in the physical world.
I worked as the UI/UX designer exploring this new feature.
Project UX lead
Tech & Tool
Sketch, InVision Studio, After Effects
Students need to understand velocity in order to succeed in their science classes.
Plus, students need to have fun and be engaged in order to stay motivated to succeed in their science classes.
Teachers need to teach the concept of velocity in order to meet curricular standards (e.g. Forces and Motion, NGSS)
Velocity can be unintuitive to students who are not accustomed to thinking in terms of ratios
Measuring velocity without sensors is slow and inaccurate; for example, using a ruler and stopwatches requires extensive setup and multiple people.
Sensors for measuring velocity can be very expensive; for example, the Vernier Photogate requires two $49 devices to conduct a single velocity experiment.
3D object tracking is not an option yet in AR libraries, so users can't start automatically tracking any object in the environment.
Utilize the standard AR libraries on Android (ARCore) and iOS (ARKit) to measure and graph the velocity of an object, using calculations based on frame-to-frame differences. Aim to provide a free, easy solution for measuring velocity in science classrooms, as well as a reliable way to document and analyze these measurements.
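At its core, this approach is a finite difference: take the tracked marker's position in consecutive camera frames and divide the displacement by the elapsed time. A minimal sketch of that calculation, in plain Python with hypothetical inputs (in the app, the positions would come from ARCore/ARKit pose data):

```python
import math

def velocities(samples):
    """Estimate speed between consecutive frame samples.

    samples: list of (timestamp_s, (x, y, z)) tuples, e.g. the tracked
    marker's center position per camera frame, in meters.
    Returns a list of (timestamp_s, speed_m_per_s) points for graphing.
    """
    out = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order frames
        dist = math.dist(p0, p1)  # straight-line displacement in meters
        out.append((t1, dist / dt))
    return out

# A marker that moves 0.5 m along x over 1 second:
print(velocities([(0.0, (0, 0, 0)), (1.0, (0.5, 0, 0))]))  # [(1.0, 0.5)]
```

In practice the per-frame estimates are noisy, so the app would likely smooth them (e.g. with a moving average) before graphing.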
2D Marker for 3D Tracking
Enable users to track a 2D target through 3D space and record its changes in velocity. Also allow users to record snapshots and graphs equivalent to the existing sensor measurements in Science Journal. One of our goals is to attain measurement accuracy high enough for educators to trust the tool and spread the word to other classrooms.
Boost the Interest
Use this new feature to spark students' curiosity about augmented reality and the capabilities of computer vision and machine learning.
AR Boundary Study & Principle Establishment
Accuracy - AR measurement of velocity may not be accurate enough to meet classroom needs.
Usability - AR detection might be accurate at short distances and in controlled contexts, but might not work well in actual classroom environments.
Device limitations - performance may be significantly worse or simply unavailable on lower-end devices, which restricts usage.
Classroom limitations - many classrooms don't allow phones, so performance needs to be measured on tablets and Chromebooks.
Integration complexity - ARCore and ARKit are new libraries for Science Journal and fairly new altogether, so there may be unexpected complications in using these APIs.
Tracker Image & Metrics (ARCore + Vuforia)
I explored various candidate tracker images, using both the Vuforia developer portal and the Android ARCore SDK to rate them, and found an image tracker that works well on both platforms.
*Intentional blur on confidential materials
We tested various tracking metrics for velocity tracking (e.g., at what point does tracking stop?).
Some points of note:
An image needs to take up at least 20% of the screen to begin tracking
An image needs to take up at least 5% of the screen to stay tracking
The entire image needs to be on the screen to begin tracking
ARCore needs to estimate the image's physical size to begin tracking. If a user is stuck on “Detecting Image” (the image has been seen, but ARCore does not yet know its size), it helps to move the phone around so the camera can see the surrounding environment, and to move the phone away from and back toward the tracker so ARCore can estimate depth.
At least 50% of the image needs to be on the screen to stay tracking
If the target is moving faster than 5 m/s (about 11 mph), ARCore will stop tracking
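The thresholds above can be summarized as a simple state check that decides whether tracking begins or continues. This is a hypothetical sketch for illustration only (the real logic lives inside ARCore); the fractions and speed limit are the empirical values listed above:

```python
def can_track(screen_fraction, visible_fraction, speed_m_s, already_tracking):
    """Apply the empirically observed ARCore tracking thresholds.

    screen_fraction: portion of the screen the image occupies (0..1)
    visible_fraction: portion of the image visible on screen (0..1)
    speed_m_s: estimated target speed in meters per second
    already_tracking: whether the image is currently being tracked
    """
    if speed_m_s > 5.0:  # ~11 mph: ARCore loses the target
        return False
    if already_tracking:
        # Staying tracked: >= 5% of the screen, >= 50% of the image visible
        return screen_fraction >= 0.05 and visible_fraction >= 0.5
    # Beginning tracking: >= 20% of the screen, the whole image visible
    return screen_fraction >= 0.20 and visible_fraction >= 1.0

print(can_track(0.25, 1.0, 1.0, False))  # begins tracking: True
print(can_track(0.10, 0.6, 1.0, True))   # stays tracking: True
```

The asymmetry (hard to acquire, easier to keep) shaped our guidance to users: start close to the marker with the whole image in frame, then move freely.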
3D Object Tracking - This involves tracking an actual object through space rather than tracking a 2D marker. This is a feature being worked on for the next ARCore release in 2020. However, given that it's not currently available in the library, it's not feasible for us to implement until then.
Custom Trackers - We considered allowing users to generate trackers of their own choosing. However, this would have required building a tool to evaluate tracker efficacy, and classrooms might have had to iterate through many tracker designs before finding a suitable one. It is much easier to provide a tracker that we know works.
Remember the Constraints
Intuitive & Natural Steps
Since our primary use case has the user holding the mobile device with at least one hand while observing, tracking, and recording, the small screen offers very limited room for visual affordances. The design should prioritize the layout's usability and provide clear visual affordances.
Augmented reality does not fit traditional interaction patterns and is still new to our users. The design should consider the interaction flows carefully so that users can learn the pattern quickly and correctly.
Considering the Next Steps
Keep the Consistency
The velocity measurement is our first attempt at combining AR with our note-taking features. We have many other AR-related ideas that could be added to the app in future development, so the design should be flexible enough to accommodate upcoming features.
Though creating a UX pattern for a mixed-reality feature is a new and fairly unique practice, we should keep in mind that maintaining the product's overall consistency is important: the UI and interactions should be as harmonious as possible, so the new feature does not feel like a jarring outlier.
*Simplified details on confidential materials
I worked through rounds of design explorations and reviews. Having more options allowed us to think about and discuss a variety of possibilities; the final direction may be a hybrid based on these discussions.
Some questions we keep asking ourselves:
How much info does the graph need to show?
Long sparkline vs. graph chip?
Should we plan for more actions now, or choose an option that is ideal for the current scope and change the interface later when implementing future tools?
Option 1 - Inserted FAB
The FAB sits in the action area, which is not an accepted use of the FAB (we could change the styling if this direction is preferred)
Live graph is much smaller
Leaves no room for future actions (such as flag, etc)
Option 2 - Compact Action Area
FAB as intended, but takes up more space
Spark line of live graph allows for more space
Leaves no room for future actions
Option 3 - Recording Drawer
Takes ¼ of screen space
Slightly different from action area elsewhere
Allows for multiple actions while seeing graph similar to sensor cards
Could possibly allow user to expand for full graph
Option 4 - Action Area W/Recording Graph
Graph consistently at the bottom with action buttons above
Visually allows for more to show from camera while still allowing for multiple actions/room to grow
Option 5 - Data Switch
Maximum use of screen for the camera
Allows you to toggle for the data, graph would appear from the bottom
Option 6 - Data FAB
Works well with action area designs in the rest of the app
Assumes that “adding a flag” would be a feature -- but until then may look off balance with only 2 actions
Small graph chip, but could possibly allow tapping for expansion?
After rounds of discussion about each option's pros and cons, we established the treatment with the following decisions:
We decided not to let the recording graph take up too much screen space, so that users can focus on the physical motion. The data appears as a small graph chip, which could allow tapping to expand in later implementations
Keep the bottom action area consistent, both functionally and visually
Keep the data chip at the bottom to keep the actions within thumb reach when the device is held in one hand (the most common use case)
Use the tinted record/stop button in the bottom dock to give users a clue about this primary action