Brief
Develop a mixed reality hybrid (people both in person and remote) design studio environment to enhance the studio learning experience; assume that mixed reality (MR) headsets are as ubiquitous as smartphones during this time. Focus on one aspect of the studio experience.
Starting Off
Thinking a decade ahead and designing for a technology that doesn’t exist yet is going to be a mental hurdle throughout this project. I have no idea what will change in the studio experience over the next 10 years, but I can at least try to find struggles that will persist in a hybrid (in-person and remote students combined) environment regardless of technology.
Below is a map of key issues and sub-issues related to having in-person and remote students together in a studio setting, as well as a connection map showing how remote students currently sit on the “outer fringes” of connectivity compared to those on campus working together in an actual studio:
Personas
I interviewed some of my friends at CMU who are currently studying remotely in a studio-based major. I found that their productivity was helped or hurt immensely by the environment they worked in: one friend mentioned that moving to a different room partway through the semester was essential to his concentration.
As I’m writing this, I’m thinking about Zoom fatigue and how a day’s worth of online classes can feel like a week. My head feels like an anchor two hours into a studio class, with my headphones weighing me down and my eyes hurting from staring at a screen for that long. I often just want to take a nap after a long online class even if I got a great night of sleep.
Zoom fatigue may be a huge reason why remote students are so hesitant to connect with their in-studio peers and even with other remote students. I wonder if I can create an MR experience that enhances hybrid socialization without adding fatigue.
Workspace Documentation
Since I am on campus this semester, I have separate workspaces at home and in studio. Each workspace has unique environmental factors that afford different styles of work.
Studio desk:
- Desk height: easy to get out of chair to socialize or grab tools
- Lighting: great lighting esp. during the daytime
- Screens: only have a laptop, so I don’t do much digital work here
- Cleanliness: generally clean
I save my studio space for working on physical models and using physical prototyping tools that require more desk space.
Home desk:
- Desk height: easy to get out of chair to socialize or grab tools
- Lighting: great lighting esp. during the daytime
- Screens: giant second monitor
- Cleanliness: very messy except for small area around laptop
I do most of my digital work at home.
Analyzing my two workspaces has made me realize how small differences between spaces (for example, a desk height that makes it hard to get out of the chair) trickle down to every aspect of how a space gets used, and cause me to use each space for a different purpose.
Above is how my screen is typically laid out, depending on if I’m using my laptop by itself (left) or with an external monitor connected (right).
Above is a size comparison between my laptop and the external monitor. When I work with both, I can have design work running on the larger screen and a social app like Discord or Zoom on the laptop screen. However, if I am only working on my laptop (like many of my peers), these social apps get pushed behind work in progress, making it harder to work and interact at the same time. Mixed reality could project extra virtual screens to make this possible, so that is something I am considering at the moment.
Early Prototyping
AR Sketch Models
Our class used Reality Composer on iPads to quickly prototype some MR interactions using AR. Here are a few I came up with based on changing environmental factors and giving remote students a presence:
Interaction 1: Remote students can be represented by avatars with expressions. A student can chat with another student through this avatar even if one of them is not physically present.
Interaction 2: Extra virtual screens to work with (the colored rectangles).
Interaction 3: Remote students can “sit and work” at empty desks as life-sized avatars. Tapping on an avatar brings up what that student is working on.
Interaction 4: If someone needs to focus on work, they can create a veil around their workspace to visually (and maybe aurally) block out others. The veil would be visible not only to the user who put it up but to nearby students as well.
Storyboarding
Feedback on storyboard:
- The overall idea seems like a good start, but the interactions are still at a surface level and too vague.
- What is the context (work, social, collaboration, etc)? Narrow down a focus.
- Is the tapping interaction necessary? Wouldn’t someone’s workspace always be visible if they were physically working in studio?
- If a person already has an avatar present in the classroom, would a 2D video call seem out of place?
Self-reflection:
As the prevalence of digital media in our physical environments increases daily, what is the role and/or responsibility of designers in shaping our environments?
Digital media is both a blessing and a curse in everyday life, and a designer has to seriously reflect on this balance for every interaction they design for digital consumption. On one hand, experiences can be augmented digitally in irreplaceable ways (for example, being able to learn, work, and socialize while staying safe during a certain global pandemic). On the other hand, a reliance on digital experiences “cheapens” the value of tactile or social interactions and further feeds the pressure to digitize everything so as not to look outdated.
As an example, in an HCI class I took this semester, we were tasked with designing a digital dashboard for students to give feedback to presenters speaking live in front of a classroom. Instead of aiming to make the dashboard aesthetically pleasing and something users would focus on, we were actually trying to design the experience to drive users AWAY from the screen whenever possible so they could pay attention to what was happening in the physical world.
Developing a Focus
Due to pandemic-related restrictions on technology lending at our school, I was unable to keep using the iPads I had used to make the quick sketch prototypes above. Reality Composer, the AR prototyping app I was using, only worked on iOS devices, and I did not have one to substitute for the iPad.
Since my idea depended heavily on having abstracted, life-sized 3D avatars within a real-life space, I wasn’t sure how to tackle it without access to AR creation software, so I decided to talk with the professors and TAs about better ways to approach making these avatars.
We discussed how the project is set 10 years in the future and assumes that mixed reality will be far more advanced than the examples we have today. Under that assumption, realistically styled avatars would move and look much more natural. Additionally, as I found out later on, there were ways to abstract an avatar besides just its form.
We also talked about slightly tweaking the personas my project was based on. I created them at the beginning with current technological problems in mind, namely that the personas had varying access to fast internet and to more or less powerful MR headsets depending on socioeconomic status. For simplicity’s sake, we decided to assume that this problem would be lessened in the future and that the gap between high-end and low-end technology would be less noticeable.
Peter and Daphne (the professors) also suggested I take a more in-depth look at Spatial and Gather.town, two telepresence-related platforms that approach avatars and virtual spaces from opposite ends of the spectrum.
I watched some videos of people using Spatial and played around with Gather.town myself, and here is what I found:
Spatial, developed for virtual- and mixed-reality devices, creates realistic avatars from 2D pictures of a user’s face; an avatar is shown from the head down to the hips, with articulating hands and arms. Spatial leans on the realism of its avatars in place of video calls. Gather.town, developed for laptops and desktops with 2D screens, places each person in a highly abstracted 2D room on screen. When a user “walks” over to another person in this space, a video call and their voice gradually fade in, simulating walking up to someone in real life. Gather.town uses avatars only to represent where a person is in the space and keeps video and voice calls as the primary form of person-to-person interaction.
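To make sense of that proximity model for myself, I sketched out how a Gather.town-style fade might work in code. Everything here (the Participant shape, the radii, the linear falloff) is my own assumption about the general technique, not Gather.town’s actual code or API:

```typescript
// Minimal sketch of a proximity-based fade (all names, radii, and the linear
// falloff are assumptions, not Gather.town's actual implementation).
interface Participant {
  id: string;
  x: number; // position in the 2D room, in tile units
  y: number;
}

const FULL_RADIUS = 2;   // within this distance, audio/video is at full strength
const CUTOFF_RADIUS = 6; // beyond this distance, the other person is silent and hidden

// Returns 0..1: how loud (and visible) another participant should be for "self".
function proximityVolume(self: Participant, other: Participant): number {
  const distance = Math.hypot(other.x - self.x, other.y - self.y);
  if (distance <= FULL_RADIUS) return 1;
  if (distance >= CUTOFF_RADIUS) return 0;
  return 1 - (distance - FULL_RADIUS) / (CUTOFF_RADIUS - FULL_RADIUS);
}

// Example: as I "walk" toward a classmate, their call gradually fades in.
const me: Participant = { id: "me", x: 0, y: 0 };
const classmate: Participant = { id: "classmate", x: 5, y: 0 };
console.log(proximityVolume(me, classmate)); // 0.25 (still faint from across the room)
```

The interesting design decision is the falloff curve itself: a hard cutoff would feel abrupt, while a gradual fade is what creates the sensation of walking up to someone.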
Refined Storyboard
To narrow down the focus so I could storyboard my sketch video, I listed aspects of an in-person, social interaction in studio:
I narrowed my focus to in-studio clustering: when one or more students go over to another student’s desk to look at their work or just to talk. The picture above on the right is a quick-and-dirty storyboard of an idea similar to my first storyboard, but here, people in studio interact with the avatar directly. From a remote user’s perspective, an AR status icon on their headset lets them know when in-person classmates are visiting their avatar. In this idea, glowing clusters of dots can also pop up anywhere in the user’s space, letting them know that a cluster is forming somewhere in the classroom.
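To make the “cluster forming” cue a bit more concrete for myself, here is a rough sketch of how the system might decide that a cluster exists at a desk: simply count how many people are standing within some radius of it. The desk IDs, radius, and threshold below are hypothetical placeholders:

```typescript
// Hypothetical sketch: a cluster "forms" when enough people stand near the same desk.
interface Point { x: number; y: number } // studio floor coordinates, in meters

const CLUSTER_RADIUS = 1.5; // how close someone must be to a desk to count (assumed)
const MIN_PEOPLE = 2;       // minimum visitors before the glowing dots appear (assumed)

function clusteredDesks(desks: Map<string, Point>, people: Point[]): string[] {
  const clusters: string[] = [];
  for (const [deskId, desk] of desks) {
    const nearby = people.filter(
      (p) => Math.hypot(p.x - desk.x, p.y - desk.y) <= CLUSTER_RADIUS
    );
    if (nearby.length >= MIN_PEOPLE) clusters.push(deskId);
  }
  return clusters;
}

// Example: two classmates have wandered over to desk-07, so it lights up for remote students.
const desks = new Map<string, Point>([
  ["desk-07", { x: 0, y: 0 }],
  ["desk-12", { x: 4, y: 0 }],
]);
const people: Point[] = [{ x: 0.5, y: 0.3 }, { x: -0.4, y: 0.6 }, { x: 4.2, y: 0.1 }];
console.log(clusteredDesks(desks, people)); // ["desk-07"]
```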
When I talked to Davis, a TA, he suggested that I emphasize the remote student’s side of the experience more than the in-studio side, and noted that my idea seemed to have a lot to do with augmenting the feeling of a limited space. We also talked about how the clusters of dots floating around the room were a unique way of expressing space, but might be distracting if they were constantly in a user’s field of view.
Through talking with the TAs and professors, I decided to narrow the focus of my interaction to solving two issues:
- The spatial disparity between studio and home environments
- Human fatigue when using technology for extended periods.
From there, my final storyboard homed in on one idea: how could you replicate in-studio clustering within the spatial limitations of a remote work environment?
When an in-person student approaches a remote student’s avatar to talk, the remote student hears that person’s voice coming from a particular direction, prompting them to turn their head toward the sound; the in-person student’s avatar then pops up and the conversation begins.
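As a rough illustration of that directional cue (my own sketch of the math, not a spec for any particular headset), the system could take the angle between the remote student’s facing direction and the visitor’s position around the virtual desk and map it to simple equal-power stereo panning:

```typescript
// Sketch: map a visitor's direction, relative to the remote student's head, to stereo gains.
interface Vec2 { x: number; y: number } // top-down floor-plane vectors in the student's room

// Signed angle (radians) from the head's facing direction to the visitor; positive = to the right.
function azimuth(headFacing: Vec2, toVisitor: Vec2): number {
  let angle = Math.atan2(headFacing.y, headFacing.x) - Math.atan2(toVisitor.y, toVisitor.x);
  if (angle > Math.PI) angle -= 2 * Math.PI; // wrap into [-PI, PI]
  if (angle < -Math.PI) angle += 2 * Math.PI;
  return angle;
}

// Equal-power pan: a voice off to the right plays mostly in the right ear, prompting a head turn.
function stereoGains(angle: number): { left: number; right: number } {
  const pan = Math.sin(angle);                   // -1 (hard left) to +1 (hard right)
  const theta = ((pan + 1) / 2) * (Math.PI / 2); // 0 to PI/2
  return { left: Math.cos(theta), right: Math.sin(theta) };
}

// Example: the student faces +x and a classmate approaches from their right (-y in this convention).
console.log(stereoGains(azimuth({ x: 1, y: 0 }, { x: 0, y: -1 })));
// ≈ { left: 0, right: 1 }: the voice comes from the right, so the student turns that way.
```

A real headset would presumably use full spatial audio rather than simple left/right panning, but even this crude version conveys “someone is over there.”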
With this interaction, I approached the problem of a remote student’s technology fatigue by making their main user interface an AR overlay: a small icon of the room their avatar is in, which would stay pinned to the top-right corner of their view no matter where they looked. This way, they could always be aware of a cluster forming somewhere in the room, while the interface notifying them stayed in their peripheral vision only.
Sketch Video Explorations
While working on my sketch video, I was torn between two gestures for one feature: when a user reacted to the AR icon in the top right of their view, a 3D minimap of the studio would pop up to let them “travel” to any desk:
In the left sketch, the minimap pops up in the air, oriented toward the user. In the right sketch, it sits on the desk, which is easier to use and requires less head movement, but has to be overlaid on top of whatever is already on the desk.
Ultimately, I decided to place the minimap on the desk, since dragging a desk down from a map floating in the air was fairly complicated (the user would need to look up to find the map, raise their arm to pick a desk, then lower their head and arm to place it).
Although my interaction was based on life-sized avatars, I also considered being able to shrink them down to desk size. I mocked up what it would look like to scale some avatars and a desk to various sizes and viewpoints using physical objects at home:
I wasn’t sure how to present the avatars initially, given that lifelike representations of humans often fall into an “uncanny valley” of creepiness. However, I tried a simple overlay and filter in After Effects and was pleased with the result:
I sketched out certain interactions to visualize what they might look like before filming. In the picture above, I was sketching a remote student’s point of view while interacting with holograms, and I thought of another way to convey space besides the literal minimap of the studio: in addition to lifelike avatars you could talk with if they were close enough, the MR headset would automatically place small desks with very simple avatars on flat surfaces in the user’s field of view, giving the impression of space. Ultimately, I did not pursue this idea due to time constraints, but I felt it was an interesting method that could give the illusion of a larger studio without being too fatiguing to look at constantly.
Final Sketch Video
Above is a sketch video demonstrating the features of the Cluster interaction from both a remote student’s and an in-studio person’s point of view.
Self-Reflection
How were the skills you developed in the first project similar and/or different from the second project? What is your understanding of the role of an Environments designer?
Unlike the first project, the second project was far from grounded in reality. We were designing for a technology that is in its infancy today, but imagining it as a ubiquitous technology a decade from now. Our first project, in comparison, was very much restricted to present-day technology.
In that sense, the hard skills needed to visualize a solution for this project were much more scattered, and given our timeframe, learning mixed-reality development outright was just not possible. In comparison, designing an exhibit was more straightforward, and the steps needed to communicate an idea were much more set in stone (e.g., floor plans and elevations, 3D SketchUp models, storyboards that were easier to visualize).
However, when we came together to watch everyone’s sketch videos, it was clear that we were able to communicate future technology using existing methods. In fact, as I was writing this reflection, I came across a video by Microsoft visualizing a mixed reality experience about five years before HoloLens 2 was released. Seeing how they, too, relied on ragtag methods to demonstrate a future technology was surprising and made me prouder of the work I made for this project.
An Environments designer must always consider the way a human interacts with the elements around them, whether that is a museum space or other people in a social setting. Environments design is also about more than pushing out a perfectly polished final model, video, or other visualization; those are just tools to communicate an idea or interaction, which is the most important part of design.