Evolution of the TVs (Ep. 3)

InterDigital at CMU
Jul 10, 2021

Hi everyone, welcome back to another episode of the Future of TV! Summertime flies by quickly: we only have three more weeks left to create a final prototype of what we envision our future TV experience to be. Through the last couple of weeks of continuous testing and iterating, we have established a solid vision for what our end prototype will look like, so we are very excited to tell you about our plan!

Since we talked to you last time, we have finished our first round of prototype 3 testing using Black Mirror: Bandersnatch. We tested adapting content based on the user’s emotional input and attention level, as well as giving users affordances to remind them when their emotional inputs were registered. Our findings showed that users have a much clearer preference for how they want the content to adapt to their varying attention levels, but are less certain about whether they want automated content adaptation based on their emotional input. These findings also led us to extend our prototype 3 testing:

Attention Level:

  • Users prefer to pause the content while they are performing tasks that require a mid to high level of attention.
  • Users prefer to have the video content switch to audio narratives when they perform tasks that require a low level of attention.

During each testing session, we sent out three different text messages that required different levels of cognitive load from users (click here to get a refresher on how we designed and tested prototype 3). While users were replying to the messages, we would either pause the content, loop the scene they may have missed, or switch the content to an audio-based narrative.

Users generally responded positively to the audio-based narrative when doing tasks that need less attention, like checking whether tea or food is ready, and preferred to pause the content when performing tasks that require a higher level of attention, like going to the bathroom or making tea.
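To make this rule concrete, here is a minimal sketch in Python. All names here are hypothetical (in our sessions the moderator applies the adaptation by hand); the sketch just states the preference our prototype 3 findings point to.

```python
from enum import Enum
from typing import Optional


class AttentionDemand(Enum):
    """Rough cognitive load of the secondary task the viewer is doing."""
    LOW = 1    # e.g. checking whether the tea is ready
    MID = 2    # e.g. replying to a text message
    HIGH = 3   # e.g. leaving the room to make tea


class Adaptation(Enum):
    CONTINUE = "keep playing as normal"
    AUDIO_NARRATIVE = "switch the video to an audio-based narrative"
    PAUSE = "pause, or loop the scene the viewer may have missed"


def adapt_content(task: Optional[AttentionDemand]) -> Adaptation:
    """Adaptation rule suggested by our findings: audio narration for
    low-attention tasks, pause for mid/high-attention tasks."""
    if task is None:
        return Adaptation.CONTINUE
    if task is AttentionDemand.LOW:
        return Adaptation.AUDIO_NARRATIVE
    return Adaptation.PAUSE
```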

Emotions/Mood:

  • Users continue to have mixed feelings about whether they want the content to adapt automatically based on their emotional input.

We continued to get the same feedback as in the last sprint: some users prefer the automated adaptation for its convenience, but at the same time they expressed concerns about data privacy.

Idea: further test the level of control users prefer by extending prototype 3 with an A/B test.

Affordance:

  • Users didn’t have much feedback on the emoji affordance we provide during the session.

Idea: incorporate more follow-up questions to gauge users’ perceptions of and attitudes toward the affordances.

From here, we quickly iterated on our design based on these findings and decided to extend prototype 3 testing with an A/B test to further gauge how involved users want to be in controlling the adaptations.

With everything else in the test kept the same, we want to use this A/B test to see whether users want control over their experience and, if so, how much.

Our A/B testing plan

Group A users will have full control over the content adaptation: they choose the type of experience they want before the session starts and log their emotional input periodically during the session. Group B users, on the other hand, have no control over the content adaptation. They will answer a pre-study questionnaire that helps the moderator choose the main plotline, and during the session the moderator will monitor and observe their facial expressions and adapt the content based on those observations.

Aside from these modifications, we have also created a post-study questionnaire that we will send to all participants two weeks after their session in order to quantify their engagement with the content we showed them.

In the meantime…

To keep up the momentum and make sure we end up with a final prototype (Prototype 4) that all of us are proud of, we are developing it in tandem with this testing. As we mentioned last sprint, each sprint brought us closer to our final vision, but we are still limited by pre-existing adaptable content. So… we decided to… create our own adaptable content!

The Remake of Little Red Riding Hood

We decided to create our own show based on a classic fairytale, Little Red Riding Hood. The narrative we all know serves as the base storyline, and from it we created three other versions of the story: a happy, a scary, and a tragic one (spoiler alert: in one of them the wolf befriends Grandma and proposes marriage to Little Red Riding Hood). All four versions share the same beginning, but we made sure the story can shift from one version to another at certain points, so it will be easy for us to adapt the content in real time following users’ emotional responses.

As for the tool, we landed on Unity, since it gives us more control over the scenes we want to use: manipulating the plot, weather changes, and stylistic elements like color and background music.
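To give a sense of how the branching content could be organized, here is a minimal sketch in Python with purely hypothetical names (our actual content is built in Unity, so this is an illustration of the data model, not our implementation): story segments carry the scene parameters mentioned above, plus shift points where the experience can move between the happy, scary, and tragic versions based on the viewer’s registered emotion.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Segment:
    """One chunk of the story, with example scene parameters to set."""
    name: str
    version: str                    # "base", "happy", "scary", or "tragic"
    weather: str = "clear"          # example scene parameter
    color_tone: str = "neutral"     # example stylistic parameter
    music: str = "calm"             # example background-music cue
    # Emotion -> name of the segment to shift into at the end of this one.
    shift_points: Dict[str, str] = field(default_factory=dict)


# Hypothetical slice of the story graph: a shared opening that can branch.
story = {
    "opening": Segment(
        name="opening", version="base",
        shift_points={"joy": "picnic_with_wolf", "fear": "dark_forest"},
    ),
    "picnic_with_wolf": Segment(
        name="picnic_with_wolf", version="happy",
        weather="sunny", color_tone="warm", music="playful",
    ),
    "dark_forest": Segment(
        name="dark_forest", version="scary",
        weather="stormy", color_tone="desaturated", music="tense",
    ),
}


def next_segment(current: Segment, emotion: str) -> Optional[str]:
    """Return the segment to shift into for this emotion, or None to
    stay on the current version of the story."""
    return current.shift_points.get(emotion)
```

In a session, whoever drives the adaptation (the moderator for now, potentially the system later) would look up the viewer’s registered emotion at each shift point and jump to the matching segment, or simply continue along the current version if nothing matches.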

Based on what we find to be the most effective method in this new round of prototype 3 testing, we will keep iterating on our final prototype, continuing to incorporate both attention level and emotional input. We will get back to you with our findings and progress in the next post!

Signing off,

Team Patent Pending
