Planet of the TVs (Ep. 2)

InterDigital at CMU
5 min read · Jun 27, 2021


Good to have you back for another episode of The Future of Digital Television. The last time you heard from us, we were finalizing user testing of Prototype 1 and in the midst of designing Prototype 2. Since then, we’ve wrapped up user testing for both prototypes and synthesized our findings into a clearer picture of how to iterate on our solutions and increase fidelity as we move forward. We also took the opportunity to pause as a team, reflect on our progress, and revisit our collective end goals to form a more detailed image of what we expect our final prototype to look like by the end of the summer. And so, like The Backyardigans, we dove head-first…

Into The Thick of It

After lengthy discussions and collaboration, we made several decisions as a team about:

  • what we envisioned our final design solution would look like in a future real-world application,
  • the scope of our final prototype and the experience we planned to design for,
  • and the most important interactions of that experience that we aimed to test with users.

Envisioning a System That Lives Everywhere and Nowhere

It’s no secret that technology is EVERYWHERE. As we found in our early user research, digital content permeates our lives in one way or another. No, our many household and personal devices are not necessarily always recording us, but they are always listening, ready for a command that would help them serve us better. Based on current market trends hinting at a future of ubiquitous computing and decreasing consumer concern about data privacy, we expect our devices to become ever more involved in our lives for the sake of personalization and ease of access. This led us to define a realistic vision of the Synthetic Shape Shifter (Context-Aware TV) and the mechanics required to make it work.

How It Works

This vision also helped us define the scope of our final prototype and the user experience we wanted to design. Based on what we had learned thus far from our early research and preliminary user testing, we decided to prioritize the personal viewing experience with content that adapts based on a user’s context. The variables of a user’s context include:

  • Emotions/Mood
  • Attention Level
  • Viewing History/Preferences

And the ways in which content would adapt accordingly include (a rough sketch of how these pieces might fit together follows the list):

  • Plot Changes
  • Modality Changes (e.g. visual narrative → audio narrative)
  • Stylistic/Vibe Changes (e.g. colors, background music, setting)
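
To make this mapping concrete, here’s a minimal sketch (in Python) of how context could drive adaptation. Every name here (UserContext, pick_adaptation) and every rule is a hypothetical illustration on our part, not code from the prototype; as you’ll see below, our prototypes simulate this behavior with pre-existing adaptable content rather than implement it.

```python
# Hypothetical sketch only: names and rules are illustrative, not part of any
# prototype. They show how the context variables above could map to the
# adaptation types above.
from dataclasses import dataclass, field

@dataclass
class UserContext:
    emotion: str                  # e.g. "happy", "sad", "tense"
    attention: str                # "low", "medium", or "high"
    history: list = field(default_factory=list)  # viewing history/preferences

def pick_adaptation(ctx: UserContext) -> str:
    """Choose which kind of adaptation to apply for the current context."""
    if ctx.attention == "low":
        # Distracted viewer: change modality (visual narrative -> audio narrative).
        return "modality_change"
    if ctx.emotion in ("sad", "tense"):
        # Strong emotional signal: steer the plot toward a different branch.
        return "plot_change"
    # Otherwise, make lighter stylistic/vibe tweaks (colors, music, setting).
    return "style_change"

print(pick_adaptation(UserContext(emotion="sad", attention="high")))  # plot_change
```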

Testing the Untestable

Now, all of these plans, visions, and ideas sounded great in theory. But putting them to the test by making them “reality” was where the real challenge began. How could we test an experience that doesn’t exist, with technology we don’t have access to, at a fidelity that would put users in the right mental space for us to examine their reactions?

Prototype 2 helped us achieve this “reality”, but only to a certain extent (Click here to get a refresher on how we designed and tested Prototype 2). After wrapping up testing, our findings showed that while we were closer to our vision than before, we remained limited by the pre-existing adaptable content available to us. These were our findings and accompanying ideas for the next iteration:

Most users are not aware that the content is adapting based on their emotional input.

  • Most of them think the content adapts seamlessly
  • Some expect the content to be even funnier or more surprising based on their emotional responses to prior content

Users have mixed feelings about whether they want the content to adapt automatically.

  • Some don’t want the content to adapt automatically because they want content with more emotional contrast, rather than content that is consistently sad or happy (this especially applies to users whose facial expressions show less emotional variation)
  • Idea: Add an affordance in the next prototype

Users could envision this being used with the TV shows they frequently watch (e.g. The Office).

  • This finding may be biased because we showed them a comedy
  • Idea: Use a different genre for the next prototype

From here, we used these findings to quickly iterate and move on to Prototype 3, which looks very similar to its predecessor except for a few elements:

  1. We kept the “Choose-Your-Own-Adventure” concept but switched to Black Mirror: Bandersnatch in order to test a different genre of content and see if this impacts user reactions to its adaptability in any way. Since it is more of a drama/thriller, it would also allow us to examine a wider range of user emotions and plot adaptations.
  2. We incorporated an affordance into the prototype that would let users know what emotion has been registered by the system at the moment when the content is adapting accordingly.
  3. Aside from emotional input, we also incorporated a new variable to test: attention level. We did this by sending users text messages that required a low, medium, or high cognitive load to read and respond to while watching Bandersnatch. While their attention was turned away from the content, we would either pause the content until their attention returned, replay the scene they may have missed, or switch the content entirely to an audio-based narrative that would describe the scene audibly (see the sketch after this list).
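
To make that third element concrete, here’s a rough sketch of how the pause/replay/switch responses could be written as rules. The function name and the time thresholds that pick between the three responses are illustrative assumptions on our part, not the exact procedure we followed during the sessions.

```python
# Illustrative only: the thresholds below are assumptions, not the actual
# procedure used during testing.

def handle_distraction(seconds_away: float, scene_start_s: float) -> str:
    """Decide how playback reacts once the viewer's attention leaves the screen."""
    if seconds_away < 5:
        # Brief glance away: pause until their attention comes back.
        return "pause"
    if seconds_away < 30:
        # They likely missed part of the scene: replay it from the start.
        return f"replay_from:{scene_start_s:.0f}s"
    # Long distraction (e.g. answering a text): switch to an audio-based
    # narrative that describes the scene audibly so the story keeps moving.
    return "switch_to_audio_narrative"

for away in (3, 12, 45):
    print(away, "->", handle_distraction(away, scene_start_s=431))
```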

Now We DO Know Where We’re Going…

(to the theme of Into The Thick of It - The Backyardigans)

Now that we’ve established a solid vision for our end prototype and testing goals, the ideas have been flowing swiftly and endlessly. We know that pre-existing content, while quick and convenient for our purposes, has its limits when it comes to replicating the real-world experience as closely as possible. So where does that leave us? You’ll have to stay tuned to find out in our next episode, coming soon!
