It was only a matter of time before the raw computing power of our laptops, phones and eventually tablets broke free of their multi-purpose computing paradigm and bestowed upon the other electronics in our personal universe the kind of automated intellect that was previously reserved for sending people to the moon.
What began with the creeping pace of Smart TV development is now rolling full-steam ahead with refrigerators, thermostats, and door locks — all enjoying the ability to provide a digital experience to the end user. We are at the very beginning of witnessing how this new digital universe will connect, and designing media consumption experiences for this new world requires questioning some fundamentals.
Our TVs are transitioning from pure media consumption devices to windows into the interactive digital world. More often than not though, we are expected to interact with these windows (man, Microsoft really did nail it with their OS name) through an interface created by a gentleman named Eugene Polley in 1955. The remote we all love, as well as the concept of “point this thing at the piece of furniture to make it go,” has been around for nearly 60 years.
The current crop of the smartest TVs still expect you to interact this way.
The screens of our TVs, much like graphical OS “desktops,” have become more detailed and dense. But why are we trying to interact with this large piece of furniture the same way as we do with an appliance designed for manipulating and navigating complex interfaces (I’m talking about PCs here)?
Is scrolling through endless menus in an effort to land at media you care about the ‘right way’? What about typing things into a giant text box across your living room from your couch?
We’ve seen keyboards, remotes with keyboards, remotes with touchpads … all in the name of replicating the desktop experience on the TV.
I believe that the TV is the main battleground in achieving the concept of the continuous client, which was detailed by Joshua Topolsky in an article a few years back. The basic idea is that our digital touch-points should be context aware.
Right now you log into an experience on your TV the same way you do on your computer or smartphone. The application on the device requests access to information on a server, and based on your credentials, it serves up information and/or media.
For instance, if I log into Netflix, HBOGo or Hulu on my laptop, tablet and TV at the same time, they will try to show me the exact same information — as opposed to utilizing all available screen real estate to display the most relevant information for that screen size, given that other input methods closer to the user are in the same room.
Media applications do not have awareness. Awareness is tough. Netflix doesn’t know that you are on your couch with a laptop, the TV is on, and the iPad is lying on the floor next to the remote — and there is a bag of chips near you.
The closest continuous experiences we have today are systems that are based on intent — the most basic being Apple’s AirPlay and the most advanced being Google’s Chromecast. Casting (rendering an app’s screen, whether or not it’s visible to you, and sending it to your TV as video) provides a novel first step.
Casting solves the awareness issue by using a single “brain” to power what you see on your devices. When I’m casting a video from YouTube’s iPad app from my couch to my TV, I can also browse for recommended videos and share to my networks as desired. The YouTube app on my iPad knows that I’m casting to the TV and browsing at the same time.
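The “single brain” model above can be sketched in a few lines: the receiver (the TV side) owns all playback state, while senders (phone, tablet) only submit intents and mirror the state back into their own UI. This is a hypothetical illustration — the class and method names are mine, not any real Cast API:

```python
# Sketch of the "single brain" casting model: the receiver owns
# playback state; senders submit intents and subscribe to updates.

class Receiver:
    """The single brain running on/behind the TV."""
    def __init__(self):
        self.state = {"video": None, "playing": False}
        self.subscribers = []

    def handle_intent(self, intent):
        # All state changes happen here, regardless of which sender asked.
        if intent["type"] == "play":
            self.state.update(video=intent["video"], playing=True)
        elif intent["type"] == "pause":
            self.state["playing"] = False
        self._broadcast()

    def subscribe(self, sender):
        self.subscribers.append(sender)
        sender.on_state(self.state)

    def _broadcast(self):
        for sender in self.subscribers:
            sender.on_state(self.state)

class Sender:
    """A phone/tablet app: sends intents, mirrors receiver state."""
    def __init__(self, receiver):
        self.receiver = receiver
        self.last_state = None
        receiver.subscribe(self)

    def cast(self, video):
        self.receiver.handle_intent({"type": "play", "video": video})

    def on_state(self, state):
        # The sender only mirrors state -- it never renders the video,
        # so its screen stays free for browsing and sharing.
        self.last_state = dict(state)

tv = Receiver()
ipad = Sender(tv)
ipad.cast("cat-video-42")
```

Because the brain is singular, a second sender subscribing to the same receiver would see the same state instantly — which is exactly why the iPad “knows” what the TV is doing.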
So far, Chromecast has been great for YouTube/Netflix videos and basic web presenting; but, as this project grows, we will see more “single brain” applications creating immersive experiences that utilize screen real-estate effectively.
This requires not only developments in the display SDK for Google and other market entrants, but also in how the system understands the intent to view. Clicking the cast button is easy, but the phone also knows it’s near your TV. Higher-end devices know what orientation they are in … “flicking” a piece of media onto another screen based on that screen’s location and orientation is just within reach!
The floundering of consumer device manufacturing firms, and the slow-moving platform approach of software giants, leaves a myriad of opportunities for media brands to carve out their niche.
Unfortunately, the very same floundering creates an environment that’s difficult to activate against. The practical way to trigger media playback from an iPhone on a Samsung TV is to open an app on the phone and the TV, instruct the phone to play a piece of media on the TV, and wait for the command to reach the server infrastructure and make its way to the TV app, which then requests the stream. The TV app doesn’t actually provide me visible value … it just beckons me to use the remote instead of my phone.
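The round trip described above can be sketched as three hops: phone to server, server to TV app, TV app to stream. Everything here is hypothetical (queue, function names, and the CDN URL are mine), just to make the extra indirection visible:

```python
# Sketch of the command round trip: the phone never talks to the TV
# directly -- both talk to vendor server infrastructure, which relays
# the play command to the TV app.

queue = []                       # stands in for the server-side command queue
tv = {"now_streaming": None}     # stands in for the TV app's state

def phone_app(media_id):
    # 1. The phone app asks the server to play media on the paired TV.
    queue.append({"cmd": "play", "media": media_id})

def server_relay():
    # 2. Server infrastructure forwards queued commands to the TV app.
    while queue:
        tv_app(queue.pop(0))

def tv_app(command):
    # 3. The TV app itself requests the stream -- an extra hop that
    #    adds latency but shows the user nothing new.
    if command["cmd"] == "play":
        tv["now_streaming"] = "https://cdn.example.com/" + command["media"]

phone_app("episode-101")
server_relay()
```

Note that the TV app in this flow is pure plumbing: if the phone could hand the stream URL to the TV directly, steps 2 and 3 would collapse into one.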
Not trivial, but doable. There are some great vendors in this space that can help you build a next-generation continuous experience. If you want examples of this concept working well, take a look at Microsoft’s Xbox SmartGlass, or if you are more technically inclined, take a peek at Accedo’s Connect product to create a foundation for your own solution.
This is an incredibly exciting time in the media distribution world. Instead of replicating functionality across new devices, we are beginning to merge our digital touch-points, creating opportunities to deliver smart and beautiful experiences.
Folks in media companies building multi-device experiences: give me a shout; I’d love to help you define and build a next-generation experience.
Folks in consumer electronics firms in charge of software ecosystems: I would love to chat about creating an environment where your media partners can create truly exceptional experiences without having to resort to complicated server-side workarounds that depend on lots of intra-infrastructure communication to do what should be an elegant system.
Let’s make this TV thing better, eh?
Say hello and if you enjoyed this article please share it!
This article originally appeared on Medium.