DeepStream

Published in MIT MEDIA LAB · 11 min read · May 12, 2016

Participants in major breaking news events are increasingly broadcasting live video on the web to provide a source of on-the-ground information to the world. Recently, a livestream from protests in Iceland sparked by revelations in the Panama Papers was viewed over 200,000 times, equal to about two-thirds of the country’s population. Links to the livestream were shared on social media and through sites like Reddit, garnering thousands of comments from viewers about an issue that had only surfaced 24 hours earlier. Two days later, Iceland’s prime minister had resigned.

But livestreaming in the abstract is a surprisingly blurry and nebulous concept. On one level it is simply the act of transmitting live video over the internet. But on another level, the various topics of the videos, tools used for broadcasting, human motivations behind the broadcasts, and reasons viewers watch are richly diverse and hard to generalize. The various ways that different kinds of livestreams relate to more traditional forms of media also make it difficult to categorize them. A fixed-position traffic cam is only the same as an international eSports competition on the most superficial levels, yet we refer to both of these things with the term “livestream.”

Photo Credit: jayRaz via Compfight cc

As a technological feat, multicasting live video over the public internet dates back to the Mbone (or multicast backbone), an overlay network launched in 1992 that first enabled many-to-many video broadcasts over the internet. It was tested at scale in 1994, when a Rolling Stones concert became the first major Internet multicast. RealNetworks launched RealVideo in 1997, which featured proprietary video compression codecs and a graphical user interface. In conjunction with increasingly available bandwidth, this made viewing livestreams possible for many more people and lowered the financial and technological barriers to broadcasting.

Fast-forward 20 years and we’re now experiencing an explosion of livestreaming platforms, enabled by the ubiquity of smartphones with integrated video cameras and increases in mobile data bandwidth. Meerkat and Periscope are two of the more familiar names, but there are plenty of choices, from more established platforms like Ustream, to Internet giants like YouTube and Facebook’s new app, to smaller niche-market offerings like Bambuser. But technological capability alone does not explain why the medium is increasingly popular.

Part of my research at the Center for Civic Media has been to try to understand why people watch livestreams. What do they get out of this medium? I’ve interviewed viewers and people who broadcast, focusing mostly on live video from events that we broadly think of as “newsworthy.” One theme that emerged from these interviews was that viewers often feel an emotional connection with the event or broadcaster they are watching. It can be a very powerful experience to view the world through someone else’s eyes in such a live and unscripted way. Viewers also often feel that livestreams show us events on the ground that are otherwise difficult to see.

Another finding was that livestreams from newsworthy events tend to be perceived as authentic, in contrast to highly produced television news. Part of the “crisis in journalism” today is related to a loss of audience trust because of the perceived inauthenticity of the journalistic stance of objectivity. Livestreams from newsworthy events are inherently from a point of view, and there is no pretense to, or expectation of, objectivity. This sense of authenticity might also be related to the generally lower quality of the video due to the mobile camera or the data connection.

Screenshot from the Decorah Eagle Cam via Magic School House.

These aspects of the viewing experience can translate to benefits for broadcasters. It’s a medium that may work particularly well for activism, breaking news, and other events where on-the-ground first-hand experience and information are valued. This might include concerts and festivals, travel, conferences, and of course the many wildlife cams out there.

For all of these benefits, livestreaming does have some drawbacks. My research suggests that people often feel there is a lack of context with livestreams. It can be hard for people to figure out what is being shown and why it’s important. Even if viewers know what they are watching and care about it, they often open new browser tabs to search for additional information about the subject, thus leaving the video experience.

Another challenge is the difficulty in finding livestreams on topics of interest. If I think someone might be broadcasting from a Black Lives Matter protest, I may have to search on at least a half-dozen platforms to try to find the video. I would argue that this extreme platform fragmentation isn’t helping the medium because it causes people to approach livestreams based either on what they see in their social media feeds or serendipitously, through random encounters when they have the time and inclination to browse a platform. That might work if livestreaming is really just a social-media tool, but as an information-gathering tool this is a real difficulty.

Our answer to a few of these problems is a platform called DeepStream (public beta: www.deepstream.tv). It is a provocation of sorts that explores what might happen if it were very easy to remix one or more livestreams by adding context and background using embedded media. The primary goal is to address the problem of lack of context by introducing a new role into the livestreaming ecosystem: that of the curator. This role has of course been around for several centuries, as various people and institutions have taken on the job of deciding what media is most important and explaining why. Today journalists and news organizations most often fill that role. But curating livestreams is hard work. If you don’t code yourself, you’ll need a web developer to embed the video next to an explanation of why it’s important. You’ll spend vast amounts of time using the sometimes basic search functionality on six to ten different livestreaming platforms. And you’ll end up sending people to a lot of streams that are no longer live, creating frustration on the part of viewers.

We’ve built a platform that makes this process easier. When you find a livestream that you think is important, DeepStream lets you very easily embed that live video into a player that includes context cards. These cards can be set up easily to feature images, videos, maps, tweets, or even free text to help you explain what the livestream is about and why it is important. You can group multiple livestreams together to provide different perspectives on an event, and we check these streams every 90 seconds to make sure they are still live. You can also narrate your DeepStream using your webcam, creating a picture-in-picture experience.
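
To make that description a bit more concrete, here is a minimal TypeScript sketch of how a curated DeepStream, its context cards, and the periodic liveness check might be modeled. All of the type names, fields, and the fetchIsLive helper are illustrative assumptions, not DeepStream’s actual schema or API.

```typescript
// Illustrative data model for a curated DeepStream; names and fields are
// assumptions for the sake of the sketch, not the platform's real schema.

type CardKind = "image" | "video" | "map" | "tweet" | "text";

interface ContextCard {
  kind: CardKind;
  title: string;
  content: string; // free text for "text" cards, otherwise a URL or embed reference
}

interface StreamSource {
  platform: string; // e.g. "youtube", "ustream", "bambuser"
  streamId: string; // platform-specific identifier
  embedUrl: string; // publicly available embed URL
  live: boolean;    // updated by the liveness checker below
}

interface DeepStream {
  title: string;
  streams: StreamSource[]; // multiple perspectives on the same event
  cards: ContextCard[];    // curated background material
}

// Re-check each stream on a fixed interval (the article mentions every 90
// seconds). `fetchIsLive` stands in for a hypothetical platform API wrapper.
function pollLiveness(
  ds: DeepStream,
  fetchIsLive: (s: StreamSource) => Promise<boolean>,
  intervalMs = 90_000
): void {
  setInterval(async () => {
    for (const s of ds.streams) {
      try {
        s.live = await fetchIsLive(s);
      } catch {
        s.live = false; // treat API errors as "not live" until the next check
      }
    }
  }, intervalMs);
}
```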

We are particularly interested in whether collaboration and community building can happen on this platform. We included a feature that lets a curator invite other people so they can work on curation together. And we let viewers suggest context cards to the curators, which can be approved or denied, so that any viewer can participate in the activity of building deep contextual background on a subject, potentially creating a richer narrative to explain what is happening.
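
As a rough illustration of that suggestion workflow, the sketch below (reusing the ContextCard and DeepStream types from the previous sketch) shows how a viewer suggestion might be stored and then approved or denied by a curator. The names and statuses are assumptions for illustration, not the platform’s actual implementation.

```typescript
// Hypothetical shape of a viewer-suggested context card awaiting curator review.
type SuggestionStatus = "pending" | "approved" | "denied";

interface CardSuggestion {
  card: ContextCard;   // the suggested card itself
  suggestedBy: string; // viewer identifier
  status: SuggestionStatus;
}

// A curator reviews a suggestion; approved cards join the curated set.
function reviewSuggestion(
  ds: DeepStream,
  suggestion: CardSuggestion,
  approve: boolean
): void {
  suggestion.status = approve ? "approved" : "denied";
  if (approve) {
    ds.cards.push(suggestion.card);
  }
}
```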

As we started addressing the problem of context in live video we realized we also needed to address the secondary problem of discoverability. It’s hard to ask people to curate live videos when they are so hard to find. So we started building a search engine that searches across many different platforms at once, significantly reducing the time and effort required to locate and preview livestreams of interest.
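
One way such a cross-platform search might work, sketched here under the assumption of a simple per-platform client interface (none of these names come from DeepStream’s actual codebase), is to query every platform in parallel and merge the results, keeping only streams that are still live:

```typescript
// Illustrative cross-platform livestream search: fan the query out to every
// platform client at once and merge whatever comes back.

interface SearchResult {
  platform: string;
  title: string;
  embedUrl: string;
  isLive: boolean;
}

interface PlatformClient {
  name: string;
  search(query: string): Promise<SearchResult[]>;
}

async function searchAllPlatforms(
  clients: PlatformClient[],
  query: string
): Promise<SearchResult[]> {
  // Query each platform concurrently; a failure on one platform should not
  // sink the whole search.
  const settled = await Promise.allSettled(clients.map((c) => c.search(query)));
  return settled
    .filter(
      (r): r is PromiseFulfilledResult<SearchResult[]> => r.status === "fulfilled"
    )
    .flatMap((r) => r.value)
    .filter((r) => r.isLive); // only surface streams that are still live
}
```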

One significant way in which DeepStream differs from other livestreaming services is that we are platform-agnostic. You can use any one of a number of streaming platforms to broadcast and still turn your livestream into a DeepStream. In fact we do not currently offer a broadcasting app. We rely on API access and publicly available embed codes from broadcasting platforms like Ustream and YouTube to let users search those platforms and embed livestreams they find into our viewing platform.
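
A minimal sketch of what that platform-agnostic embedding can look like in the browser, assuming publicly documented embed URL patterns (the patterns below are approximations for illustration; the exact formats belong to each platform’s documentation):

```typescript
// Build an iframe from a platform's public embed URL. The URL patterns are
// approximations; check each platform's docs for the exact format.

type Platform = "youtube" | "ustream";

function embedUrlFor(platform: Platform, id: string): string {
  switch (platform) {
    case "youtube":
      return `https://www.youtube.com/embed/${id}`;
    case "ustream":
      return `https://www.ustream.tv/embed/${id}`;
  }
}

function renderEmbed(container: HTMLElement, platform: Platform, id: string): void {
  const iframe = document.createElement("iframe");
  iframe.src = embedUrlFor(platform, id);
  iframe.width = "640";
  iframe.height = "360";
  iframe.allowFullscreen = true;
  container.appendChild(iframe);
}
```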

In this sense DeepStream is more like a viewing and curating platform for livestreams than a livestreaming platform. When you compare the DeepStream viewing experience to existing options, the differences are quite obvious. We are focused on an engaging, informative viewing experience that tells a deep story about the topic. We are unable to include chat from all livestream platforms, so at the moment it is not a platform that facilitates communication between viewer and broadcaster. It does feature Twitter timelines, however, to allow public discussion.

DeepStream probably isn’t appropriate for all types of livestreams. It wouldn’t make a lot of sense for a celebrity to broadcast to fans using the platform since it removes the ability to chat, which seems to be important for that genre. But we think it works quite well for that subset of livestreams which are about telling an important story, or showing important events, or educating people about a topic. It would also work very well for concerts or even eSports, where the focus would be on enhancing the viewing experience for fans through fun, interesting information they can browse while watching the event.

There is an obvious criticism here, which is why I call this a provocation. We assume that if an embed code is publicly available for a livestream, then the broadcaster is ok with other people embedding the video and adding their own context to it. We’re not the first to add context to livestreams like this, but we’re probably the first to make it so easy. To us, if viewers create many different narratives around live videos from a big event, then that’s a good thing for deepening the discussion, but livestreamers may not feel that way if they disagree with the way their video is being framed. We’re happy to talk to broadcasters who aren’t comfortable with that, and we are open to creating a “do not embed” list if streamers can’t disable embed codes on their native streaming platform.

We have tried to combine our research on watching and broadcasting livestreams with the design flexibility afforded by HTML5 video and JavaScript APIs to create a new remixing and viewing platform for live video. Our research question has been: Can we create a platform to easily add relevant contextual information to livestreaming video? Our motivation for this research has been a hunch that doing so could make livestreaming a more effective medium for experiencing and learning about significant events, and potentially for changing how people think about those events.

In addition to this unique core experience, we think we’ve built the first vertical search engine for livestreams on the web. Because livestreams are so ephemeral and require API access to monitor, they are a real challenge for big search engines like Google to find and index quickly enough. And while a few livestreams are shared on social media sites like Twitter, this is only a tiny fraction of the live video content available at any given moment. Even worse, the search functionality on some of the existing streaming platforms is very basic, so you will miss relevant content searching directly on some sites. In this sense livestreaming represents a very interesting and niche search challenge, and it’s one that we think is very important for anyone interested in exploring events that are happening right now.

We think there’s a largely undiscovered world of livestreaming content and experiences out there, and that most people are not aware just how vast it is. This content can have a big impact on how we perceive and think about major events. It is our hope that gradually people will start using DeepStream to find this content more easily and augment the livestreams they are most interested in with deep contextual backstories that enhance the narrative richness of the video, like an explainer news article for live video. We’ve tried to make this process very easy and intuitive. We hope that through viewing and creating DeepStreams people will feel more connected to global events, more informed about the world around them, and more amazed by the human desire to share our experiences with each other.

While it’s exciting to think about livestreaming and build new tools to explore the potential future of the medium, it’s also important to stay connected to the long history of media practice and use. Adding context to video goes back to the days of intertitles in silent films and movie theater narrators performing live “explainers” to the audience. Curating media, and the relationship of journalism to this activity, also has a long and complicated history. The history of new media suggests that the challenge with emergent forms is that we cannot predict the social practices that will grow up around a new technology, or the influence it will have on us. In fact these can be the exact opposite of what we anticipate. Susan Douglas calls this the “irony of technology.” Livestreaming is so new, and the norms of use around it are changing so rapidly, that we really don’t know how it will impact television, or activism, or our social networks, or anything else for that matter.

DeepStream tries to remedy a few shortfalls that we perceive in the medium right now, but unpredictable norms of practice will have a much larger influence on the uses of livestreaming than any intention we have been able to build into this platform. That’s a very humbling position to be in as someone who likes to research, design, and build new technology. We hope that DeepStream as a provocation will be a helpful point of discussion for people trying to understand the strengths, weaknesses, best uses, and best practices of livestreaming.

Gordon Mangum is a student in the Media Lab’s Center for Civic Media. He was previously Country Director of Internews Sudan, which built a network of six community radio stations in South Sudan and border areas of Sudan. His interests include developing and improving information systems, participatory civics, and music. Gordon holds a dual BA from the University of Virginia in philosophy and religious studies.
