One of my goals over the past few months has been to start live streaming/screencasting my development sessions. I’ve been watching Izac Filmalter’s Twitch stream for the past few days and I think it’s fantastic! Streaming can benefit both the streamer (me) and the community at large (you). Developing to an audience would force me to polish not only the products I create, but also the process I go through to create them. Explaining what I’m doing to you would reinforce my understanding of the topic, and hopefully provide you with some value as well. Plus, in a pinch I can use you as a rubber ducky.
There are a few existing options when it comes to streaming services and platforms, but all of them have their drawbacks.
Hangouts On Air - Google's Hangouts On Air are essentially Hangout sessions that you can broadcast to any number of viewers for free. Unfortunately, On Air stream quality maxes out at 720p.
YouTube Live - YouTube Live is the best alternative to Twitch. It supports streaming at up to 1080p, but like Twitch, it requires that you run external broadcasting software and stream your content directly to its RTMP server.
So, what’s a developer to do when he wants to screencast differently?
BYOPWWRTC (Build Your Own Platform With WebRTC)
WebRTC is a newly emerging browser technology that (among a huge number of other things) allows for the capture and streaming of video and audio directly within the browser. Since WebRTC works directly within the browser, no external streaming software is required. WebRTC can be set up for peer-to-peer streaming directly to a small number of viewers, or the stream can be routed through a centralized server (an MCU, or Multipoint Control Unit) to be broadcast to a wider audience. Building a screencasting platform on WebRTC seems like an obvious choice. We get the ease of use of browser-based streaming, up to 1080p quality, and the possibilities of both peer-to-peer and centralized architectures.
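To make that concrete, here is a rough sketch of what the broadcaster side could look like in the browser. This assumes the modern `getDisplayMedia` screen-capture API, and `sendToViewer` is a hypothetical stand-in for whatever signaling transport (e.g. a WebSocket) the real platform would use to exchange the offer/answer and ICE candidates:

```javascript
// Browser-only sketch: capture the screen and stream it to one peer.
// Signaling is abstracted away — sendToViewer is a placeholder, not a
// real API; a production setup would relay these messages over e.g.
// a WebSocket connection.
async function startScreencast(sendToViewer) {
  // Ask the user to pick a screen or window to share.
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: { frameRate: 30 },
    audio: true,
  });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Add every captured track (video, plus audio if granted) to the
  // peer connection.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Forward ICE candidates to the viewer over the signaling channel.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) sendToViewer({ type: 'candidate', candidate });
  };

  // Create and send the SDP offer; the viewer replies with an answer,
  // which would be applied via pc.setRemoteDescription(...).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToViewer({ type: 'offer', sdp: pc.localDescription });

  return pc;
}
```

Streaming to multiple viewers in peer-to-peer mode would mean repeating this handshake once per viewer, with the broadcaster uploading a separate copy of the stream to each one.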
Imagining the finalized product, peer-to-peer streaming would always be free as it puts no load on the system (other than page views). Depending on your machine/network capabilities, you will have a limited number of viewers to whom you can effectively peer-to-peer stream. After that limit is reached, the system would suggest you start using a centralized streaming model. This may need to be a paid service to support the MCU infrastructure.
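The viewer-limit check could be as simple as comparing the streamer's upload bandwidth against the per-viewer cost of the stream. This is a hypothetical sketch, not part of any real system; the constant and function names are made up, and the per-viewer bitrate is a rough placeholder rather than a measured value:

```javascript
// Hypothetical free-tier check: in pure peer-to-peer mode the streamer
// uploads one full copy of the stream per viewer, so the number of
// viewers is capped by the streamer's uplink. PER_VIEWER_UPLOAD_KBPS is
// an assumed ballpark figure for one 1080p stream copy.
const PER_VIEWER_UPLOAD_KBPS = 2500;

function chooseStreamingMode(viewerCount, uploadKbps) {
  const requiredKbps = viewerCount * PER_VIEWER_UPLOAD_KBPS;
  // If the uplink can't carry one copy per viewer, fall back to the
  // centralized (MCU) model.
  return requiredKbps <= uploadKbps ? 'p2p' : 'mcu';
}

// With a 10 Mbps uplink, four 1080p viewers fit; five do not.
chooseStreamingMode(4, 10000); // → 'p2p'
chooseStreamingMode(5, 10000); // → 'mcu'
```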
Since this would be a platform targeted towards developers, I can imagine a variety of cool features like Github/Bitbucket integration, similar screencaster discovery (based on stack, framework, language, project, etc…), in-chat syntax highlighting, and many others.
Some resources I've found helpful so far:
- Google I/O 2013 WebRTC Presentation
- Slides for above presentation
- WebRTC Experiments
- Jitsi Videobridge
At the moment this is nothing more than an idea. I am by no means a WebRTC expert. I've only recently started researching this topic, but WebRTC seems like an ideal solution for this problem. In the very near future, I will hopefully start building out a proof of concept of the peer-to-peer streaming system. Once that's up and running, I would be able to stream the continued development of the project. Stay tuned!