
WebRTC Digest – Week of May 6

Here at vLine HQ, we have an internal mailing list that we use to share links to articles, mailing list posts, code checkins, and other interesting tidbits related to WebRTC.

Now that we finally have a blog, we thought it might be nice to share these links with you, dear reader (Hi Mom!). So, watch this space each Monday for a new, carefully curated batch of week-old WebRTC news (how’s that for real-time?). Without further ado, here’s our first edition:

Exploding Endpoints and Spiders

Kelly Teal, writing for Channel Partners, asks What in the World is WebRTC? and polls a few analysts for answers:

When it comes to video collaboration, everyone is talking about WebRTC. But what is WebRTC and what does it mean for partners?

“What WebRTC does do is create the potential for an explosion of browser-based video endpoints, which will only connect to other endpoints running the same standard,” Wainhouse Research analyst Bill Haskins told Channel Partners.

We certainly agree about the explosion of browser-based endpoints. Genband, however, disagrees that they will only talk to endpoints running the same standard: the company just announced SPiDR, a new legacy-to-WebRTC gateway. Gary Audin, writing for NoJitter, has the story:

SPiDR sits at the edge of the operator’s network. It provides open, web-centric APIs that allow application developers to produce rich communications services through the network including voice, video, presence, shared address book, call history, instant messaging, and collaboration. 

Google IO 411

Google’s annual developer shindig is coming to town later this week, and WebRTC Tech Lead Justin Uberti will be returning to the stage with another sure-to-be-packed-to-the-rafters session on our favorite real-time stack (in case you missed his session last year, you can check out the video here).

This year he’ll be co-presenting with Chrome Developer Advocate and HTML5 Rocks contributor Sam Dutton. The two will also be running a hands-on Code Lab to help lucky ticket-holders translate the alphabet soup of WebRTC APIs and protocols into tasty web apps.

In this codelab, we’ll help you get to grips with the core APIs and technologies of WebRTC:
– MediaStream (aka getUserMedia): what is it and how can I use it?
– RTCPeerConnection: what is important about WebRTC’s most powerful API?
– RTCDataChannel: how can I set up real-time communication of arbitrary data?
– signalling: what is it and how do I set it up?
– servers: what do I need for signalling, STUN, and TURN?
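The signalling piece in particular trips people up, since WebRTC deliberately leaves it out of the spec. As a rough sketch of how those pieces fit together (our own illustration, not from the codelab — the `signaling` object and message shapes are made up), the browser side of an offer/answer exchange looks something like this:

```javascript
// Sketch of the browser side of a WebRTC call. The `signaling` object
// (with send() and onmessage) is a stand-in for whatever channel you
// choose -- WebSocket, polling, carrier pigeon; WebRTC doesn't care.

// Pure helper: classify an incoming signaling message so we know
// which RTCPeerConnection method to feed it to.
function classifySignal(msg) {
  if (msg.sdp && msg.type === 'offer') return 'offer';
  if (msg.sdp && msg.type === 'answer') return 'answer';
  if (msg.candidate) return 'candidate';
  return 'unknown';
}

async function startCall(signaling) {
  const pc = new RTCPeerConnection({ iceServers: [] });

  // MediaStream (aka getUserMedia): capture camera and mic.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Trickle ICE: forward each candidate to the remote peer as it appears.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send({ candidate: e.candidate });
  };

  // Create and send the offer; the answer comes back via signaling.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: offer.type, sdp: offer.sdp });

  signaling.onmessage = async (msg) => {
    const kind = classifySignal(msg);
    if (kind === 'answer') {
      await pc.setRemoteDescription(msg);
    } else if (kind === 'candidate') {
      await pc.addIceCandidate(msg.candidate);
    }
  };
  return pc;
}
```

The lab walks through each of these steps in far more detail, so treat this as a map, not the territory.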

Bird Food

Speaking of tasty treats, Justin warmed up for IO by dishing out a trio of long-awaited morsels destined for Chrome Canary on the discuss-webrtc mailing list. On Wednesday, he hinted that reliable data channels are landing soon (if you want to be the first to know when they alight, you can star Issue 1493).
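For context on what “reliable” buys you: RTCDataChannel lets you trade delivery guarantees for latency via its init options. A small sketch of the two extremes (our own illustration, with option names from the W3C spec, not from Chrome internals):

```javascript
// Sketch: choosing RTCDataChannel reliability options (option names
// per the W3C spec; this is an illustration, not Chrome's code).
// A reliable channel behaves like TCP: ordered, with retransmission.
// An unreliable one behaves more like UDP: lossy but low-latency.
function dataChannelOptions(reliable) {
  if (reliable) {
    return { ordered: true }; // no loss, no reordering
  }
  // Unordered, zero retransmits: late or lost packets are just dropped.
  return { ordered: false, maxRetransmits: 0 };
}

// Usage (browser only):
// const pc = new RTCPeerConnection();
// const chat = pc.createDataChannel('chat', dataChannelOptions(true));
// const positions = pc.createDataChannel('pos', dataChannelOptions(false));
```

Reliable mode is the one you'd want for file transfer or chat; the unreliable flavor suits game state and other data where only the latest update matters.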

Then on Friday, he made our week by announcing that you’ll soon be able to use the getStats() API to discover which ICE candidates the browser selected. No more tcpdump or verbose logging to figure out if you’re using a relay server!!! Are you as excited as we are?
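In today’s promise-based getStats() API (which postdates the callback flavor discussed on the list), digging the selected pair out of the stats report looks roughly like this — the traversal below is our own sketch against the current W3C webrtc-stats shapes:

```javascript
// Sketch: find which ICE candidate pair the browser actually selected,
// using stats-report shapes from the current W3C webrtc-stats spec.
// `report` is the Map-like object that pc.getStats() resolves with.
function findSelectedPair(report) {
  for (const stat of report.values()) {
    // The winning pair is nominated and has succeeded its checks.
    if (stat.type === 'candidate-pair' && stat.nominated &&
        stat.state === 'succeeded') {
      const local = report.get(stat.localCandidateId);
      const remote = report.get(stat.remoteCandidateId);
      // candidateType is 'host', 'srflx', 'prflx', or 'relay';
      // 'relay' means your media is flowing through a TURN server.
      return {
        localType: local && local.candidateType,
        remoteType: remote && remote.candidateType,
      };
    }
  }
  return null; // no pair selected (yet)
}

// Usage (browser only):
// const stats = await pc.getStats();
// console.log(findSelectedPair(stats));
```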

And last, but not least, he confirmed that Canary has gained the ability to load-balance across multiple TURN servers. Full thread here.
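Handing the browser multiple TURN servers is just a matter of listing them all in the RTCPeerConnection configuration. A minimal sketch — the server URIs and credentials below are placeholders, not real servers:

```javascript
// Sketch: building an ICE configuration with several TURN servers,
// which the browser can then pick among (and, per the announcement,
// balance across). All URIs/credentials here are placeholders.
function makeIceConfig(turnUris, username, credential) {
  return {
    iceServers: [
      { urls: 'stun:stun.example.org' }, // placeholder STUN server
      ...turnUris.map((uri) => ({ urls: uri, username, credential })),
    ],
  };
}

// Usage (browser only):
// const pc = new RTCPeerConnection(makeIceConfig(
//   ['turn:turn1.example.org', 'turn:turn2.example.org'],
//   'user', 'secret'));
```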

Richard Mentor Johnson?

Finally, we have a couple of codec-related tidbits. WebM Product Manager and former interim On2 CEO Matt Frost posted that VP9 is nearing completion. We don’t know which we’re more excited about: the support for depth channels (3D webcams + depth maps + WebGL = ???) or the VP9 vs H.265 codec wars.

Since it doesn’t seem especially likely that the browser vendors are going to agree on a common built-in codec this decade, Mozilla’s push for an all-JavaScript codec is starting to sound more and more reasonable. Following up on Brendan Eich’s Today I Saw the Future post about ORBX.js, Peter Bright wrote a nice piece for Ars Technica with more technical detail:

For browsers including Internet Explorer 10 and Safari on iOS, ORBX is used in I-frame-only mode. For other browsers, including Firefox and Chrome, it uses a more conventional mixed mode. That’s because the mixed mode depends on WebGL for part of its decoding. I-frames can be encoded entirely in JavaScript, but P-frames require the use of shader programs due to their greater complexity. Internet Explorer 10 and Safari on iOS don’t support WebGL, and so can’t be used to run shader programs. As a result, they use about twice as much bandwidth for the same level of video quality.
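In other words, the decoder picks its mode by feature-detecting WebGL. ORBX.js itself isn’t public, so the function names below are ours, but the decision logic Bright describes boils down to something like this:

```javascript
// Sketch of the mode selection Ars describes: P-frame decoding needs
// WebGL shaders, so browsers without WebGL fall back to I-frame-only
// mode (at roughly twice the bandwidth). ORBX.js is not public; this
// is our illustration of the described logic, not its actual code.
function chooseDecodeMode(hasWebGL) {
  return hasWebGL ? 'mixed' : 'i-frame-only';
}

// Browser-only feature detection (would return false in IE10 and
// Safari on iOS circa 2013, which lack WebGL):
function hasWebGLSupport() {
  try {
    const canvas = document.createElement('canvas');
    return Boolean(canvas.getContext('webgl') ||
                   canvas.getContext('experimental-webgl'));
  } catch (e) {
    return false;
  }
}
```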

Sadly, there’s no public demo yet. But we can’t wait to try it on our iPhones.

Free vLine/WebRTC Consulting and Training

One of our biggest priorities is to make it as easy as possible to get started with WebRTC and the vLine platform. To that end, we’re giving away five free full-day consulting and training sessions to developers working on projects utilizing WebRTC and vLine that will go live by June 30.

For developers in the continental US, we will come to your office, sit down beside you and do whatever it takes to make your project successful. For those in other parts of the world, we’ll do the same thing via video chat and screen sharing.

If you’re interested, please send an email to [email protected] telling us what you’re building and how you would make use of the consulting time.

On May 10, we’ll review the requests and select the five developers. Priority will be given to projects where development resources have already been allocated and that will be launching soon.

We look forward to hearing from you!

vLine Just Got Easier

We’re happy to announce a major milestone for the vLine platform: You can now create a private-label video chat service in about a minute, without writing any code.

And to make it even easier to take the next step and integrate it with your website, we’re giving away five free full-day, on-site consulting and training sessions to qualifying projects (full details here).

To create your video chat service, go to vline.com and click the big ‘Get Started’ button. Or to see how it works, check out this video: