WebRTC M58, currently available in Chrome’s beta channel and as native libraries for Android and iOS, contains over 20 new features and over 60 bug fixes, enhancements, and stability/performance improvements. As with previous releases, we encourage all developers to run versions of Chrome on the Canary, Dev, and Beta channels frequently and to quickly report any issues found. Please take a look at this page for some pointers on how to file a good bug report. The help we have received has been invaluable!

The Chrome release schedule can be found here.

Important PSAs

Fixed M57 regression in bandwidth limitation

A regression introduced in M57 that affects how the “b=AS” SDP attribute is handled has been fixed. This SDP attribute can be used to limit the maximum total bandwidth used by a media stream, and it should be possible to set different values for different m sections. For details and a workaround for M57, see this PSA.
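Applications that set per-m-section bandwidth limits typically do so by munging the SDP before applying it. A minimal sketch of such munging (the helper name is ours, not part of the WebRTC API; per RFC 4566, the b= line is placed after the media-level c= line):

```javascript
// Insert a "b=AS:<kbps>" line into selected m= sections of an SDP blob.
// bandwidthByIndex maps m-section index -> maximum bandwidth in kbps.
// Illustrative helper, not part of the WebRTC API.
function setBandwidthPerSection(sdp, bandwidthByIndex) {
  const lines = sdp.split('\r\n');
  const out = [];
  let mIndex = -1;
  for (const line of lines) {
    // Drop any existing b=AS line in sections we are about to rewrite.
    if (line.startsWith('b=AS:') && (mIndex in bandwidthByIndex)) continue;
    if (line.startsWith('m=')) mIndex++;
    out.push(line);
    // RFC 4566 places b= after c=; add ours after the media-level c= line.
    if (mIndex >= 0 && line.startsWith('c=') && (mIndex in bandwidthByIndex)) {
      out.push('b=AS:' + bandwidthByIndex[mIndex]);
    }
  }
  return out.join('\r\n');
}

// Browser usage (before applying the description):
// const offer = await pc.createOffer();
// offer.sdp = setBandwidthPerSection(offer.sdp, { 0: 50, 1: 1000 });
// await pc.setLocalDescription(offer);
```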


Spec-compliant RTCPeerConnection.getStats

The promise-based getStats has been released; unlike the callback-based getStats, it returns stats that follow the spec (Statistics Model, Identifiers for WebRTC’s Statistics API). Most, but not all, stats are supported. The callback-based getStats is still available, but we aim to deprecate it in a future release.
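The promise-based form resolves with an RTCStatsReport, a Map-like object of id → stats dictionary, where each dictionary carries a spec-defined "type". A small sketch of consuming it (the grouping helper is ours, for illustration):

```javascript
// Group an RTCStatsReport (a Map-like object of id -> stats dictionary)
// by the spec-defined stats "type", e.g. "inbound-rtp" or "transport".
// Illustrative helper, not part of the WebRTC API.
function groupStatsByType(report) {
  const byType = new Map();
  report.forEach(stats => {
    if (!byType.has(stats.type)) byType.set(stats.type, []);
    byType.get(stats.type).push(stats);
  });
  return byType;
}

// Browser usage (promise-based form):
// const report = await pc.getStats();
// const inbound = groupStatsByType(report).get('inbound-rtp') || [];
```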

Support for Opus 120 ms encoding in WebRTC

Starting from version 1.2-alpha, Opus supports direct encoding of audio frames with durations of up to 120 ms. Chromium does not yet include this Opus version in its third-party dependencies, but WebRTC has built the infrastructure to support 120 ms encoding. Longer frames reduce transport overhead on low-capacity networks. To take advantage of the upcoming Opus 120 ms frame encoding, the audio network adaptor has also been updated to allow the frame length to adapt to 120 ms.
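One common way applications hint at a longer frame duration is via the ptime/maxptime SDP attributes on the audio m= section. A sketch of such SDP munging (the helper name is ours; whether the encoder actually produces 120 ms frames depends on the Opus build and the audio network adaptor, and the attributes are only a hint):

```javascript
// Add "a=ptime:<ms>" and "a=maxptime:<ms>" to the audio m= section of an
// SDP blob, replacing any existing ptime/maxptime attributes.
// Illustrative helper, not part of the WebRTC API.
function requestPtime(sdp, ms) {
  const lines = sdp.split('\r\n').filter(
    l => !l.startsWith('a=ptime:') && !l.startsWith('a=maxptime:'));
  const out = [];
  let inAudio = false;
  for (const line of lines) {
    if (line.startsWith('m=') && inAudio) {
      // Leaving the audio section: append attributes before the next m= line.
      out.push('a=ptime:' + ms, 'a=maxptime:' + ms);
      inAudio = false;
    }
    if (line.startsWith('m=audio')) inAudio = true;
    out.push(line);
  }
  if (inAudio) out.push('a=ptime:' + ms, 'a=maxptime:' + ms);
  return out.join('\r\n');
}

// Browser usage: munge the remote description before applying it, e.g.
// answer.sdp = requestPtime(answer.sdp, 120);
// await pc.setRemoteDescription(answer);
```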


RTCPeerConnection.setConfiguration

The setConfiguration method allows an application to modify the RTCConfiguration of an RTCPeerConnection. Specifically, it allows changes to the ICE servers and ICE transport policy, e.g., to supply new TURN credentials when the existing credentials expire. Previously, there was no workaround for this scenario other than a full teardown of the connection. Another use case is changing the ICE transport policy depending on the phase of a call. For example, a call may begin with only relay connections (either to speed up call setup or to protect the user’s privacy) and later switch to the “all” policy. Note that the iceCandidatePoolSize member of RTCConfiguration is still unsupported, but is planned to be implemented in M59.
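A sketch of the TURN-credential refresh described above (the helper name, server URLs, and credential values are illustrative, not part of the WebRTC API; per the spec, the new configuration takes effect on the next ICE gathering, so an ICE restart follows):

```javascript
// Build an updated RTCConfiguration with fresh TURN credentials,
// preserving all other fields. Illustrative helper.
function withFreshTurnCredentials(config, username, credential) {
  const iceServers = (config.iceServers || []).map(server => {
    const urls = Array.isArray(server.urls) ? server.urls : [server.urls];
    const isTurn = urls.some(u => u.startsWith('turn:') || u.startsWith('turns:'));
    // Only TURN entries carry credentials; STUN entries are left untouched.
    return isTurn ? { ...server, username, credential } : server;
  });
  return { ...config, iceServers };
}

// Browser usage: apply the new servers, then restart ICE so the fresh
// credentials are used when gathering new relay candidates.
// pc.setConfiguration(withFreshTurnCredentials(pc.getConfiguration(), user, pass));
// const offer = await pc.createOffer({ iceRestart: true });
// await pc.setLocalDescription(offer);
```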

Authenticated HTTP proxy

WebRTC can now make use of proxies that require explicit credentials, as long as the user has authenticated against said proxy at least once before initiating a WebRTC connection. Proxies that use Single-Sign-On or that do not require authentication continue to work as expected.

Script for generating WebRTC Android Library (.aar)

There is now a script for generating the WebRTC Android library (.aar). The script lives in tools-webrtc/android/ and can be used to generate an .aar file that can be included in an Android Studio project that uses WebRTC. More information on how to use the script can be found at the top of the script itself.

Enable VP9 support in WebRTC HW decoders

Chrome already has VP9 HW decoder implementations: DXVA-based on Windows and V4L2/VAAPI-based on Chrome OS. It is now possible to use these as external video decode accelerators in WebRTC calls that use VP9.

New video jitter buffer

The video jitter buffer has been rewritten from scratch. The new jitter buffer is implemented as five classes (PacketBuffer, NackModule, FrameObject, RtpFrameReferenceFinder, FrameBuffer). The benefits of the new jitter buffer are lower code complexity, resulting in easier maintenance and tuning, and it opens up the possibility of implementing transports other than RTP. The new design will also enable a reduction in memcpys, but this requires further work. There should be no significant difference in behavior between the old and the new video jitter buffer.

Audio output debug recording

A debug recording of the audio output is now generated when “Enable diagnostic audio recordings” is checked in chrome://webrtc-internals. The playout audio is recorded to file in the browser process, close to the OS/sound system.

More details can be found in the WebRTC Google Group (discuss-webrtc).