Publicly and Freely Available MPEG Standards

If you’ve ever been in a situation where you needed access to MPEG standards, you probably discovered very quickly that they’re not easy to get. Standards created by MPEG working groups are published by ISO/IEC, and for non-members the vast majority are available only for purchase through the official ISO/IEC web store, where a single PDF copy can cost several hundred Swiss francs.

Luckily, MPEG does make some of its standards publicly available for free. However, the index of those freely available standards can be surprisingly difficult to find online. Since apparently my blog’s SEO score is somehow higher than MPEG’s when it comes to streaming media technologies, here is a link to help more folks find MPEG’s official list of publicly available standards:

https://standards.iso.org/ittf/PubliclyAvailableStandards/

Another helpful tip: MPEG standards developed jointly with the ITU are published through both ISO/IEC and ITU, and in many cases where the ISO/IEC version is available only for purchase, the ITU version of the same standard is available for free at https://www.itu.int/hub/pubs/. Even when ITU charges for the latest (current) edition of a standard, an older superseded edition may be available for free, so always check the several most recently published editions for a free download. For example, here is the MPEG-2 Part 2 video coding standard available as ISO/IEC 13818-2 for 198 CHF (~$220 USD), and here is the same standard available as ITU-T H.262 for free (2012 edition).

Posted in DASH, H.264, H.265 | Comments Off on Publicly and Freely Available MPEG Standards

Low Video Resolutions in Per-Title Adaptive Encoding

Jan Ozer, producer of Streaming Learning Center courses and contributing editor of Streaming Media magazine, reached out to me recently with an interesting question:

If using per-title video encoding (or any other multi-bitrate encoding approach where resolutions are dynamically determined based on video quality metrics rather than predetermined statically) and the encoding algorithm suggests that for a given video source even the lowest bitrates in the encoding ladder could be encoded at 1080p resolution with satisfactory visual quality – is there any reason why one would still want to include resolutions below 1080p in that encoding ladder?

My short answer was: YES.

For my long answer (as well as contributions from David Ronca, Derek Prestegard and Fabio Sonnati) check out Jan’s article Choosing the Resolution for Lower Rungs on Your Encoding Ladder.

Posted in DASH, H.264, H.265, HLS | Comments Off on Low Video Resolutions in Per-Title Adaptive Encoding

SCTE-35: In-Band Event Signaling For Live OTT

I recently did a talk about SCTE-35 at the local Seattle Video Tech Meetup. If you’re interested in a basic overview of the SCTE-35 standard for in-band event signaling in live streams, here’s the slide deck:

PDF download: SCTE-35 In-Band Event Signaling For Live OTT

Posted in DASH, HLS | Comments Off on SCTE-35: In-Band Event Signaling For Live OTT

Understanding HLS Versions and Client Compatibility

HTTP Live Streaming protocol, better known as HLS, was originally created by Apple for the launch of the iPhone 3GS. Since its original introduction in 2009, Apple has maintained the official HLS protocol specification as an informational IETF Internet-Draft, updating it roughly every six months to maintain its status as an active IETF working document. (Despite its now 7-year stint as an IETF working document, HLS has never been officially ratified as a standard by any industry organization, therefore it still remains a de facto standard in the streaming world.)

The following table summarizes all the updates Apple has made to the official HLS specification to date:

HLS Version  IETF Draft  Published    Tags, Attributes or Features Introduced or Removed *
1            00          5/1/2009     #EXTINF, #EXT-X-TARGETDURATION, #EXT-X-MEDIA-SEQUENCE, #EXT-X-KEY, #EXT-X-PROGRAM-DATE-TIME, #EXT-X-ALLOW-CACHE, #EXT-X-STREAM-INF, #EXT-X-ENDLIST
             01          6/8/2009
             02          10/5/2009    #EXT-X-DISCONTINUITY
2            03          4/2/2010     #EXT-X-VERSION, IV
             04          6/5/2010
3            05          11/19/2010   floating point segment durations
             06          3/31/2011    #EXT-X-PLAYLIST-TYPE
4            07          9/30/2011    #EXT-X-BYTERANGE, #EXT-X-MEDIA, #EXT-X-STREAM-INF, #EXT-X-I-FRAMES-ONLY, #EXT-X-I-FRAME-STREAM-INF, alternate audio/video renditions
             08          3/23/2012
5            09          9/22/2012    #EXT-X-MAP, X-TIMESTAMP-MAP, KEYFORMAT, KEYFORMATVERSIONS, WebVTT subtitles, sample AES encryption
             10          10/15/2012
             11          4/16/2013
6            12          10/14/2013   #EXT-X-DISCONTINUITY-SEQUENCE, #EXT-X-START, PROGRAM-ID, CEA-608 service channels
             13          4/16/2014    #EXT-X-INDEPENDENT-SEGMENTS
7            14          10/14/2014   #EXT-X-SESSION-DATA, #EXT-X-ALLOW-CACHE, CEA-708 service channels
             15          4/15/2015    FRAME-RATE, E-AC-3 codec support
             16          4/15/2015
             17          10/16/2015   #EXT-X-SESSION-KEY
             18          11/19/2015
             19          4/4/2016     #EXT-X-DATERANGE, SCTE-35 signaling

* Some attributes and features may have been omitted for conciseness.

Starting in 2010 the HLS specification introduced the concept of HLS protocol versioning in an effort to manage HLS client compatibility. What’s been surprising, however, is just how frequently HLS protocol versioning has been misunderstood by implementers. Over the course of working with HLS I’ve heard many customers/partners/vendors say things like:

“Our HLS client is only v3 compatible. Can you give us an HLS playlist version without WebVTT subtitles so it doesn’t break our client?”

“This HLS playlist says it’s v3 but we found it contains #EXT-X-INDEPENDENT-SEGMENTS tag which is a v6 feature. You need to change your declared version to 6.”

“Our HLS content has multiple audio languages so you must package it as v4.”

And if you take a look at the table above, these statements seem to make sense. WebVTT subtitles were introduced in version 5 so they shouldn’t be present in a v3 playlist, right? #EXT-X-INDEPENDENT-SEGMENTS tag was introduced in v6 so it shouldn’t be included in a v3 playlist, right? Multiple audio languages require v4 packaging because that’s when alternate renditions were introduced, right?

Wrong. 🙂

This common misinterpretation of the HLS spec is rooted in the assumption that the purpose of HLS protocol versions is to ensure certain features work only on clients which support them. But that is not the case. The purpose of HLS protocol versions is to ensure certain features don’t break older clients, which is a fundamentally different problem.

Nearly every communications protocol has among its design goals the goal of minimizing compatibility issues between clients and services as the protocol evolves over time. The HLS protocol accomplishes this forward-compatibility goal by setting forth two essential client implementation requirements (section 6.3.1: General Client Responsibilities):

Clients MUST ensure […] that the EXT-X-VERSION tag, if present, specifies a protocol version supported by the client; if either check fails, the client MUST NOT attempt to use the Playlist, or unintended behavior could occur.

To support forward compatibility, when parsing Playlists, Clients MUST:

  • ignore any unrecognized tags.
  • ignore any Attribute/value pair with an unrecognized AttributeName.
  • ignore any tag containing an attribute/value pair of type enumerated-string whose AttributeName is recognized but whose AttributeValue is not recognized, unless the definition of the attribute says otherwise.

These basic tenets allow Apple to introduce new HLS features in the form of new tags (e.g. #EXT-X-DISCONTINUITY, #EXT-X-START, etc.) or new attributes (e.g. VIDEO, AUDIO, SUBTITLES, FRAME-RATE) without breaking old clients. They also enable protocol extensibility, allowing organizations to add proprietary tags (e.g. #EXT-X-SCTE35 for ad break signaling) while remaining compatible with existing HLS clients.
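To illustrate the “ignore what you don’t recognize” rule, here is a minimal sketch (my own illustration, not production code; tag names beyond the spec’s are hypothetical) of how a forward-compatible client might separate recognized tags from ones it must silently ignore:

```python
# Minimal sketch of forward-compatible M3U8 tag handling. A real HLS client
# recognizes many more tags and parses the full attribute grammar; the point
# here is only the spec's rule that unrecognized tags MUST be ignored.
KNOWN_TAGS = {"#EXTM3U", "#EXT-X-VERSION", "#EXT-X-TARGETDURATION",
              "#EXT-X-MEDIA-SEQUENCE", "#EXTINF", "#EXT-X-ENDLIST"}

def parse_playlist(text):
    """Split tag lines into (recognized, ignored) lists."""
    recognized, ignored = [], []
    for line in text.splitlines():
        if not line.startswith("#EXT"):
            continue  # URI lines and plain comments are not tags
        name = line.split(":", 1)[0]
        (recognized if name in KNOWN_TAGS else ignored).append(line)
    return recognized, ignored

playlist = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-SCTE35:CUE="hypothetical"
#EXTINF:9.97,
segment1.ts
#EXT-X-ENDLIST"""

recognized, ignored = parse_playlist(playlist)
# The proprietary #EXT-X-SCTE35 tag is simply skipped, not a fatal error.
```

A client built this way keeps playing when a packager adds tags it has never seen, which is exactly the behavior the spec demands.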

The challenge with designing any protocol over a long period of time, of course, is handling breaking changes. What happens when you need to introduce a feature which fundamentally alters previously established concepts or understandings? A well-implemented HLS client is expected to ignore any tags or attributes it doesn’t recognize, so the only situation in which a compliant client needs to be warned about something new is when a change defies existing assumptions and expectations. This is where protocol versioning comes in. Rather than increment the protocol version every time a new HLS feature is introduced, Apple increments it only when a change breaks backwards compatibility.

For example, up until November 2010 the HLS spec only allowed segment durations (#EXTINF values) to be defined as integer values. When it became evident that greater precision was required, Apple updated the spec (version 05) to allow floating point values for #EXTINF. Since doing so changed the previous definition of #EXTINF, it also created the risk of breaking any existing HLS client which enforced the old integer requirement. So in order to shield old clients from this breaking change Apple incremented the protocol version requirement to 3 for any HLS playlists which use floating point values for segment durations. If implemented correctly, a v2 client should refuse to play a v3 playlist because it can’t guarantee successful playback. And if a v2 client does support floating point #EXTINF values… well then it should declare itself v3 compatible.

So what are these backwards-compatibility-breaking features which force HLS version increments? Fortunately, the HLS specification provides an unambiguous answer in Section 7: Protocol version compatibility, from which we can compile this table:

If M3U8 playlist uses… You must declare at least version…
IV attribute of the EXT-X-KEY tag 2
Floating-point EXTINF duration values 3
EXT-X-BYTERANGE tag 4
EXT-X-I-FRAMES-ONLY tag 4
KEYFORMAT and KEYFORMATVERSIONS attributes of the EXT-X-KEY tag 5
EXT-X-MAP tag 5
EXT-X-MAP tag in a playlist that does not contain EXT-X-I-FRAMES-ONLY 6
“SERVICE” values for the INSTREAM-ID attribute of the EXT-X-MEDIA tag 7
In determining HLS protocol version, this is the only table that matters. It doesn’t matter which version of the HLS spec introduced a particular feature. You only need to increment your declared HLS version if your playlist contains any of the tags, attributes or features listed above.
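As an illustration, the Section 7 rules above can be turned into a simple version calculator. This is my own sketch using naive string matching, not a real packager; a production implementation would parse the playlist properly rather than grep it:

```python
import re

# Derive the minimum #EXT-X-VERSION a playlist must declare, based solely on
# the Section 7 compatibility rules summarized in the table above.
def minimum_hls_version(playlist: str) -> int:
    version = 1
    if re.search(r'#EXT-X-KEY:[^\n]*\bIV=', playlist):
        version = max(version, 2)
    if re.search(r'#EXTINF:\d+\.\d+', playlist):        # floating-point durations
        version = max(version, 3)
    if '#EXT-X-BYTERANGE' in playlist or '#EXT-X-I-FRAMES-ONLY' in playlist:
        version = max(version, 4)
    if re.search(r'#EXT-X-KEY:[^\n]*\b(KEYFORMAT|KEYFORMATVERSIONS)=', playlist):
        version = max(version, 5)
    if '#EXT-X-MAP' in playlist:
        version = max(version, 5)
        if '#EXT-X-I-FRAMES-ONLY' not in playlist:
            version = max(version, 6)
    if re.search(r'INSTREAM-ID="SERVICE', playlist):
        version = max(version, 7)
    return version
```

Note that a playlist carrying WebVTT subtitles via #EXT-X-MEDIA comes out as version 1 unless some other feature raises it, which is exactly the point this section is making.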

This is why an HLS playlist utilizing a feature such as WebVTT subtitles doesn’t need to be declared as version 5. In fact, it doesn’t even need to be declared any higher than version 1. Why is that? Well, in order to define WebVTT subtitles in a master playlist one must use the #EXT-X-MEDIA tag. Since the #EXT-X-MEDIA tag is not considered a backwards-compatibility-breaking tag, it poses no risk to clients which don’t support it: a truly spec-compliant client that doesn’t support #EXT-X-MEDIA is expected to ignore all unknown tags and proceed without them. Therefore, a playlist containing #EXT-X-MEDIA definitions of WebVTT subtitles is not obligated to “warn” a client about potential compatibility issues.

Note that this doesn’t guarantee at all that a client consuming such a playlist – regardless of its declared version number – will have the ability to correctly ingest, process and render WebVTT subtitles. Similarly, stating #EXT-X-VERSION:4 does not guarantee that a compatible client will be able to switch between multiple audio languages just because they’re present in the playlist. The goal of HLS versioning isn’t to ensure fully-featured playback but to prevent catastrophic playback failure, and in that respect failure to render subtitles or switch audio languages isn’t considered catastrophic. This may seem like an odd distinction, and the HLS spec sort of acknowledges it by using alternate media renditions as example: “The EXT-X-MEDIA tag and the AUDIO, VIDEO and SUBTITLES attributes of the EXT-X-STREAM-INF tag are backward compatible to protocol version 1, but playback on older clients may not be desirable.  A server MAY consider indicating a EXT-X-VERSION of 4 or higher in the Master Playlist but is not required to do so.”

In other words: HLS versioning can guarantee that your content will play back, but it can’t guarantee that the experience will be perfect. 🙂

Posted in HLS | 1 Comment

Understanding Latency in HTTP-based Adaptive Streaming

Jan Ozer of Streaming Media magazine contacted me recently with a simple question about Smooth Streaming and HLS latency… and I provided a more elaborate reply than he perhaps anticipated. Here is my response in its entirety, for your reading pleasure:

http://www.streaminglearningcenter.com/blogs/understanding-abr-latency-a-guest-post-from-alex-zambelli.html

P.S. I fully recognize the irony of my first official post of 2015 being a promotion of a blog post I sort of accidentally wrote for somebody else’s blog. 🙂
Posted in HLS, Smooth Streaming | Comments Off on Understanding Latency in HTTP-based Adaptive Streaming

Partying Like It’s 2007

I promise to do a better job in 2015 of posting to this blog more frequently. 🙂 In the meantime you can find me on Twitter, where I will be posting and sharing content relevant to streaming media technologies. I am @zambelli24fps.

Posted in Smooth Streaming | Comments Off on Partying Like It’s 2007

Musings On the Future of Live Linear Television

I’ve written an article for The Guardian’s media network blog about current trends in TV content distribution, how they’re affecting our viewing habits, and how ultimately those will affect the future of live linear TV. You can read it here:

http://www.theguardian.com/media-network/media-network-blog/2014/jul/07/cord-cutting-internet-tv-netflix

This is actually the second article I’ve written for The Guardian. The first one, published a year ago, reminisced about the early days of Internet video streaming, how far we’ve come and how much more we still have to grow:

http://www.theguardian.com/media-network/media-network-blog/2013/mar/01/history-streaming-future-connected-tv

Posted in H.264, H.265, Smooth Streaming | Comments Off on Musings On the Future of Live Linear Television

Baptism of Fire in the Olympic Cauldron

The Olympics are upon us again! Even though the ancient Greeks set the Olympic schedule four years apart (an Olympiad is, literally, a period of four years), the modern Olympics since 1992 have been only two years apart due to the interleaving of the Summer and Winter games. In 2008 I got involved in my first Olympics streaming project (while at Microsoft, for NBC), and here we are six years later and I am now involved in my fourth consecutive Olympics project, this time with iStreamPlanet and again for NBC. Incidentally, it is also iStreamPlanet’s fourth consecutive Olympics, but this time it’s a particularly special occasion: this time we’re encoding it with our own software.

What makes the Olympics stand out in the world of live video streaming is their formidable combination of volume and (in)frequency. Whereas most live sporting events take place on a weekly or monthly basis and draw modest viewership numbers, the Olympic Games come around only every two years, can feature 30-40 concurrent live streams, and attract millions of viewers over the course of two weeks. That means anybody wishing to prepare for such an event must plan for two weeks of high-volume operations (including capex and staffing), but then also be ready to give it all up after the two weeks of excitement are over. Such need for extreme elasticity and scalability makes these Olympic projects perfect candidates for cloud computing, and cloud encoding in particular.

Another fascinating aspect of Olympics streaming is that it happens just sporadically enough to truly show how far streaming technology has advanced since the previous Olympics. Every Olympics streaming project pushes the envelope of streaming media technology, making each and every Olympics feel like a brand new baptism of fire. When I was involved in the 2008 Beijing Olympics, the live video streaming technology was still Windows Media-based, and streaming to mobile devices seemed like an extravagant idea. The live video was compressed with the VC-1 codec and peaked at 592×336 resolution (600 kbps), which at the time (before live adaptive streaming) was deemed about as high as we could realistically go without risking last-mile delivery issues. We couldn’t even dream of SD 480p streaming, let alone HD 720p streaming.

Fast-forward two years to the Vancouver 2010 Olympic Games, and the streaming technology had already advanced by leaps and bounds. By February 2010 Microsoft had officially launched Smooth Streaming, its HTTP-based adaptive streaming technology, and 720p HD streaming video was now officially a reality. iStreamPlanet did the live encoding for those Olympics, using Inlet/Cisco encoders and the VC-1 codec, and it was quite possibly the best live video streaming the world had seen at that point. Microsoft’s subsequent 2012 London Olympics efforts iterated on those Vancouver foundations by replacing VC-1 with H.264 and, for the first time, introducing Windows Azure’s cloud computing potential on the origin/services side. Which brings us to 2014 Sochi.

iStreamPlanet’s involvement in these 2014 Olympics consists of:

  • acquiring IP-based video feeds from NBC/OBS
  • scheduling Aventus channels in our internal CMS
  • encoding live multi-bitrate video in Aventus for delivery to Windows Azure Media Services
  • inserting ad markers into live streams to enable downstream ad insertion scenarios

The live Olympic video feeds are delivered to us over a private fiber IP connection as 20-25 Mbps 1080i H.264-compressed MPEG-2 Transport Streams. Aventus, our cloud-based video encoding software, then ingests those MPEG transport streams and transcodes them to 7 different video bitrates before publishing them to Windows Azure Media Services entry points.

Aventus “talks” to WAMS live publishing points using Smooth Streaming as its transfer protocol, but WAMS then uses its Dynamic Packaging feature (popularly known as Dynamux) to re-multiplex the Smooth streams to Apple HLS and Adobe HDS formats for delivery to end users. The live content is pulled from WAMS origins by Akamai CDN and distributed throughout their network to viewers playing back NBC Olympics content on Windows, MacOS, iOS, Android and Windows Phone devices.

[Figure: 2014 NBC Olympics Live Streaming Workflow]

The Smooth/HLS/HDS live streams are compressed using H.264 and AAC codecs for video and audio, respectively. Audio is encoded as 56 kbps stereo HE-AAC v1, whereas video is delivered as H.264 in multiple bitrates/resolutions:

Bit Rate (kbps)   Resolution
3450              1280×720
2200              960×540
1400              960×540
900               512×288
600               512×288
400               340×192
200               340×192

This somewhat odd encoding profile (where particular resolutions are duplicated) was specifically created in order to minimize the number of client-side resolution changes, but otherwise follows fairly standard bitrate progressions.
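To see the trade-off in numbers, here is a quick back-of-the-envelope calculation of bits per pixel for each rung of the ladder above (my own illustration; 30 fps is an assumption, as actual frame rates varied by feed):

```python
# Bits-per-pixel for each rung of the encoding ladder above.
ladder = [(3450, 1280, 720), (2200, 960, 540), (1400, 960, 540),
          (900, 512, 288), (600, 512, 288), (400, 340, 192), (200, 340, 192)]

fps = 30  # assumed frame rate, for illustration only
for kbps, w, h in ladder:
    bpp = (kbps * 1000) / (w * h * fps)  # bits per pixel per frame
    print(f"{kbps:>5} kbps @ {w}x{h}: {bpp:.3f} bits/pixel")
```

Adjacent rungs that share a resolution differ only in bits per pixel, so a quality switch between them never forces the player to change display resolution.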

Another interesting aspect of the Aventus Olympic workflow is ad insertion. Aventus receives REST-based API calls for ad insertion from ad operators via its video CMS, converts them to XML-formatted SCTE-35 cue messages, and inserts those into Smooth Streaming sparse tracks. The ad markers are then converted to HLS and HDS ad markers by WAMS, which downstream video players can interpret and act upon.

The most groundbreaking thing about these Olympics is that they are the first Olympics to be streamed (acquired, encoded, packaged, delivered) entirely in the cloud. If you’re in the U.S., I hope you’re enjoying watching the Sochi Olympics online. If not… well, there’s always Rio 2016. 🙂

Posted in Aventus, H.264, Olympics, Smooth Streaming, Windows Azure Media Services | 3 Comments

Introducing iStreamPlanet Aventus

As I glance at my blog I realize it’s been 7 months since my last post, which is as good a sign as any that I should finally tell you a little bit about what I’ve been up to since I left Microsoft. 🙂

iStreamPlanet, for most of its 13-year history, has been predominantly a streaming services company, focused on providing best-of-breed streaming media services for live events and live linear channels: live signal ingest, video encoding (live and VOD), origin server hosting, DRM packaging and licensing, video workflow automation, Xbox app development, etc. A little over a year ago, iStreamPlanet decided to diversify and invest (thanks to its Series A investors) in something brand new, aiming to reduce the complexity of the live video delivery workflow through software development, and that’s how Aventus was born.

Aventus officially marks iStreamPlanet’s foray into the live video encoder market. Aventus is a software-based live video encoder designed to run specifically in virtualized environments (currently Microsoft Hyper-V) on x86-64 commodity hardware and focused exclusively on IP-based video delivery scenarios. It’s different from most other live video encoders in that:

  • It supports only IP-based signal ingest (e.g. M2TS/UDP, RTMP) and therefore isn’t burdened with legacy support of video capture hardware devices
  • It’s focused primarily on over-the-top delivery: it outputs only modern, HTTP-based adaptive streaming protocols such as Microsoft Smooth Streaming, Apple HTTP Live Streaming, Adobe HDS and (eventually) MPEG-DASH
  • It’s designed from ground up to work in virtualized environments, allowing it to take advantage of grid computing and to easily scale resources according to need, deployable in either private data centers or public clouds
  • It’s entirely CPU-based, which means that it can run on any x86-64 commodity hardware thus significantly reducing capital costs for customers looking to build out their live encoding capacity

The last 9 months have been a thrilling experience, and I look forward to being able to announce more details about Aventus v1.0 at the upcoming IBC Show in Amsterdam next month.

Last but not least… We are hiring! We are currently looking for talented C++ and C# developers with experience in digital media (codecs, formats, DSPs) who are passionate about streaming media and who would enjoy working on a v1 live video encoder product. Our software engineering team is located in Redmond, Washington. If you are interested, please contact me by e-mail and include your resume and/or LinkedIn profile link.

Posted in H.264, Smooth Streaming | 1 Comment

H.265/HEVC Ratification and 4K Video Streaming

OK, so maybe it was a shorter break from blogging than I expected. As it turns out the world does not stop when I change jobs. 😉

The media world today is abuzz with news of H.265/HEVC approval by the ITU. In case you’ve been hiding from NAB/IBC/SM events for the past two years – or if you’re a WebM hermit – I will have you know that H.265 is the successor standard to H.264, aka MPEG-4 AVC. As was the case with its predecessor, it is the product of years of collaboration between the ISO/IEC Moving Picture Experts Group (MPEG) and the International Telecommunication Union (ITU) Video Coding Experts Group (VCEG). The new video coding standard is important because it promises bandwidth savings of about 40-45% for the same quality as H.264. In a world where video is increasingly being delivered over-the-top and bandwidth is not free – that kind of savings is a big deal.

What most media reports seem to have focused on is the potential effect that H.265 will have on bringing us closer to 4K video resolution in OTT delivery. Most reports speculate that H.265 will allow 4K video to be delivered over the Internet at bit rates between 20 and 30 Mbps. In comparison, my friend Bob Cowherd recently theorized on his blog that 4K delivery using the current H.264 video standard would require about 45 Mbps to deliver 4K video OTT.

While I think the relative difference between those two estimates is in the ballpark of the 40% bandwidth savings that H.265 promises, I actually think that both estimates are somewhat pessimistic. Given the current state of video streaming technology, I think we’ll actually be able to deliver 4K video at lower bit rates when the time comes for 4K streaming.

A common mistake that many people dealing with lossy video compression make is to assume that the ratio between bit rate (bps) and picture size (pixels/sec) remains fixed and proportional as both values scale. I don’t think that’s the case. I believe that the relationship between bit rate and picture size is not linear, but closer to a power function that looks like this:

[Figure: H.264 Bits/Pixel Graph]

In other words, I believe that as the pixel count gets higher a DCT-based video codec requires fewer bits to maintain the same level of visual quality. Here’s why:

  1. The size of a 16×16 macroblock, which is the smallest unit of DCT-based compression used in contemporary codecs such as H.264 and VC-1, grows smaller relative to the total size of the video image as the image resolution grows higher. For example, in a 320×180 video the 16×16 macroblock represents 0.444% of the total image size, whereas in a 1920×1080 video the 16×16 macroblock represents only 0.0123% of the total image. A badly compressed macroblock in a 320×180 frame would therefore be more objectionable than a badly compressed macroblock in a 1920×1080 frame.
  2. As many studies have shown, the law of diminishing returns applies to video/image resolution too. If you sit at a fixed distance from your video display device eventually you will no longer be able to distinguish the difference between 720p, 1080p and 4K resolutions due to your eye’s inability to resolve tiny pixels from a certain distance. Ipso facto, as the video resolution goes up your eyes become less likely to distinguish compression artifacts too – which means the video compression can afford to get sloppier.
  3. Historically the bit rates used for OTT video delivery and streaming have been much lower than those used in broadcasting, consumer electronics and physical media. For example, digital broadcast HDTV typically averages ~19 Mbps for video (in CBR mode), while most Blu-ray 1080p videos average ~15-20 Mbps (in 2-pass VBR mode). Those kinds of bit rates are possible because those delivery channels have the luxury of either dedicated bandwidth or high-capacity physical media. However, in the OTT and streaming world video bit rate has always been shortchanged in comparison. Most 720p30 video streaming today, whether live or on-demand, is encoded at average 2.5-3.5 Mbps (depending on complexity and frame rate). 1080p30 video, when available, is usually streamed at 5-6 Mbps. Whereas Blu-ray tries to give us movies at a quality level approaching visual transparency, streaming/OTT is completely driven by the economics of bandwidth and consequently only gives us video at the minimum bit rate required to make the video look generally acceptable (and worthy of its HD moniker). To put it bluntly, streaming video is not yet a videophile’s medium.

So taking those factors into consideration, what kind of bandwidth should we expect for 4K video OTT delivery? If 1080p video is currently being widely streamed online using H.264 compression at 6 Mbps, then 4K (4096×2304) video could probably be delivered at bit rates around 18-20 Mbps using the same codec at similar quality levels. Again, remember, we’re not comparing Blu-ray quality levels here – we’re comparing 2013 OTT quality levels which are “good enough” but not ideal. If we switch from H.264 to H.265 compression we could probably expect OTT delivery of 4K video at bit rates closer to 12-15 Mbps (assuming H.265’s 40% efficiency improvements do indeed come true). I should note that those estimates are only applicable to 24-30 fps video. If the dream of 4K OTT video also carries an implication of high frame rates – e.g. 48 to 120 fps – then the bandwidth requirements would certainly go up accordingly too. But if the goal is simply to stream a 4K version of “Lawrence of Arabia” into your home at 24 fps, that dream might be closer to reality than you think.
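To put rough numbers behind that reasoning, here is a back-of-the-envelope sketch (my own model, not a codec measurement) that fits a power function, bitrate = k · pixels^a, to the typical OTT operating points cited above (720p at ~3 Mbps, 1080p at ~6 Mbps) and extrapolates to 4K:

```python
import math

# Fit bitrate = k * pixels^a to two typical OTT operating points.
p720, r720 = 1280 * 720, 3.0      # pixels, Mbps
p1080, r1080 = 1920 * 1080, 6.0
a = math.log(r1080 / r720) / math.log(p1080 / p720)  # exponent < 1 => sublinear
k = r720 / p720 ** a

p4k = 4096 * 2304
h264_estimate = k * p4k ** a      # extrapolated H.264 bitrate for 4K, in Mbps
h265_estimate = h264_estimate * 0.6  # assuming H.265's ~40% savings hold
```

The fitted exponent comes out around 0.85, and the extrapolation lands near 22 Mbps for H.264 and roughly 13 Mbps for H.265: the same ballpark as the estimates above, and well below the 30-45 Mbps figures circulating in the press.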
One last thing: In his report about H.265 Ryan Lawler writes that “nearly every video publisher has standardized [H.264] after the release of the iPad and several other connected devices. It seems crazy now, but once upon a time, Apple’s adoption of H.264 and insistence on HTML5-based video players was controversial – especially since most video before the iPad was encoded in VP6 to play through Adobe’s proprietary Flash player.” Not so fast, Ryan. While Apple does deserve credit for backing H.264 against alternatives, they were hardly the pioneers of H.264 web streaming. H.264 was already a mandatory part of the HD-DVD and Blu-ray specifications when those formats launched in 2006 as symbols of the new HD video movement. Adobe added H.264 support to Flash 9 (“Moviestar”) in December 2007. Microsoft added H.264 support to Silverlight 3 and Windows 7 in July 2009. The Apple iPad did not launch until April 2010, which was also the same month Steve Jobs posted his infamous “Thoughts on Flash” blog post. So while Apple certainly did contribute to H.264’s success, they were hardly the controversial H.264 advocate Ryan makes them out to be. H.264 was already widely accepted at that point and its success was simply a matter of time.

Posted in H.264, H.265 | 9 Comments