It worked twice and will work again

On the 30th anniversary of the first MPEG meeting I wrote a paper that extolled the virtues of MPEG in terms of productivity – in absolute and relative terms – and of actual adoption. In this paper I would like to present the vision that has driven MPEG since its early days and that explains its success.

Today it is hard to believe the state of television 30 years ago. Each country retained the freedom to define its own baseband and transmission standards for terrestrial, cable and satellite distribution. The baseband and distribution standards of package media (video cassettes, laser discs etc.) were “international” but controlled by a handful of companies (remember Betamax and VHS).

With digital technologies coming of age, everybody was getting ready for more of the same. The only expected difference was the presence of a new player – the telecom industry – who saw digital as the way to get into the video distribution business.

Figure 1 depicts the situation envisaged in each country or region or industry: the digital baseband would, as a matter of principle, be different.

Figure 1 – Digital television seen with analogue eyes

The MPEG magician played a magic trick on the global media industry, saying: look, we give you a single standard for the digital baseband that works for telecom, terrestrial, cable, satellite and package media distribution. There would be a lot to say about how a group of compression experts convinced a global industry worth hundreds of billions of USD, but let’s simply say that it worked.

Figure 2 shows how different the digital television industry was from the one depicted in Figure 1: all industries used the same media compression layer.

Figure 2 – The digital television distribution

MPEG managed the standards of the media compression layer for the 5 industries and a new “media compression layer industry” – global this time – was born partly from existing pieces and partly from entirely new pieces.

This was only the beginning of the story because, in the meantime, the internet had matured into a (kind of) broadband distribution infrastructure for fixed and mobile access. MPEG took notice and developed the standards that would serve these new industries while still serving the old ones. Figure 3 illustrates the new configuration of the industry, which is largely the one that exists today.

Figure 3 – The digital media distribution

So the magic worked again. All the industries touched by the MPEG magic have had no reason for regret:

  • Digital Media revenues amounted to 126.4 B$ in 2018, after steadily increasing over the last few years
  • The digital TV and video industry, including e.g. Netflix and Amazon, is expected to be worth 119.2 B$ in 2022, up from 64 B$ in 2017
  • Digital ad spending overtook TV ad spending in 2017, with a record spending of 209 B$ worldwide.

In another paper I reported that the Italian ISO member body has requested ISO to establish a Data Compression Technologies (DCT) Technical Committee (TC). That proposal is an extension of the model described above and is represented in Figure 4 (the new industries mentioned there are the likely first targets of the DCT TC).

Figure 4 – The data compression industry

The DCT TC will provide data compression standards for all industries that need data compression to do their job better. The field of endeavour called “data compression” generates standard algorithms, expressed in abstract languages such as mathematical formulae or code snippets, for implementation in software or silicon across a variety of application domains.
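As a toy illustration of what “standard algorithms expressed by code snippets” can look like – a hypothetical sketch, not any actual MPEG or DCT algorithm – here is run-length encoding, which replaces repeated symbols with (symbol, count) pairs that any conforming decoder can reverse:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (symbol, count) pairs."""
    encoded = []
    for symbol in data:
        if encoded and encoded[-1][0] == symbol:
            encoded[-1] = (symbol, encoded[-1][1] + 1)
        else:
            encoded.append((symbol, 1))
    return encoded

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Invert rle_encode, reconstructing the original string."""
    return "".join(symbol * count for symbol, count in pairs)

# A run-heavy input compresses well; a standard fixes the exact coded
# representation so that any decoder can reconstruct the original data.
original = "AAAABBBCCD"
assert rle_decode(rle_encode(original)) == original
```

The value of standardisation lies precisely in fixing such a representation once, so that encoders and decoders built by different industries interoperate.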

I look forward to the new MPEG magic played by the Data Compression Technologies Technical Committee, providing new records in sustained growth to new industries.

Compression standards for the data industries


In my post Compression – the technology for the digital age, I called data compression “the enabler of the evolving media-rich communication society that we value”. Indeed, data compression has freed the potential of digital technologies in facsimile, speech, photography, music, television, video on the web, on mobile, and more.

MPEG has been the main contributor to the stellar performance of the digital media industries – content and services, and hardware and software. Something new is brewing in MPEG because it is applying its compression toolkit to non-media data such as point clouds and DNA reads from high-speed sequencing machines, and plans to do the same for neural networks.

Recently UNI – the Italian ISO member body – has submitted to ISO the proposal to create a Technical Committee on Data Compression Technologies (DCT, in the following) with the mandate to develop standards for data compression formats and related technologies to enable compact storage as well as inter-operable and efficient data interchange among users, applications and systems. MPEG activities, standards and brand should be transferred to DCT.

With its track record, MPEG has proved that it is possible to provide standard data compression technologies that are the best in their class at a given time to serve the needs of the digital media industries. By proposing to create DCT, Italy seeks to extend the successful MPEG horizontal standard model to the “data industries” at large, including the media industries.

Giving other industries the means to enjoy the benefits of more data accessed and used by systematically applying standard data compression to all data is not an option, but a necessity. Indeed, Forbes estimates that by 2025 the world will produce 163 Zettabytes of data. What will we do with those data, when today only 1% of the data created is actually processed?

Why Data Compression is important to all

Handling data is important for all industries: in some cases it is their raison d’être, in other cases it is crucial to achieving their goals, and in still others data is the oil lubricating the gears.

Data appear in many and various scenarios: in some cases a few sources create huge amounts of continuous data, in other cases many sources create large amounts of data and in still others a very large number of sources create small discontinuous chunks of data.

Common to all scenarios is the need to store, process and transmit data. For some industries – early adopters of digitisation – the need was apparent from the very beginning. For others the need is only gradually becoming apparent now.

Let’s see in some representative examples why industries need data compression.

Telecommunication. Because of the nature of their business, telecommunication operators (telcos) have been the first to be affected by the need to reduce the size of digital data to provide better existing services and/or attractive new services. Today telcos are eager to make their networks available to new sources of data.

Broadcasting. Because of the constraints posed by the finite wireless spectrum on their ability to expand the quality and range of their services, broadcasters have always welcomed more data compression. They have moved from Standard Definition to High Definition, then to Ultra High Definition and beyond (“8k”), but also to Virtual Reality. For each quantum step in the quality of service delivered, they have introduced new compression. More engaging future user experiences will require the ability to transmit or receive ever more data, and ever more types of data.

Public security. MPEG standards are already universally used to capture audio and video information for security or monitoring purposes. However, technology progress enables users to embed more capabilities in (audio and video) sensors, e.g. face recognition, counting of people and vehicles etc., and to share that information in a network of increasingly intelligent sensors that drive actuators. New standard data compression technologies are needed to support the evolution of this domain.

Big Data. In terms of data volume, audio and video, e.g. those collected by security devices or vehicles, are probably the largest component of Big Data, as shown by the Cisco study forecasting that by 2021 video on the internet will account for more than 80% of total traffic. Moving such large amounts of information from source to the processing cloud in an economic fashion requires data compression and their processing requires standards that allow the data to be processed independently of the information source.

Artificial intelligence uses different types of neural networks, some of which are “large”, i.e. occupy many Gigabytes and entail massive computational complexity. To practically move intelligence across networks, as required by many consumer and professional use scenarios, standard data compression technologies are needed. Compression of neural networks is not only a matter of bandwidth and storage memory, but also of power consumption, timeliness and usability of intelligence.
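To make the point concrete, here is a minimal sketch of one well-known neural network compression technique – uniform 8-bit quantisation of 32-bit floating-point weights – offered purely as an illustration, not as any specific technology under MPEG evaluation:

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int = 8):
    """Uniformly quantise float weights to signed integers of the given width."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax or 1.0   # guard against all-zero input
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantised representation."""
    return q.astype(np.float32) * scale

weights = np.random.randn(1000).astype(np.float32)
q, scale = quantize(weights)
recovered = dequantize(q, scale)

# int8 storage is 4x smaller than float32, with a per-weight error
# bounded by half a quantisation step (scale / 2).
assert q.nbytes * 4 == weights.nbytes
assert np.max(np.abs(weights - recovered)) <= scale
```

Real standards would also specify the coded bitstream syntax, per-layer scales and entropy coding, but even this sketch shows why compression trades a small accuracy loss for large savings in bandwidth, storage and power.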

Healthcare. The healthcare world is already using genomics but many areas will greatly benefit by a 100-fold reduction of the size and the time to access the data of interest. In particular, compression will accelerate the coming-of-age of personalised medicine. As healthcare is often a public policy concern, standard data compression technologies are required.

Agriculture and Food. Almost anything related to agriculture and food has a genomic source. The ability to easily process genomic data thanks to compression opens enormous possibilities to have better agriculture and better food. To make sure that compressed data can be exchanged between users, standardised data compression technologies are required.

Automotive. Vehicles are becoming more and more intelligent devices that drive and control their movement by communicating with other vehicles and fixed entities, sensing the environment, and storing the data for future use (e.g., for assessing responsibilities in a car crash). Data compression technologies are required and, especially when security is involved, the technologies must be standard.

Industry 4.0. The 4th industrial revolution is characterised by “Connection between physical and digital systems, complex analysis through Big Data and real-time adaptation”. Collaborative robots and 3D printing, the latter also for consumer applications, are main components of Industry 4.0. Again, data compression technologies are vital to make Industry 4.0 fly and, to support multi-stakeholder scenarios, technologies should be standard.

Business documents. Business documents are becoming more diverse and include different types of media. Storage and transmission of business documents become a concern when bulky data are part of them. Standard data compression technologies are the principal way to reduce the size of business documents now, and more so in the future.

Geographic information. Personal devices consume more and more geographic information and, to provide more engaging user experiences, the information itself is becoming “richer”, which typically means “heavier”. To manage the amount of data, compression technologies must be applied. Global deployment to consumers requires that the technologies be standard.

Blockchains and distributed ledgers enable a host of new applications. Distributed storage of information implies that more information is distributed and stored across the network, hence the need for data compression technologies. These new global distributed scenarios require that the technologies be standard.

Which Data Compression standards?

Data compression is needed if we want to be able to access the information produced or available anywhere in the world. However, as the amount of data grows, new generations of compression standards are released. In the case of video, MPEG has already produced five generations of compression standards and one more is under development.

MPEG compression technologies have had, and continue to have, extraordinarily positive effects on a range of industries, with billions of hardware devices and software applications that use standards for compressing and streaming audio, video, 3D graphics and associated metadata. The universally recognised MP3 and MP4 acronyms demonstrate the impact that data compression technologies have on consumer perception of digital devices and services all over the world.

Non-inter-operable silos, however, are the biggest danger in this age of fast industry convergence, and only international standards based on common data compression technologies can avoid them. Point Clouds and Genomics show that common data compression technologies can indeed be re-used across different industries. Managing different industry requirements is an art, and MPEG has developed it over 30 years for different industries: telecom, broadcasting, consumer electronics, IT, media content and service providers and, more recently, bioinformatics. DCT can safely take up the challenge and do the same for more industries.

How to develop Data Compression standards?

As MPEG has done for the industries it already serves, DCT should only develop “generic” international standards for compression and coded representation of data and related metadata suitable for a variety of application domains so that the client communities can use them as components for integration in their systems.

The process adopted by MPEG should also be adopted by DCT, namely:

  • Identification of data compression requirements (jointly with the target industry)
  • Development of the data compression standard (in consultation with the target industry)
  • Verification that the standard satisfies the agreed requirements (jointly with the target industry)
  • Development of test suites and tools (in consultation with the target industry)
  • Maintenance of the standards (upon request of the target industry).

Data Compression is a very specialised field that many technical and business communities in specific domains are ill-equipped to master satisfactorily. Even if an industry succeeds in attracting the necessary expertise, the following will likely happen:

  1. The result is less than optimal compared to what could have been obtained from the best experts;
  2. The format developed is incompatible with other similar formats with unexpected inter-operability costs in an era of convergence;
  3. The implementation cost of the format is too high because an industry may be unable to offer sufficient returns to developers;
  4. Test suites and tools cannot be developed because a systematic approach cannot be improvised;
  5. The experts who have developed the standard are no longer around to ensure its maintenance.

Building the DCT work plan

The DCT work plan will be decided by the National Bodies joining it. However, the following is a reasonable estimate of what that work plan will be.

Data Compression for Immersive Media. This is a major current MPEG project that comprises systems support for immersive media; video compression; metadata for immersive audio experiences; immersive media metrics; immersive media metadata and network-based media processing (NBMP). A standard for systems support (OMAF) has already been produced, a standard for NBMP is planned for 2019, a video standard for 2020 and an audio standard for 2021. After completing the existing work plan, DCT should address the promising light field and audio field compression domains to enable truly immersive user experiences.

Data Compression for Point Clouds. This is a new, but already quite advanced area of work for MPEG. It makes use of established MPEG video and 3D graphics technologies to provide solutions for entertainment and other domains such as automotive. The first standard will be approved in 2019 but DCT will also work for new generations of point cloud compression standards for delivery in the early 2020s.

Data Compression for Health Genomics. This is the first entirely non-media field addressed by MPEG. In October 2018 the first two parts – Transport and Compression – will be completed, and the other 3 parts – API, Software and Conformance – will be released in 2019. The work is done in collaboration with ISO/TC 276 Biotechnology. Studies for a new generation of compression formats will start in 2019, and DCT will need to drive those studies to completion, along with other data types generated by the “health” domain for which data compression standards can be envisaged.
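As a rough intuition for why genomic data compress so well – a toy sketch, not the actual coding tools of the MPEG genomic compression standard – the four DNA bases fit in 2 bits each, so even naive bit-packing shrinks an ASCII-encoded read four-fold; entropy coding and reference-based techniques go much further:

```python
BASE_TO_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BITS_TO_BASE = "ACGT"

def pack_read(read: str) -> bytes:
    """Pack a DNA read (A/C/G/T only) at 2 bits per base, 4 bases per byte."""
    packed = bytearray()
    for i in range(0, len(read), 4):
        group = read[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | BASE_TO_BITS[base]
        byte <<= 2 * (4 - len(group))  # left-align a short final group
        packed.append(byte)
    return bytes(packed)

def unpack_read(packed: bytes, length: int) -> str:
    """Recover the original read; `length` is needed because of padding."""
    bases = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

read = "GATTACAGATTACA"
packed = pack_read(read)
assert unpack_read(packed, len(read)) == read
```

A real genomic coding standard must of course also handle quality scores, unknown bases and metadata, which is where the bulk of the standardisation effort lies.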

Data Compression for IoT. MPEG is already developing standards for the specific “media” instance of IoT called “Internet of Media Things” (IoMT). This partly relies on the MPEG standard called MPEG-V – Sensors and Actuators Data Coding, which defines a data compression layer that can support different types of data from different types of “things”. The first generation of standards will be released in 2019. DCT will need to liaise with the relevant communities to drive the development of new generations of IoT compression standards.

Data Compression for Neural Networks. Work in this area has just begun. A “Call for Evidence” was issued in July 2018 to gather evidence of the state of compression technologies for neural networks, after which a “Call for Proposals” will be issued to obtain the necessary technologies and develop a standard. The end of 2021 is the estimated time of the first neural network compression standard. However, DCT will need to investigate which other compression standards this extremely dynamic field may require.

Data Compression for Big Data. MPEG has already adapted the ISO Big Data reference model for its “Big Media Data” needs. Specific work has not begun yet and DCT will need to get the involvement of relevant communities, not just in the media domain.

Data Compression for health devices. MPEG has considered the need for compression of data generated by mobile health sensors in wearable devices and smartphones to cope with their limited storage, computing, network connectivity and battery. DCT will need to get the involvement of relevant communities and develop data compression standards for health devices that promote their effective use.

Data Compression for Automotive. One of the point cloud compression use cases – efficient storage of the environment captured by sensors on a vehicle – is already supported by the Point Cloud Compression standard under development. There are, however, many more types of data that are generated, stored and transmitted inside and outside of a vehicle for which data compression has positive effects. DCT can offer its expertise to the automotive domain to achieve new levels of efficiency, safety and comfort in vehicles.

The list above includes standards that MPEG is already deeply engaged in or is already working on. However, the range of industries that can benefit from data compression standards is much broader than those mentioned above (see e.g. Why Data Compression is important to all), and the main role of DCT will be to actively investigate the data compression needs of industries, get in contact with them and jointly explore new opportunities for standard development.

Is DCT justified?

The purpose of DCT is to make data compression standards – the key enabler of devices, services and applications generating digital data – accessible to industries and communities that do not have the extremely specialised expertise needed to develop and maintain such standards on their own.

The following collects the key justifications for creating DCT:

  1. Data compression is an enabling technology for any digital data. Data compression has been a business enabler for media production and distribution, telecommunication, and Information and Communication Technologies (ICT) in general by reducing the cost of storing, processing and transmitting digital data. Therefore, data compression will also facilitate enhanced use of digital technologies in other industries that are undergoing – or completing – the transition to digital. As happened for media, by lowering the threshold of access to business, in particular for SMEs, data compression will drive industries to create new business models that will change the way companies generate, store, process, exchange and distribute data.
  2. Data compression standards trigger virtuous circles. By reducing the amount of data required for transmission, data compression will enable more industries to become digital. Being digital will generate more data and, in turn, further increase the need for data compression standards. Because compressed digital data are “liquid” and easily cross industry borders, “horizontal”, i.e. “generic”, data compression standards are required.
  3. Data compression standards remove closed ecosystems bottlenecks. In closed environments, industry-specific data compression methods are possible. However, digital convergence is driving an increase in data exchange across industry segments. Therefore industry-specific standards will result in unacceptable bottlenecks caused by a lack of interoperability. Reliable, high-performance and fully-maintained data compression standards will help industries avoid the pitfalls of closed ecosystems that limit long-term growth potential.
  4. Sophisticated technology solutions for proven industry needs. Data compression is a highly sophisticated technology field with 50+ years of history. Creating efficient data compression standards requires a body of specialists that a single industry can ill afford to establish and, more importantly, maintain. DCT will ensure that the needs for specific data compression standards can always be satisfied by a body of experts who identify requirements with the target industries, develop standards, test for satisfactory support of requirements, produce testing tools and suites, and maintain the standards over the years.
  5. Data compression standards to keep the momentum growing. The industries that have most intensely digitised their products and services prove that their growth is due to their adoption of data compression standards. DCT will offer other industries and communities the means to achieve the same goal with the best standards, compatible with other formats to avoid interoperability costs in an age of convergence, with reduced implementation costs because suppliers can serve a wide global market, and with the necessary conformance testing and maintenance support.
  6. Data compression standards with cross-domain expertise. While the nature of “data” differs depending on the source of data, the MPEG experience has shown that compression expertise transfers well across domains. A good example is MPEG’s Genome Compression standard (ISO/IEC 23092), where MPEG compression experts work with domain experts, combining their respective expertise to produce a standard that is expected to be widely used by the genomic industry. This is the model that will ensure sustainability of a body of data compression experts while meeting the requirements of different industries.
  7. Proven track record, not a leap in the dark. MPEG has 1400 accredited experts, has produced 175 digital media-related standards used daily by billions of people and collaborates with other communities (currently genomics, point clouds and artificial intelligence) to develop non-audiovisual compression standards. Thirty years of successful projects prove that the MPEG-inspired method proposed for DCT works. DCT will have adequate governance and structure to handle relationships with many disparate client industries with specific needs and to develop data compression standards for each of them. With an expanding industry support, a track record, a solid organisation and governance, DCT will have the means to accomplish the mission of serving a broad range of industries and communities with its data compression standards.


According to the ISO directives, these are the steps required to establish a new Technical Committee:

  1. An ISO member body submits a proposal (done by Italy)
  2. The ISO Central Secretariat releases a ballot (end of August)
  3. All ISO member bodies vote on the proposal (during 12 weeks)
  4. If ISO member bodies accept the proposal the matter is brought to the Technical Management Board (TMB)
  5. The TMB votes on the proposal (26 January 2019)

An (optimistic?) estimate for the process to end is spring 2019.



Erasmus and migration

The apex of the Renaissance was around the turn of the 15th and 16th centuries. Learned men communicated freely, with the feeling of belonging to a whole that was shared by their minds and was by definition borderless.

No other man better symbolises the community of minds that hovered over the geographical expression called Europe than Erasmus of Rotterdam.

Then came Martin Luther and decades of religious wars. Other wars sought to establish ever stronger national identities. The common language itself – Latin, still learnt, praised and practiced until recently – was gradually replaced by national languages.

A century and a half later, the other side of the Atlantic saw a grand example of nation building: the United States of America. The borders of the new entity were fuzzy at best but, in case it was not clear to the ex-colonists, the occupation of Washington during the American-British War of 1812 reminded them that they had better have a Commander-in-Chief to deal with foreign powers. I am not sure I like the idea of a single person being able to decide what to do with those who set foot in the USA “illegally”, but there is no doubt that all the facets of that power have played a major role in making the USA the power that it is today.

Another century and a half later the extreme eastern end of Europe saw another grand example of nation building: the Union of the Soviet Socialist Republics. Over the centuries the czars of Russia had tried to bring the higher classes of their empire closer to the more and more fractured community of minds that Europe had become. The czarist empire knew very well what borders were and indeed over the centuries Russia had become a huge multi-ethnic and multi-continental empire. Given the conditions of the moment, the czars’ successors took a minimalist approach to their country’s borders only to revert to expansionism when favourable (so to speak) conditions returned.

Fifty years later, Europe saw another – so far – grand example of nation building: the European Union. A handful of visionaries who had learnt from 15 centuries of wars, and particularly from the world wars, put in place a process that, starting from economic integration, aspired to achieve higher goals.

Clearly Europe has been built taking the Europe of Erasmus as a model. For decades Europe was a notion where citizens belonged to countries very strongly rooted in their territories but shared an Erasmus-like common ideal that would eventually cover the entire geographical Europe.

This noble plan worked for a while. For decades, students in Europe felt and behaved like Erasmus five centuries earlier, thanks to a programme that, indeed, bears his name. Given time, these young people would grow up and become European citizens, all feeling like members of a community, like the learned men of five centuries ago.

Europe could have become the first example of a nation that, unlike all grand nations that had a border, only has intellectual borders. It is not going to happen because this noble plan is being crushed by a handful of migrants – in a population of half a billion people.

It would have been great to determine that you are European if you belong to the European community of minds, but now we must be able to determine that by some physical means, i.e. that “inside” you are European and “outside” you are a foreigner. Alas, we need an old-fashioned physical “border” to save the ideal of a borderless continent-wide community.

Europeans should be able, as they do, to move freely inside the physical space called Europe, where some foreigners will always find a way to get in. If foreigners are admitted into that physical space, we should strive to make them part of the continent-wide community.

That is a long-term endeavour that starts from the moment foreigners enter the physical space called the European Union. They should be taken charge of by the European Union, not by national states.

Caveat venditor

When I was a kid and was free from school, I used to help my mother in a market place close to our town, where she ran a stall.

Our primary task was to sell wares (of course). The task second in importance was to make sure that the wares on display did not “inadvertently” end up in the pockets of some onlookers. We, the sellers, applied the caveat venditor (let the seller beware) principle and bewared.

I happened to witness that this attitude was not unique to my family or to those times. Throughout the years I have visited market places in different parts of the world, and I always saw sellers, no matter the local culture, behave in a similarly cautious way.

Now I have a question. Article 13 of the current draft (2018/06/20) of the new European Copyright Directive aimed at “adapting EU copyright rules to the digital environment” requires an “upload filter” whose function is to check that everything uploaded online in the EU does not infringe somebody’s copyright.

What does this mean? If you run a website where your customers upload content, you have to check that your customers’ content does not infringe somebody else’s copyright.

Why on earth should one do this? If my mother and I watched over our wares, and millions of people in all latitudes and longitudes watch over theirs, why should copyright holders be exempted from watching over their (digital) wares?

My mother and I cavimus, millions of people cavent, copyright holders caveant.

There are plenty of inexpensive technologies that allow copyright holders to watch over their content without putting gratuitous burdens on the shoulders of people who are just doing their own work.

30 years of MPEG, and counting?


Thirty years ago this day, in Ottawa, ON, some 29 experts from 6 countries attended the 1st meeting of the Moving Picture Experts Group, which would become universally known as MPEG. Twenty-five days ago, in San Diego, CA, twenty times as many MPEG experts attended the 122nd MPEG meeting.

These 30 years have been an incredible ride.

MPEG’s mission is to produce digital media standards, and MPEG has carried it out without exception. Here are some facts:

  • MPEG has been engaged in 21 work items (ISO language for “standardisation areas”);
  • In one case the work item produced just one standard, while at the other extreme MPEG-4 counts 34 standards;
  • MPEG has produced a total of 174 standards, or an average of ~6 standards/year, and is working on a few tens more;
  • Some MPEG standards contain a few tens of pages, some others several hundreds and, in a few cases, over 1000 pages;
  • MPEG has produced several hundred standards amendments (ISO language for “extensions”);
  • Some standards have been published only once, some others a few times and the Advanced Video Coding standard (AVC) 8 times (and a 9th edition is in preparation).

These numbers may look impressive, but they have to be assessed in context. The Joint ISO/IEC Technical Committee 1 (JTC 1), to which MPEG belongs, counts more than 100 working groups. MPEG, with just 1/10 of all JTC 1 experts, produces 10 times more standards than the average JTC 1 working group.

Clearly MPEG has done a lot in the past 30 years, but what about the current level of activity? In the last 30 months (i.e. in the last 10 meetings), MPEG has been working on more than 200 “tracks” (by track I mean an activity that develops working drafts, standards or amendments).

One reason for the interest aroused by MPEG standards is MPEG’s practice of communicating its plans to, collecting requirements from and sharing results with some 50 different bodies working on related areas. It also offers – and receives – collaboration from other ISO and ITU-T groups on specific standards.

Publishing standards – like writing books – is one measure of productivity. Not unlike a book, however, a standard does not help anybody if it stays on the shelves of the ISO Central Secretariat. Therefore, to be sure that MPEG has meaningfully accomplished its mission, we must verify that its standards are used in products, services and applications.

Are all 174 MPEG standards widely used? No. Just as some of a company’s products sell like hot cakes while others stay in the company’s stores, some MPEG standards are widely used and others only to some extent.

“Widely”, however, is an analogue measure. A better, digital, measure is “billion” that applies to a number of MPEG standards:

  • MPEG-1 Video was the first standard to cross the level of 1 billion users (Video CD players);
  • MPEG-1 Audio layer 2 is present even today in most TV set top boxes;
  • MPEG-1 Audio layer 3 (aka MP3) has been in use for the last 20 years, in portable audio players and now in all handsets and PCs;
  • MPEG-2 is used in all television set top boxes, DVDs and BluRay;
  • MPEG-4 AAC and AVC have been standard in TV set top boxes for more than 10 years, and are also in mobile handsets, BluRays and PCs;
  • The MPEG file format is used every time a video is stored on or transmitted to a mobile handset, so even “billion” may not be the right measure…

Some other MPEG standards are used more “moderately” and for these the unit of measure is just “hundred million”. This is the case for e.g. MPEG-H for new generation broadcasting and DASH for internet streaming.

Such an intense use of MPEG standards explains the many amendments and editions, and the “longevity” of some MPEG standards: extensions are still being made to MPEG-2 Systems (after 24 years), the MPEG file format (after 19 years), AVC (after 15 years) and so on.

Are you surprised to know that MPEG has received 5 (five) Emmy Awards?

Another thirty years await MPEG, if mindless industry elements do not get in the way.

The MPEG machine is ready to start (again)

MPEG has developed standards that are used daily by billions of people. A non-exhaustive list includes MP3, Advanced Audio Coding (AAC), MPEG-2 Systems and Video, Advanced Video Coding (AVC), MP4 File Format and DASH. Other MPEG standards were widely used in the past but their use is gradually fading out, such as MPEG-1 Video and MPEG-1 Audio Layer 2.

This is because MPEG is always ready to exploit the latest technology innovations to create new standards that offer new advantages by outperforming previous generations.

At the San Diego, CA meeting JVET, a joint MPEG (ISO) and VCEG (ITU) working group tasked with the development of a new video compression standard, received 46 proposals from 32 organisations in response to the Call for Proposals issued in October 2017.

The target of the new standard is a compression performance yielding a bitrate reduction of at least 50% compared to HEVC, including for High Dynamic Range (HDR) content, in addition to providing native support for such emerging applications as 360° omnidirectional video.

I am really proud to say that the tests showed that several proposals in many instances already exceeded a 40% bitrate reduction compared to HEVC. Considering the technology power of the MPEG machine, and that there are 30 months to the expected time of approval of the new standard (October 2020), there is no doubt that Versatile Video Coding (VVC), the name of the new video coding standard, will reach and probably exceed the target.
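Percent bitrate reductions compound multiplicatively, so it can help to restate them as compression factors relative to HEVC. This small sketch is only an illustration of the arithmetic, not part of the Call for Proposals:

```python
def compression_factor(bitrate_reduction: float) -> float:
    """Factor by which a codec compresses further than the reference,
    at equal quality: a 50% bitrate reduction means the same content
    in half the bits, i.e. a 2x factor."""
    return 1.0 / (1.0 - bitrate_reduction)

# Best responses to the Call for Proposals: >40% reduction vs HEVC
print(compression_factor(0.40))  # ~1.67x beyond HEVC
# VVC target: at least 50% reduction vs HEVC
print(compression_factor(0.50))  # 2x beyond HEVC
```

In these terms, the proposals are already about two thirds of the way from HEVC to the 2x target.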

MPEG is always working to provide new and better benefits to more people on Earth. In the case of VVC, those still disadvantaged in delivery infrastructure will be able to access services from which they have been excluded so far. Others will be able to enjoy more immersive experiences.

Of course this will only happen if mindless industry elements do not blow the opportunity again.

IP counting or revenue counting?

In a now-distant past companies used to be run by engineers. If a company had the right product – backed by good research and developed, indeed, by engineers – people would buy it. Then having a good product was no longer sufficient and many companies decided that the company had to be run by marketers. Then having good marketing was no longer sufficient and so many companies decided that accountants should run them. Eventually many companies came to be run by lawyers because compliance became the priority. I do not have examples yet (we already have some from politics), but I expect that soon companies will be run by actors, and people will buy the products of a company whose brand has been “sold” by a good actor.

I do not think this is the right approach. Of course we do not want as CEOs engineers who, like hammers, see everything as a nail, or marketers who could not care less about what is inside as long as they “feel” the packaging, or accountants whose sole purpose in life is the next quarterly report, or lawyers who see everything in terms of compliance, or actors who impersonate the company as if it were Othello or Desdemona.

I think CEOs should be the synthesis of all of these. CEOs should be able to integrate the functions inside their companies, overcoming the downside of sectorial MBOs.

Unfortunately reality seldom matches my beliefs.

Strictly speaking this is “someone else’s problem” (my company is small and I run all the functions I mentioned), but this situation has a strong impact on my alias, the MPEG convenor. The impact is not on the quality of the standards MPEG produces but on their viability when the standards leave the committee.

In MPEG the driving force is researchers: engineers, computer scientists, university professors, entrepreneurs and more. They love talking with their peers as if they were at a conference. Actually, I know they feel even better here because, unlike at conferences, they can have memorable battles with their peers and hopefully see their ideas accepted in a standard.

Researchers typically work hand in hand with their companies’ IP attorneys. Both are typically rewarded on the basis of the “number of patents in standards”. Although not the general rule, the IP attorney’s position is peculiar in that very often they are close to the CEO, who is certainly attracted by the prospect of counting the dollars flowing in through future royalties.

But the player in the company who is really concerned with standards is the product department. They need standards, but they do not actually care very much whether those standards contain IP contributed by the company’s researchers.

Unfortunately many CEOs see the product department’s role very much as the captain of a 19th-century steamship saw the stokers: they take it for granted that things run. Therefore the value of products as such, and their dependence on coal – I mean standards – is not properly represented in the eyes of the CEOs. Product departments are certainly happy to see that their company has contributed a lot of IP to the standards they need, but they would be much happier if those standards were also usable.

CEOs should open their ears not only to their IP and research department heads but also to their product department heads, because it is the products leaving the latter that generate the revenues. Awarding “bigger bonuses” for “more patents” is good, but it can become evil if the award is not connected to the actual use of the standards containing the IP.

This is not theory; it is a sad reality for one of the most important MPEG standards: High Efficiency Video Coding (HEVC). 63 months (5 years and 3 months!) after its approval, the licensing of HEVC (which is outside MPEG’s purview) can be described as follows: ~1/3 of the ~45 patent holders have not published a licence for their IP, and ~2/3 have joined one of the 3 existing patent pools, only two of which have published the licence for the IP they administer.

MPEG is embarking on a new video compression standard, even though the licensing of the previous HEVC standard is in the state I have described. CEOs had better not behave like 19th-century captains. The stokers are doing their work, but there are others whose actions have to be reined in.

Business model based ISO/IEC standards

Some readers of this blog may not remember – or may not be aware of – what the world of media standards was before MPEG came to the fore. Thirty years ago competences were scattered across ISO, IEC and ITU in ways that may appear illogical today but responded to the logic “I and my industry peers gather in one place and develop standards for our own needs”. For instance, there was a committee in charge of audio recording, another for cinematography and yet another for telephony speech, not to mention photography, set top boxes and more. You can get a complete picture here.

The logic driving the licensing of the necessary technology was a consequence of this mindset. A company that had developed and marketed a successful product would bring the specification to the appropriate committee, get a stamp on it and license the technology to all companies wishing to practice the standard. As everybody “spoke the same language” the licence would naturally be configured in the way all practicing entities would expect it to be.

Then came MPEG and all these committees either disappeared or got reconfigured. The scenario of many committees developing “many vertical media standards” was replaced by a single committee – MPEG – developing “single horizontal media standards”.

Of course, after a quarter of a century of technology and (partly) industry convergence, no one would think of creating standards for such old-style “verticals”.

The fact that the MPEG-2 standard served the specific digital television industry, and that most patent holders would actually practise it, explains how it was possible to create a licence for the MPEG-2 Video (and Systems) technology and offer it to those digital television users.

Problems started to appear some 20 years ago with MPEG-4 Visual (ISO/IEC 14496-2). The licence developed for that standard charged those streaming MPEG-4 Visual content for the duration of the content streamed. The “IT industry” refused those licensing terms because they did not suit their business model. In response to this void a number of companies offered – with mixed fortunes – video streaming and other IT services.

The MPEG-4 AVC standard (ISO/IEC 14496-10) fared definitely better because the AVC licence corrected the terms considered most outrageous by the IT industry while still satisfying the needs of the broadcasting and consumer electronics industries.

In hindsight we should have expected that the licensing of the HEVC technology would run into the difficulties we know. More than 15 years after MPEG-4 Visual and AVC, digital video is a technology used by many disparate industries. Still, we are not back to the vertical standards of 30 years ago, because the roles of the industries are now mapped to the layers of an ISO/OSI (or equivalent) model, not to separate, non-communicating, mutually agnostic silos.

The business models of the different industries generate different needs, but in MPEG they are forced to develop standards according to a single business model imposed by the ISO/IEC/ITU patent policy. Here is a summary of it (for ISO/IEC):

  1. Companies who believe they have Intellectual Property (IP) in an ISO/IEC standard should file a declaration with the ISO and IEC secretariats stating their intention to license their IP for free (Option 1), on FRAND terms (Option 2) or not at all (Option 3);
  2. IP holders are not required to identify the patents and the specific claims in the patents;
  3. ISO/IEC do not take a position on those declarations but simply record them;
  4. Committees developing standards are not allowed to assess patent declarations; they have to comply with them;
  5. Licensing of ISO/IEC standards shall be developed outside ISO/IEC.

I believe that item 5 of the patent policy should remain untouched, and so probably should items 3 and 4. But item 2 prevents MPEG from taking corrective measures when ISO/IEC receive an Option 2 patent declaration against a standard that is intended to be “royalty free”, or an Option 3 declaration against a standard that is intended to be FRAND, and the declaration fails to identify the allegedly infringed technology.

The bottom line is that ISO, instead of siding with one of its committees developing a standard to satisfy a legitimate business model, sides with reticent patent holders.

ISO/IEC should allow the development of international standards that satisfy a business model that a committee freely adopts. This requires that the committee should have the freedom to remove patents from a standard that a third party does not wish to be used in support of the committee’s business model.

It goes without saying that a committee that adopts a business model is not developing a licence. Indeed, we would simply be recreating the situation of 30 years ago, when each committee operated according to the business model shared by the industries populating it.

Can MPEG overcome its Video “crisis”?

In my earlier post I described the “crisis” and how it was created, and hinted at possible ways to solve it and avoid future crises in other areas. As I remain skeptical that the crisis will be overcome, I want to remove any doubt about who should be blamed for the failure: certainly not ISO and not MPEG.

About ISO (and IEC)

The International Organisation for Standardisation (ISO) is an international non-governmental organisation made up of the national standards bodies of 162 countries (as of today). This is a summary of the ISO organisational structure:

  1. The General Assembly is the ultimate authority of ISO and meets once a year.
  2. The Council is the core ISO governance body made up of 20 member bodies and other officers. It reports to the General Assembly and meets three times a year.
  3. The Technical Management Board (TMB) manages the ISO technical work and is responsible for the Technical Committees (TC). The TMB reports to Council.
  4. Technical Committees are in charge of developing standards. Nominally there are 314 TCs, but some are inactive or disbanded. Of particular relevance is the Joint ISO/IEC Technical Committee 1 (JTC 1) on Information Technologies established in 1987 by combining relevant activities in ISO TC 97 Information processing systems and in the International Electrotechnical Commission (IEC). JTC 1 is the largest TC in ISO and by itself manages ~1/3 of all ISO standardisation activities. JTC 1 is organised in Subcommittees, the latest of which is SC 42 Artificial Intelligence. Some SCs have been disbanded.

The size and importance of ISO require rules that are contained in the ISO/IEC Directives that all entities in ISO are bound to follow. These are periodically reviewed.

About MPEG

The Moving Picture Experts Group was created in 1988 as an Experts Group of Working Group 8 of JTC 1/SC 2, then called Character Sets and Information Coding. MPEG operated in parallel to the joint ISO/ITU-T Photographic Experts Group (JPEG). In 1991 JTC 1/SC 29 Coding of audio, picture, multimedia and hypermedia information was created and MPEG became its WG 11 “Coding of Moving Pictures and Audio”.

MPEG standards are highly sophisticated by nature, and the typical attendance at MPEG’s quarterly meetings is 400-500 experts. Therefore MPEG is organised in Subgroups: Requirements (what MPEG standards should do), Systems (media system level standards), Video (video coding standards), JCT-VC (HEVC standard), JVET (future video coding standard), 3DG (3D graphics coding standards), Tests (testing the quality of MPEG standards) and Communication (promotion of MPEG standards). JCT-VC and JVET are joint with ITU-T SG 16 Q6.

This is a unique organisation in ISO, but it exists because standards for media systems require strong interaction among their components. Because of this MPEG holds several joint meetings where the relevant subgroups discuss and agree on matters of common interest. The ability to develop complex digital media standards is one of the reasons for the success of MPEG standards in the market. Breaking up MPEG would deal a fatal blow to the validity of MPEG standards.

There is a fundamental operational difference between ISO Working Groups on one side, and Subcommittees and Technical Committees on the other: SC and TC decisions are based on national votes (where, e.g., Luxembourg and the United States have one vote each), while WG technical decisions are made by consensus. Far from being a constraint, consensus-based working ensures that MPEG standards are technically sound.

About MPEG standards

As stated in my earlier post, MPEG has been developing standards with the best performance as a goal, irrespective of the IPR involved. This approach produced the best technical – and usable – video coding standards up to AVC. No longer so with HEVC. It is not that there are many more patent holders in HEVC than in AVC, but the patent pool creation mechanism seems no longer able to deliver results.

In my earlier post I have provided some ideas on how the MPEG standard development process can be adapted to deliver standards in a form that facilitates the development of licences. None of these ideas can even remotely disadvantage proponents of good video coding technology. The point is that decisions in the MPEG working group are to be made by consensus. So it is entirely in the hands of MPEG members (and also ITU-T, in this case) to agree on an effective way of streamlining the MPEG video coding standard development process.

Unfortunately this is only the tip of the iceberg. The fact that almost the same companies have been unable to agree on an HEVC licence when 12 years before they had been able to agree on an AVC licence shows that the patent holder environment is degrading. The result is that the nice “ISO consensus” practice may no longer ensure that Option 2 standards can be developed and the MPEG experience proves that Option 1 standards cannot be developed.

Of course the work of developing technical standards must be done using the “ISO consensus” practice, but MPEG must be able to access the higher layers of the ISO hierarchy without shields. Even if there were no other good reasons, this should be granted because MPEG is the largest working group in ISO – larger than most if not all JTC 1 Subcommittees and than most ISO Technical Committees – producing more standards than most ISO entities.


The reader should not think that I am raising the “MPEG TC” issue because of an ill-conceived desire for “promotion”. Over the last 30 years I have been approached by several National Body representatives who asked me to raise MPEG’s status in ISO. I always declined because, at the time, I thought that the “MPEG WG” status was just fine.

Not acting on those proposals was my mistake.

A crisis, the causes and a solution

Why this post?

Because there are rumours spreading about a presumed “MPEG-Video collapse”, and Brownian-motion-like initiatives are trying to remedy it – in some cases driven by the very people who contributed to creating the “crisis”.

Who is the author of this post?

Leonardo Chiariglione, the founder and chairman of MPEG, but I am writing in a personal capacity.

Why is MPEG important?

In its 30 years of operation MPEG has created digital media standards that have enabled the birth and continue promoting the growth of digital media products, services and applications. Here are a few, out of close to 180 standards: MP3 for digital music (1992), MPEG-2 for digital television (1994), MPEG-4 Visual for video on internet (1998), MP4 file format for mobile handsets (2001), AVC for reduced bitrate video (2003), DASH for internet streaming (2013), MMT for IP broadcasting (2013) and more. In other words, MPEG standards have had and keep on having an impact on the lives of billions of people.

How could MPEG achieve this?

Thanks to its “business model”, which can be simply described as: produce standards having the best performance as a goal, irrespective of the IPR involved. Because MPEG standards are the best in the market and have international standard status, manufacturers/service providers get a global market of digital media products, services and applications, and end users can seamlessly communicate with billions of people and access millions of services. Patent holders who allow the use of their patents get hefty royalties with which they can develop new technologies for the next generation of MPEG standards. A virtuous cycle everybody benefits from.

Why is there a “crisis”?

Good stories have an end, so the MPEG business model could not last forever. Over the years proprietary and “royalty free” products have emerged but have not been able to dent the success of MPEG standards. More importantly, IP holders – often companies not interested in practising MPEG standards, so-called Non-Practicing Entities (NPEs) – have become more and more aggressive in extracting value from their IP.

I saw the danger coming and designed a strategy against it. This would create two tracks in MPEG: one track producing royalty-free standards (Option 1, in ISO language) and the other the traditional Fair, Reasonable and Non-Discriminatory (FRAND) standards (Option 2, in ISO language). Option 1 standards, obviously less performing than Option 2 ones, would counter the “proprietary” threat and provide an incentive to produce even more effective Option 2 standards, while keeping excessive claims by patent holders at bay thanks to the “competition” of the incoming Option 1 standards.

The Internet Video Coding (IVC) standard was a successful implementation of the idea – kind of. Indeed, a few years after the approval of AVC, IVC was found to perform better than AVC. Unfortunately 3 companies made blanket Option 2 statements (of the kind “I may have patents and I am willing to license them on FRAND terms”), a possibility that ISO allows. MPEG had no means to remove the claimed infringing technologies, if any, and IVC is practically dead.

In 2013 MPEG approved the HEVC standard, which provides the same quality as AVC at half the bitrate. The licensing situation is depicted in the picture below (courtesy of Jonathan Samuelsson of Divideon): of the ~45 patent holders, ~1/3 have published their licences and ~2/3 have joined one of the 3 patent pools, one of which has not published its licence.

I saw the threat coming and one year ago I tried to bring the matter to the attention of the higher layers in ISO. My attempts were thwarted by a handful of NPEs.

The Alliance for Open Media (AOM) has occupied the void created by MPEG’s outdated (but still largely used) video compression standard (AVC), the absence of a competitive Option 1 standard (IVC) and an unusable modern standard (HEVC). AOM’s AV1 codec, due to be released soon, is claimed to perform better than HEVC and is said to be offered royalty free.

At long last everybody realises that the old MPEG business model is now broken, that all the investments (collectively hundreds of millions of USD) made by the industry for the new video codec will go up in smoke, and that AOM’s royalty-free model will spread to other business segments as well.

Can something be done?

The situation can be described as tragic. This does not mean that there is nothing left to do. I personally doubt that something will be done, though, seeing how blindfolded the industry is. As I like to say, God blinds those He wants to lose.

The first action is to introduce what I call “fractional options”. As I said, ISO envisages two forms of licensing: Option 1, i.e. royalty free, and Option 2, i.e. FRAND, which is taken to mean “with undetermined licence”. We could introduce fractional options in the sense that proposers could request that their technologies be assigned to specifically identified profiles with an “industry licence” (defined outside MPEG) that does not contain monetary values. For instance, one such licence could be “no charge” (i.e. Option 1), another could be targeted at the OTT market, etc.

The second action, not meant to be an alternative to the first, is to streamline the MPEG standard development process. Within this, a first goal is to develop coding tools with “clear ownership”, unlike today’s tools, which are often the result of contributions with possibly very different weights. A second goal is not to define profiles in MPEG. A third goal could be to embed in the standard the capability to switch coding tools on and off.

The work of patent pools would be greatly simplified because they could define profiles with technologies that are “available” because they would know who owns which tools. Users could switch on tools once they become usable, e.g. because the relevant owner has joined a patent pool.

These are just examples of how the MPEG standard development process can be adapted to better match the needs of entities developing licences and without becoming part – God (ISO) forbid – of a licence definition process.

Is this enough?

Even if industry decides to get its act together and patch up MPEG’s business model, it is easy to anticipate that the next threat is just around the corner. But MPEG cannot have a future if it passes from crisis to crisis, each of which has an inevitable “cost”.

MPEG’s problem – so far a blessing – is that it is a working group, the lowest organisational structure in ISO. MPEG’s governance is weak: if there is a need, as happened recently, to bring problems to the attention of the decision-making layers in ISO, it is necessary to cross several layers of other committees with completely different priorities and concerns, each time asking for their approval and each time getting a diluted message. Only by becoming a Technical Committee can MPEG, the forerunner of problems that other committees will experience in the years to come, stay competitive in the market.

End of the world as we know it?

The reader should not think that I am personally concerned by all this, other than intellectually. I have been running MPEG for the last 30 years serving the industry – and billions of users – and I have been blessed with professional satisfactions that few have had, enjoying the collaboration of thousands of experts, each driven by their own motivations but united in their desire to make the best standards. If MPEG ends now it will be a pity, but if this is the decision of the stakeholders – the industry in MPEG – so be it.

My concerns are at a different level and have to do with the way industry at large will be able to access innovation. AOM will certainly give much-needed stability to the video codec market, but this will come at the cost of reduced, if not entirely halted, technical progress. There will simply be no incentive for companies to develop new video compression technologies – at very significant cost, given the sophistication of the field – knowing that their assets will be thankfully accepted (and nothing more) and used by AOM in its video codecs.

Companies will slash their video compression technology investments, thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.

So don’t expect that in the future you will see the progress in video compression technology that we have seen in the past 30 years.

                      MPEG-1 Video   MPEG-2 Video   MPEG-4 Visual   AVC        HEVC
Bitrate vs previous   –              25% less       25% less        30% less   60% less
Year of approval      1992           1994           1998            2003       2013
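The successive "bitrate vs previous" reductions quoted above (25%, 25%, 30%, 60%) compound multiplicatively. A quick sketch of the arithmetic – pairing each figure with a codec generation by its year of approval is my reading of the figures, and the reference bitrate is normalised to MPEG-1 Video:

```python
# Compound the generation-on-generation bitrate reductions quoted above.
# The codec-to-figure pairing (by year of approval) is an assumption.
reductions = [
    ("MPEG-2 Video", 0.25),   # 1994: 25% less than MPEG-1 Video
    ("MPEG-4 Visual", 0.25),  # 1998: 25% less than MPEG-2 Video
    ("AVC", 0.30),            # 2003: 30% less than MPEG-4 Visual
    ("HEVC", 0.60),           # 2013: 60% less than AVC
]

bitrate = 1.0  # MPEG-1 Video taken as the 100% reference
for codec, cut in reductions:
    bitrate *= 1.0 - cut
    print(f"{codec}: {bitrate:.1%} of the MPEG-1 bitrate")
```

With these figures, HEVC needs less than 16% of the MPEG-1 bitrate for comparable quality – the "virtuous cycle" of the past 30 years, quantified.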