Life inside MPEG


In my earlier post 30 years of MPEG, and counting? I brought evidence of what MPEG has done in the last 30 years to create the broadcast, recording, web and mobile digital media world that we know. This document tries to make the picture more complete by looking at the current main activities and at the deliveries expected in the next few months and years.

MPEG at a glance

The figure below shows the main standards developed or under development by MPEG in the 2017-2023 period organised in 3 main sections:

  • Media Coding (e.g. MP3 and AVC)
  • Systems and Tools (e.g. MPEG-2 TS and File Format)
  • Beyond Media (currently Genome Compression and Neural Network Compression).

Video coding

In the Video coding area MPEG is currently handling 4 standards (MPEG-4, -H, -I and -CICP) and several Explorations.

MPEG-I ISO/IEC 23090 Coded representation of immersive media is the container of standards needed for the development of immersive media devices, applications and services.

MPEG is currently working on Part 3 Versatile Video Coding, the new video compression standard after HEVC. VVC is developed jointly with VCEG and is expected to reach FDIS stage in October 2020.


MPEG is also pursuing several explorations:

  1. An exploration on a new video coding standard that combines coding efficiency (similar to that of HEVC), complexity (suitable for real-time encoding and decoding) and usability (timely availability of licensing terms). In October 2018 a Call for Proposals was issued. Submissions are due in January 2019 and FDIS stage for the new standard is expected to be reached in January 2020.
  2. An exploration on a future standard that defines a data stream structure composed of two streams: a base stream decodable by a hardware decoder, and an enhancement stream suitable for processing in software with sustainable power consumption. This activity is supported by more than 30 major international players in the video distribution business. A Call for Proposals was issued in October 2018. Submissions are due in March 2019 and FDIS stage is expected to be reached in April 2020.
  3. Several explorations on Immersive Video Coding
    • 3DoF+ Visual: a Call for Proposals will be issued in January 2019. Submissions are due in March 2019 and the FDIS is planned for July 2020. The result of this activity is not meant to be a video coding standard but a set of metadata that can be used to provide a more realistic user experience in OMAF v2. Indeed, 3DoF+ Visual will be a part of MPEG-I part 7 Immersive Media Metadata. Note that 3 Degrees of Freedom (3DoF) means that a user can only make yaw, pitch and roll movements, while 3DoF+ means that the user can also displace the head to a limited extent.
    • Several longer-term explorations on compression of 6DoF visual (Windowed-6DoF and Omnidirectional 6DoF) and Compression of Dense Representation of Light Fields. No firm timeline for standards in these areas has been set.
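The two-stream structure described in the second exploration can be sketched in a few lines of Python. This is an illustrative toy under my own assumptions, not the standard's actual tools: a low-resolution base layer that an existing hardware decoder could handle, plus a residual enhancement layer computed in software.

```python
import numpy as np

def encode_layered(frame, factor=2):
    """Split a frame into a low-resolution base layer and a residual
    enhancement layer (toy illustration, not the standard's coding tools)."""
    # Base layer: downsampled picture, decodable by an existing hardware decoder
    base = frame[::factor, ::factor]
    # Prediction: upsample the base back to full resolution
    prediction = np.repeat(np.repeat(base, factor, axis=0), factor, axis=1)
    # Enhancement layer: residual that corrects the upsampled prediction
    enhancement = frame.astype(np.int16) - prediction.astype(np.int16)
    return base, enhancement

def decode_layered(base, enhancement, factor=2):
    """Reconstruct the full-resolution frame from the two streams."""
    prediction = np.repeat(np.repeat(base, factor, axis=0), factor, axis=1)
    return (prediction.astype(np.int16) + enhancement).astype(np.uint8)

frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
base, enh = encode_layered(frame)
restored = decode_layered(base, enh)
```

In a real codec the enhancement residual would itself be compressed; the point of the layered design is that the base stream alone is already a valid, lower-resolution service.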

Audio coding

In the Audio coding area MPEG is handling 3 standards (MPEG-4, -D, and -I). Of particular relevance is the MPEG-I part 3 Immersive Audio activity. This is built upon MPEG-H 3D Audio – which already supports a 3DoF user experience – and will provide a 6DoF immersive audio VR experience. A Call for Proposals will be issued in March 2019. Submissions are expected in January 2020 and FDIS stage is expected to be reached in April 2021. As with 3DoF+ Visual, this standard will not be about compression, but about metadata.

3D Graphics coding

In the 3D Graphics coding area MPEG is handling two parts of MPEG-I.

  • Video-based Point Cloud Compression (V-PCC) for which FDIS stage is planned to be reached in October 2019. It must be noted that in July 2018 an activity was initiated to develop standard technology for the integration of 360° video and V-PCC objects.
  • Geometry-based Point Cloud Compression (G-PCC) for which FDIS stage is planned to be reached in January 2020.

The two PCC standards employ different technologies and target different application areas: entertainment and automotive/unmanned aerial vehicles, respectively.

Font coding

In the Font coding area MPEG is working on MPEG-4 part 22.

MPEG-4 ISO/IEC 14496 Coding of audio-visual objects is a 34-part standard that made possible the large-scale use of media on the fixed and mobile web.

Amendment 1 to Open Font Format will support complex layouts and new layout features. FDAM stage will be reached in April 2020.

Genome coding

MPEG-G ISO/IEC 23092 Genomic Information Representation is the standard developed in collaboration with TC 276 Biotechnology to compress files containing DNA reads from high speed sequencing machines.

In the Genome coding area MPEG plans to achieve FDIS stage for Part 2 Genomic Information Representation in January 2019. MPEG has started investigating additional genome coding areas that would benefit from standardisation.

Neural network coding

Neural network compression is an exploration motivated by the increasing use of neural networks in many applications that require the deployment of a particular trained network instance potentially to a large number of devices, which may have limited processing power and memory.

In the Neural network coding area MPEG has issued a Call for Evidence in July 2018, assessed the responses received in October 2018 and collected evidence that justifies the Call for Proposals issued at the same October 2018 meeting. The goal of the Call is to make MPEG aware of technologies to reduce the size of trained neural networks. Responses are due in March 2019. As it is likely that in the future more Calls will be issued for other functionality (e.g., incremental representation), an expected time for FDIS has not been identified yet.
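As a concrete illustration of the kind of size-reduction technology such a Call is after, here is a toy uniform weight quantizer in Python. Quantization is only one of many candidate techniques and the Call does not prescribe any particular method; the function names and parameters below are my own.

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Uniformly quantize float32 weights to `bits`-bit integers.
    A toy example of one size-reduction technique (quantization)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid zero scale for constant weights
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_weights(q, lo, scale):
    """Recover approximate float32 weights from the quantized form."""
    return q.astype(np.float32) * scale + lo

w = np.random.randn(1000).astype(np.float32)   # stand-in for a trained layer's weights
q, lo, scale = quantize_weights(w)
w_hat = dequantize_weights(q, lo, scale)
print(w.nbytes, "->", q.nbytes)                # 4000 -> 1000 bytes, a 4x reduction
```

The 4x saving comes purely from storing 8 bits instead of 32 per weight; the per-weight error is bounded by half the quantization step, which is why small accuracy loss is often acceptable.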

Media description

Media description is the goal of the MPEG-7 standard which contains technologies for describing media, e.g. for the purpose of searching media.

In the Media description area MPEG has completed Part 15 Compact descriptors for video analysis (CDVA) in October 2018. By exploiting the temporal redundancy of video, CDVA extracts a single compact descriptor that represents a video clip rather than individual frames, which was the goal of Compact Descriptors for Visual search (CDVS).

Work in this area continues to complete reference software and conformance.

System support

In the System support area MPEG is working on MPEG-4, -B and -I. In MPEG-I MPEG is developing

  • Part 6 – Immersive Media Metrics, which specifies the metrics and the measurement framework needed to enhance immersive media quality and experiences.
  • Part 7 – Immersive Media Metadata, which specifies common immersive media metadata focusing on immersive video (including 360° video), images, audio, and timed text. 3DoF+ Visual metadata will be one component of this standard.

Both parts are planned to reach FDIS stage in July 2020.

Intellectual Property Management and Protection

In the IPMP area MPEG is developing an amendment to support multiple keys per sample. FDAM stage is planned to be reached in March 2019. Note that IPMP is not about _defining_ security technologies but about _employing_ them to protect digital media.


Transport

In the Transport area MPEG is working on MPEG-2, -4, -B, -H, -DASH, -I, -G and several Explorations.

MPEG-2 ISO/IEC 13818 Generic coding of moving pictures and associated audio information is the standard that enabled digital television.

Part 2 Systems continues to be an extremely lively area of work. After producing Edition 7, MPEG is working on two amendments to carry two different types of content:

  • JPEG XS (a JPEG standard for low latency applications)
  • CMAF (an MPEG Application Format).


Part 12 ISO Base Media File Format is another extremely lively area of work. Worth mentioning are two amendments:

  • Compact Sample-to-Group, new capabilities for tracks, and other improvements – has reached FDAM stage in October 2018
  • Box relative data addressing – is expected to reach FDAM in March 2019.

The 7th Edition of the MP4 file format is awaiting publication.
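The box structure that these File Format amendments extend can be illustrated with a minimal Python walker over the top-level "size + type" box headers. This is a sketch only: real files also use 64-bit sizes, `uuid` box types and nested containers, none of which are handled here.

```python
import io
import struct

def read_boxes(stream, end):
    """List the (type, size) of top-level boxes in an ISOBMFF/MP4 stream.
    Minimal sketch handling only the 32-bit size + 4-character type header."""
    boxes = []
    while stream.tell() < end:
        header = stream.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)  # big-endian size, 4cc type
        boxes.append((box_type.decode("ascii"), size))
        stream.seek(size - 8, io.SEEK_CUR)  # skip the box payload
    return boxes

# A tiny hand-built example: a 16-byte 'ftyp' box followed by an empty 'mdat' box
data = struct.pack(">I4s4sI", 16, b"ftyp", b"isom", 0) + struct.pack(">I4s", 8, b"mdat")
print(read_boxes(io.BytesIO(data), len(data)))  # [('ftyp', 16), ('mdat', 8)]
```

The simplicity of this "length-prefixed tree" layout is one reason the File Format has been so extensible: new box types, like those in the amendments above, can be skipped safely by older parsers.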

MPEG-B ISO/IEC 23001 MPEG systems technologies is a collection of systems standards that are not specific to any single MPEG standard.

In MPEG-B MPEG is working on two new standards

  • Part 14 Partial File Format will provide a standard mechanism to store partially received HTTP entities, e.g. in broadcast applications, for later cache population. The standard is planned to reach FDIS stage in April 2020.
  • Part 15 Carriage of Web Resources in ISOBMFF will make it possible to enrich audio/video content, as well as audio-only content, with synchronised, animated, interactive web data, including overlays. The standard is planned to reach FDIS stage in January 2019.

MPEG-H ISO/IEC 23008 High efficiency coding and media delivery in heterogeneous environments is a 15-part standard for audio-visual compression and heterogeneous delivery.

Part 10 MPEG Media Transport FEC Codes is being enhanced by an amendment on Window-based FEC codes. FDAM stage is expected to be reached in January 2020.

MPEG-DASH ISO/IEC 23009 Dynamic adaptive streaming over HTTP (DASH) is the standard for media delivery on unpredictable-bitrate delivery channels.

In MPEG-DASH MPEG is working on

  • Part 1 Media presentation description and segment formats is being enhanced by the Device information and other extensions amendment. FDAM is planned to be reached in July 2019.
  • Part 7 Delivery of CMAF content with DASH contains guidelines recommending some of the most popular delivery schemes for CMAF content using DASH. The Technical Report is planned to be published in March 2019.
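The adaptive behaviour that DASH enables can be sketched as follows. Note that the standard specifies the MPD and segment formats, while the adaptation logic is left to the client, so the rate-selection heuristic, the safety margin and the representation names below are purely illustrative.

```python
def pick_representation(representations, throughput_bps, safety=0.8):
    """Pick the highest-bandwidth representation that fits the measured
    throughput, with a safety margin (an illustrative client heuristic)."""
    affordable = [r for r in representations
                  if r["bandwidth"] <= throughput_bps * safety]
    if not affordable:
        # Nothing fits: fall back to the lowest-bandwidth representation
        return min(representations, key=lambda r: r["bandwidth"])["id"]
    return max(affordable, key=lambda r: r["bandwidth"])["id"]

# Hypothetical representations, as a client would read them from an MPD
reps = [
    {"id": "240p", "bandwidth": 700_000},
    {"id": "720p", "bandwidth": 3_000_000},
    {"id": "1080p", "bandwidth": 6_000_000},
]
print(pick_representation(reps, 5_000_000))  # 720p
```

At 5 Mbit/s measured throughput the client stays at 720p because 6 Mbit/s would exceed 80% of the available bandwidth; re-running this decision per segment is what makes the streaming "dynamic".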


In MPEG-I, Part 2 Omnidirectional Media Format (OMAF), released in October 2017, is the first standard format for the delivery of omnidirectional content. With the 2nd Edition (Interactivity support for OMAF), planned to reach FDIS stage in January 2020, MPEG is substantially extending the functionalities of OMAF. Future releases may add 3DoF+ functionalities.


In MPEG-G, MPEG plans to achieve FDIS stage for Part 1 Transport and Storage of Genomic Information in January 2019.

Application Formats

MPEG-A ISO/IEC 23000 Multimedia Application Formats is a suite of standards for combinations of MPEG and other standards (used when no single MPEG standard suits the purpose).

MPEG is working on two new standards

  • Part 21 Visual Identity Management Application Format will provide a framework for managing privacy of users appearing on pictures or videos shared among users. FDIS is expected to be reached in January 2019.
  • Part 22 Multi-Image Application Format (MIAF) will enable precise interoperability points when creating, reading, parsing and decoding images embedded in HEIF (MPEG-H part 12).

Application Programming Interfaces

The Application Programming Interfaces area comprises standards developed to enable effective use of some MPEG standards.

In the API area MPEG is working on MPEG-I, MPEG-G and MPEG-IoMT.


In MPEG-I, MPEG is working on Part 8 Network-based Media Processing (NBMP), a framework that will allow users to describe media processing operations to be performed by the network. The standard is expected to reach FDIS stage in January 2020.


In MPEG-G, MPEG is working on Part 3 Genomic Information Metadata and Application Programming Interfaces (APIs). The standard is expected to reach FDIS stage in March 2019.

MPEG-IoMT ISO/IEC 23093 Internet of Media Things is a suite of standards supporting the notion of Media Thing (MThing), i.e. a thing able to sense and/or act on physical or virtual objects. MThings can be connected to form complex distributed systems, called Internet of Media Things (IoMT), in which they interact with one another and with humans.

In MPEG-IoMT MPEG is working on

  • Part 2 IoMT Discovery and Communication API
  • Part 3 IoMT Media Data Formats and API.

Both are expected to reach FDIS stage in March 2019.

Media Systems

Media Systems includes standards or Technical Reports targeting architectures and frameworks.

In Media Systems MPEG is working on Part 1 IoMT Architecture, expected to reach FDIS stage in March 2019. The architecture used in this standard is compatible with the IoT architecture developed by JTC 1/SC 41.

Reference implementation

MPEG is working on the development of 10 standards for reference software of MPEG-4, -7, -B, -V, -H, -DASH, -G and -IoMT.


Conformance

MPEG is working on the development of 8 standards for conformance of MPEG-4, -7, -B, -V, -H, -DASH, -G and -IoMT.


Even this superficial overview should make evident the complexity of the interactions in the MPEG ecosystem, which has been operating for 30 years (note that the above only represents a part of what happened at the last meeting in Macau).

In 30 years of MPEG, and counting? I wrote “Another thirty years await MPEG, if some mindless industry elements will not get in the way”.

It might well have been a prophecy.

Data Compression Technologies – A FAQ

This post attempts to answer some of the most frequent questions received on the proposed ISO Data Compression Technologies Technical Committee (DCT TC). See here and here. If you have a question drop an email to

Q: What is the difference between MPEG and DCT?

A: Organisation-wise MPEG is a Working Group (WG), reporting to a Subcommittee (SC), reporting to a Technical Committee (TC), reporting to the Technical Management Board (TMB). Another important difference is that a WG makes decisions by consensus, i.e. with a high standard of agreement – not necessarily unanimity – while a TC makes decisions by voting.

Content-wise DCT is MPEG with a much wider mandate than media compression involving many more industries than just media.

Q: What is special about DCT that it needs a Technical Committee?

A: Data compression is already a large industry even if one looks at the media industry alone. By serving more client industries, it will become even larger and more differentiated. Therefore, the data compression industry will require a body with the right attributes of governance and representation. As an example, data compression standards are “abstract”, as they are collections of algorithms, and intellectual-property intensive (today most patent declarations submitted to ISO are from MPEG). The authority to regulate these matters resides in ISO, but the data compression industry will need an authoritative body advocating its needs, because the process will only intensify in the future.

Q: Why do you think that a Technical Committee will work better than a Working Group?

A: With its 175 standards produced and its 1400 members, over 500 of whom attend quarterly meetings, MPEG – a working group – has demonstrated that it can deliver revolutionising standards affecting a global industry. DCT intends to apply the successful MPEG business model to many other non-homogeneous industries. Working by consensus is technically healthy and intellectually rewarding. Consensus will continue to be the practice of DCT, but policy decisions affecting the global data compression industry often require a vote. Managing a growing membership (on average one new member joins the group every day) requires more than an ad hoc organisation like today's. The size of MPEG is 20-30 times the normal size of an ISO WG.

Q: Which are the industries that need data compression standards?

A: According to Forbes, 163 Zettabytes of data, i.e. 163 billion Terabytes, will be produced worldwide in 2025. Compression is required by the growing number of industries that are digitising their processes and need to store, transmit and process huge amounts of data.

The media industry will continue its demand for more compression and for more media. Compression is also needed by the industries that capture data from the real world with RADAR-like technologies (earth-bound and unmanned aerial vehicles, civil engineering, media etc.), by genomics, by healthcare (all sorts of data generated by machines sensing humans), by automotive (in-vehicle and vehicle-to-vehicle communication, environment sensing etc.), by Industry 4.0 (data generated, exchanged and processed by machines), by geographic information and more.

Q: What are the concrete benefits that DCT is expected to bring?

A: MPEG standards have provided the technical means for the unrelenting growth of the media industry over the last 30 years. This has happened because MPEG standards were 1) timely (i.e. the design is started in anticipation of a need), 2) technically excellent (i.e. MPEG standards are technically the best at a given time), 3) lively (i.e. new features improving the standard are continuously added) and 4) innovative (i.e. a new MPEG standard is available when technology progress enables a significant delta in performance). DCT is expected to do the same for the other industries it targets.

Q: Why should DCT solutions be better than solutions that are individually developed by industries?

A: “Better” is a multi-faceted word and should be assessed from different angles: 1) technical: compression is an extremely sophisticated domain and MPEG has proved that relying on the best experts produces the best results; 2) ecosystem: implementors are motivated to make products if the market is large, and sharing technologies lowers the threshold implementors face to enter; on the contrary, a small isolated market may not provide enough motivation to implementors; 3) maintenance: an industry may well succeed in assembling a group of experts to develop a standard, but a living body like a standard needs constant attention, which can no longer be provided once the experts have left the committee; 4) interoperability: industry-specific data compression methods are dangerous when digital convergence drives data exchange across apparently unrelated industry segments, because they create non-communicating islands.


Copyright, troglodytes and digital

We can only guess the feelings of the troglodyte that drew animals on the walls of the Altamira cave, but we have a rough idea of the feelings of the Latin poet Martial towards those who proclaimed his own works as theirs. We know even more precisely the feelings of the Italian 16th century poet Ariosto when he proposed to Duke Alfonso d’Este of Ferrara to share the proceeds that he would obtain from the fines imposed on those who copied the poet’s works.

In now remote analogue times, it was “easy” for the law to prosecute those who made unauthorised copies of a book. The arrival of the photocopier, however, forced the law to put a patch in place – the so-called photocopy tax. The arrival of the audio cassette made it even easier to make copies of music tracks and so another tax was levied, this time on blank cassettes. The arrival of MP3 made copying audio so easy that the world we knew (I mean, those who knew it) ended there.

The arrival of digital technology was truly a missed opportunity. Instead of solving the problem at the root by exploiting some digital technologies, already widely available at the time, patches were added to patches trying to hold together a situation that had been cracking on all sides for decades.

The last patch in line is the new proposed European copyright directive.

I need to make clear where I stand. You may well dress up the topic with pompous words like “freedom of the internet”, but it is clear to me that those who make a profit by exposing thousands of snippets of newspaper articles would do well to get in touch first with newspaper publishers. I cannot speak for Martial, but I am sure Ariosto would have been fully with me on this. So I am not against a new copyright directive.

I am concerned by the logical foundation of the draft European directive and, in particular, by Article 13. This requires that a filter be placed to check that any digital object uploaded online in the European Union does not violate the copyright of a third party. Instead of saying that uploading somebody else's content without authorisation should not be done – something I have just said I could not agree more with – Article 13 prescribes a particular solution, i.e. that a gateway be placed at a particular place in the value chain.

I happen to have different ideas and I will illustrate them with a personal anecdote. When I was a child and I was free from school, I helped my mother in a market close to our town where she ran a stall. Obviously our main task was to sell, but it was no less important to make sure that the goods on display did not “inadvertently” end up in someone’s pockets. We applied a principle that Martial would have called caveat venditor.

Please note that our attitude of prudence is not the custom of my family alone or of the times of my youth. Over the years I have visited markets in different parts of the world, as a buyer, and I noticed that sellers, regardless of local culture, all behave in the same cautious way.

Back to Article 13, what does this mean in practice? That if I manage a website where my clients upload content, I have the obligation to verify that such content does not violate someone else’s copyright.

But why should I do it? If my mother and I kept an eye on our goods, and millions of sellers in markets at all latitudes and longitudes pay attention to theirs, why should copyright holders be exempt from the task of controlling their – digital, but who cares – goods?

My mother and I cavimus (bewared), millions of people cavent (beware), the copyright holders caveant (ought to beware).

Some might reply that copyright holders are not able to do that in a global market. That was certainly true before the internet. In the digital age, however, there are technologies that allow copyright holders to control their content without imposing gratuitous burdens on people who are just doing their jobs.

It is very clear to me that technology alone is hardly an answer and that it must be integrated with the law. It is also clear to me that such an integration may not be easy, but it is not a good reason to continue to handle technology as if we were the troglodyte of the Altamira cave.



It worked twice and will work again

On the 30th anniversary of the first MPEG meeting I wrote a paper that extolled the virtues of MPEG in terms of productivity – in absolute and relative terms – and of actual standard adoption. In this paper I would like to expose the vision that has driven MPEG since its early days, a vision that explains its success and promises to continue it.

Today it is hard to believe the state of television 30 years ago. Each country retained the freedom to define its own baseband and transmission standards for terrestrial, cable and satellite distribution. The baseband and distribution standards of package media (video cassettes, laser discs etc.) were “international” but controlled by a handful of companies (remember Betamax and VHS).

With digital technologies coming of age, everybody was getting ready for more of the same. The only expected difference was the presence of a new player – the telecom industry – who saw digital as the way to get into the video distribution business.

Figure 1 depicts the situation envisaged in each country or region or industry: continuing the analogue state of things, each industry would, as a matter of principle, have a different digital baseband.

Figure 1 – Digital television seen with analogue eyes

The MPEG magician played a magic trick on the global media industry, saying: look, we give you a single standard for the digital baseband that works for telecom, terrestrial, cable, satellite and package media distribution. There would be a lot to say – and to learn – about how a group of compression experts convinced a global industry worth hundreds of billions of USD, but let's simply say that it worked.

Figure 2 shows the effect of the MPEG magic and how different the digital television industry turned out to be from the one depicted in Figure 1: all industries shared the same media compression layer.

Figure 2 – The digital television distribution

Since 1994, when MPEG-2 was adopted, MPEG has managed the standards of the media compression layer for the 5 industries. An important side effect was the birth of a new “media compression layer industry” – global this time – partly from pieces of the old industry and partly from entirely new pieces.

This was only the beginning of the story because, in the meantime, the internet had matured into a (kind of, at that time) broadband distribution infrastructure for fixed and mobile access. MPEG took notice and developed the standards that would serve these new industries while still serving the old ones. Figure 3 illustrates the new configuration of the industry, which is largely the one that exists today.

Figure 3 – The digital media distribution

So the magic worked again. Looking back some 20 years, the industries that embraced the MPEG magic have no reason for regret, as they have seen constant growth supported by the best media compression standards:

  • Digital Media revenues amount to 126.4 B$ in 2018, steadily increasing over the last few years
  • Digital TV and video industry, including e.g. Netflix and Amazon, are expected to be worth 119.2 B$ in 2022, up from 64 B$ in 2017
  • Digital ad spending overtook TV ad spending in 2017 with a record spending of 209 B$ worldwide.

In another paper I reported that the Italian ISO member body has requested ISO to establish a Data Compression Technologies (DCT) Technical Committee (TC). That proposal represents an extension of the model described above and is represented in Figure 4 (the new industries mentioned are the likely first targets of the DCT TC).

Figure 4 – The data compression industry

The DCT TC will provide data compression standards for all industries that need data compression to do their job better. The field of endeavour called “data compression” generates standard algorithms, expressed in abstract languages like mathematical formulae or code snippets, for implementation in software or silicon across a variety of application domains.

I look forward to the new MPEG magic played by the Data Compression Technologies Technical Committee to provide new records in sustained growth to new industries.

Compression standards for the data industries


In my post Compression – the technology for the digital age, I called data compression “the enabler of the evolving media-rich communication society that we value”. Indeed, data compression has freed the potential of digital technologies in facsimile, speech, photography, music, television, video on the web, on mobile, and more.

MPEG has been the main contributor to the stellar performance of the digital media industries: content, services and devices – hardware and software. Something new is brewing in MPEG because it is applying its compression toolkit to other non-media data such as point clouds and DNA reads from high speed sequencing machines, and plans on doing the same on neural networks.

Recently UNI – the Italian ISO member body – has submitted to ISO the proposal to create a Technical Committee on Data Compression Technologies (DCT, in the following) with the mandate to develop standards for data compression formats and related technologies to enable compact storage as well as inter-operable and efficient data interchange among users, applications and systems. MPEG activities, standards and brand should be transferred to DCT.

With its track record, MPEG has proved that it is possible to provide standard data compression technologies that are the best in their class at a given time to serve the needs of the digital media industries. The DCT proposal is to extend the successful MPEG “horizontal standards” model to the “data industries” at large, while of course retaining the media industries.

Forbes proves that giving all industries the means to enjoy the benefits of more data, accessed and used by systematically applying standard data compression to all data types, is not an option but a necessity: estimates indicate that by 2025 the world will produce 163 Zettabytes of data. What will we do with those data, when today only 1% of the data created is actually processed?

Why Data Compression is important to all

Handling data is important for all industries: in some cases it is their raison d’être, in other cases it is crucial to achieve the goal and in still others data is the oil lubricating the gears.

Data appear in manifold scenarios: in some cases a few sources create huge amounts of continuous data, in other cases many sources create large amounts of data and in still others each of a very large number of sources creates small discontinuous chunks of data.

Common to all scenarios is the need to store, process and transmit data. For some industries, such as telecommunication and broadcasting, early adopters of digitisation, the need was apparent from the very beginning. For others the need is gradually becoming apparent now.

Let’s see in some representative examples why industries need data compression.

Telecommunication. Because of the nature of their business, telecommunication operators (telcos) have been the first to be affected by the need to reduce the size of digital data to provide better existing services and/or attractive new services. Today telcos are eager to make their networks available to new sources of data.

Broadcasting. Because of the constraints posed by the finite wireless spectrum on their ability to expand the quality and range of their services, broadcasters have always welcomed more data compression. They have moved from Standard Definition to High Definition, then to Ultra High Definition and beyond (“8k”), but also to Virtual Reality. For each quantum step in the quality of service delivered, they have introduced new compression. More engaging future user experiences will require the ability to transmit or receive ever more data, and ever more types of data.

Public security. MPEG standards are already universally used to capture audio and video information for security or monitoring purposes. However, technology progress enables users to embed more capabilities in (audio and video) sensors, e.g. face recognition, counting of people and vehicles etc., and to share that information in a network of increasingly intelligent sensors that drive actuators. New standard data compression technologies are needed to support the evolution of this domain.

Big Data. In terms of data volume, audio and video, e.g. those collected by security devices or vehicles, are probably the largest component of Big Data, as shown by the Cisco study forecasting that by 2021 video will account for more than 80% of total internet traffic. Moving such large amounts of information from the source to the processing cloud in an economic fashion requires data compression, and processing them requires standards that allow the data to be processed independently of the information source.

Artificial intelligence uses different types of neural networks, some of which are “large”, i.e. occupy many Gigabytes and demand massive computation. To practically move intelligence across networks, as required by many consumer and professional use scenarios, standard data compression technologies are needed. Compression of neural networks is not only a matter of bandwidth and storage memory, but also of power consumption, timeliness and usability of intelligence.

Healthcare. The healthcare world is already using genomics, but many areas will greatly benefit from a 100-fold reduction in the size of, and the time to access, the data of interest. In particular, compression will accelerate the coming-of-age of personalised medicine. As healthcare is often a public policy concern, standard data compression technologies are required.

Agriculture and Food. Almost anything related to agriculture and food has a genomic source. The ability to easily process genomic data thanks to compression opens enormous possibilities to have better agriculture and better food. To make sure that compressed data can be exchanged between users, standardised data compression technologies are required.

Automotive. Vehicles are becoming more and more intelligent devices that drive and control their movement by communicating with other vehicles and fixed entities, sensing the environment, and storing the data for future use (e.g., for assessing responsibilities in a car crash). Data compression technologies are required and, especially when security is involved, the technologies must be standard.

Industry 4.0. The 4th industrial revolution is characterised by “Connection between physical and digital systems, complex analysis through Big Data and real-time adaptation”. Collaborative robots and 3D printing, the latter also for consumer applications, are main components of Industry 4.0. Again, data compression technologies are vital to make Industry 4.0 fly and, to support multi-stakeholder scenarios, technologies should be standard.

Business documents. Business documents are becoming more diverse and include different types of media. Storage and transmission of business documents are a concern when bulky data are part of them. Standard data compression technologies are the principal way to reduce the size of business documents now and, even more so, in the future.

Geographic information. Personal devices consume more and more geographic information and, to provide more engaging user experiences, the information itself is becoming “richer”, which typically means “heavier”. To manage this amount of data, compression technologies must be applied. Global deployment to consumers requires that the technologies be standard.

Blockchains and distributed ledgers enable a host of new applications. Distributed storage of information implies that more information is distributed and stored across the network, hence the need for data compression technologies. These new global distributed scenarios require that the technologies be standard.

Which Data Compression standards?

Data compression is needed if we want to be able to access all the information produced or available anywhere in the world. However, as the amount of data and the number of people accessing it grows, new generations of compression standards are needed. In the case of video, MPEG has already produced five generations of compression standards and one more is under development. In the case of audio, five generations of compression standards have already been produced, with the fifth incorporating extensive use of metadata to support personalisation of the user experience.

MPEG compression technologies have had, and continue to have, extraordinarily positive effects on a range of industries with billions of hardware devices and software applications that use standards for compressing and streaming audio, video, 3D graphics and associated metadata. The universally recognised MP3 and MP4 acronyms demonstrate the impact that data compression technologies have on consumer perception of digital devices and services around the world.

Non-interoperable silos, however, are the biggest danger in this age of fast industry convergence, a danger that only international standards based on common data compression technologies can avert. Point Clouds and Genomics show that data compression technologies can indeed be re-used for different data types from different industries. Managing different industry requirements is an art, and MPEG has developed it over 30 years of dealing with industries such as telecom, broadcasting, consumer electronics, IT, media content and service providers and, more recently, bioinformatics. DCT can safely take up the challenge and do the same for more industries.

How to develop Data Compression standards?

As MPEG has done for the industries it already serves, DCT should only develop “generic” international standards for compression and coded representation of data and related metadata suitable for a variety of application domains so that the client communities can use them as components for integration in their systems.

The process adopted by MPEG should also be adopted by DCT, namely:

  • Identification of data compression requirements (jointly with the target industry)
  • Development of the data compression standard (in consultation with the target industry)
  • Verification that the standard satisfies the agreed requirements (jointly with the target industry)
  • Development of test suites and tools (in consultation with the target industry)
  • Maintenance of the standards (upon request of the target industry).

Data Compression is a very specialised field that many technical and business communities in specific domains are ill-equipped to master satisfactorily. Even if an industry succeeds in attracting the necessary expertise, the following will likely happen:

  1. The result is less than optimal compared to what could have been obtained from the best experts;
  2. The format developed is incompatible with other similar formats with unexpected inter-operability costs in an era of convergence;
  3. The implementation cost of the format is too high because an industry may be unable to offer sufficient returns to developers;
  4. Test suites and tools cannot be developed because a systematic approach cannot be improvised;
  5. The experts who have developed the standard are no longer around to ensure its maintenance.

Building the DCT work plan

The DCT work plan will be decided by the National Bodies joining it. However, the following is a reasonable estimate of what that work plan will be.

Data compression for Immersive Media. This is a major current MPEG project that comprises systems support for immersive media; video compression; metadata for immersive audio experiences; immersive media metrics; immersive media metadata; and network-based media processing (NBMP). A standard for systems support (OMAF) has already been produced, a standard for NBMP is planned for 2019, a video standard in 2020 and an audio standard in 2021. After completing the existing work plan, DCT should address the promising light field and audio field compression domains to enable really immersive user experiences.

Data compression for Point Clouds. This is a new, but already quite advanced, area of work for MPEG. It makes use of established MPEG video and 3D graphics technologies to provide solutions for entertainment and other domains such as automotive. The first standard will be approved in 2019, but DCT will also work on new generations of point cloud compression standards for delivery in the early 2020s.

Data compression for Health Genomics. This is the first entirely non-media field addressed by MPEG. In October 2018 the first two parts – Transport and Compression – will be completed, and the other 3 parts – API, Software and Conformance – will be released in 2019. The work is done in collaboration with ISO/TC 276 Biotechnology. Studies for a new generation of compression formats will start in 2019, and DCT will need to drive those studies to completion, along with studies of other data types generated by the “health” domain for which data compression standards can be envisaged.

Data compression for IoT. MPEG is already developing standards for the specific “media” instance of IoT called the “Internet of Media Things” (IoMT). This work partly relies on the MPEG standard called MPEG-V – Sensors and Actuators Data Coding – which defines a data compression layer that can support different types of data from different types of “things”. The first generation of standards will be released in 2019. DCT will need to liaise with the relevant communities to drive the development of new generations of IoT compression standards.

Data compression for Neural Networks. Several MPEG standards are or will soon be employing neural network technologies to implement certain functionalities. A “Call for Evidence” was issued in July 2018 to gather evidence of the state of compression technologies for neural networks, after which a “Call for Proposals” will be issued to obtain the necessary technologies and develop a standard. The end of 2021 is a reasonable estimate for the first neural network compression standard. However, DCT will need to investigate which other compression standards this extremely dynamic field will require.

Data compression for Big Data. MPEG has already adapted the ISO Big Data reference model for its “Big Media Data” needs. Specific work has not begun yet and DCT will need to get the involvement of relevant communities, not just in the media domain.

Data compression for health devices. MPEG has considered the need for compression of data generated by mobile health sensors in wearable devices and smartphones to cope with their limited storage, computing, network connectivity and battery. DCT will need to get the involvement of the relevant communities and develop data compression standards for health devices that promote their effective use.

Data compression for Automotive. One of the point cloud compression use cases – efficient storage of the environment captured by sensors on a vehicle – is already supported by the Point Cloud Compression standard under development. There are, however, many more types of data that are generated, stored and transmitted inside and outside a vehicle for which data compression has positive effects. DCT can offer its expertise to the automotive domain to achieve new levels of efficiency, safety and comfort in vehicles.

The list above includes standards MPEG is already deeply engaged in or is already working on. However, the range of industries that can benefit from data compression standards is much broader than those mentioned above (see, e.g., Why Data Compression is important to all), and the main role of DCT will be to actively investigate the data compression needs of industries, get in contact with them and jointly explore new opportunities for standards development.

Is DCT justified?

The purpose of DCT is to make data compression standards – the key enabler of devices, services and applications generating digital data – accessible to industries and communities that do not have the extremely specialised expertise needed to develop and maintain such standards on their own.

The following collects the key justifications for creating DCT:

  1. Data compression is an enabling technology for any digital data. Data compression has been a business enabler for media production and distribution, telecommunication, and Information and Communication Technologies (ICT) in general by reducing the cost of storing, processing and transmitting digital data. Therefore, data compression will also facilitate enhanced use of digital technologies in other industries that are undergoing – or completing – the transition to digital. As happened for media, by lowering the threshold of access to the business, in particular for SMEs, data compression will drive industries to create new business models that will change the way companies generate, store, process, exchange and distribute data.
  2. Data compression standards trigger virtuous circles. By reducing the amount of data required for transmission, data compression will enable more industries to become digital. Being digital will generate more data and, in turn, further increase the need for data compression standards. Because compressed digital data are “liquid” and easily cross industry borders, “horizontal”, i.e. “generic”, data compression standards are required.
  3. Data compression standards remove closed ecosystems bottlenecks. In closed environments, industry-specific data compression methods are possible. However, digital convergence is driving an increase in data exchange across industry segments. Therefore industry-specific standards will result in unacceptable bottlenecks caused by a lack of interoperability. Reliable, high-performance and fully-maintained data compression standards will help industries avoid the pitfalls of closed ecosystems that limit long-term growth potential.
  4. Sophisticated technology solutions for proven industry needs. Data compression is a highly sophisticated technology field with 50+ years of history. Creating efficient data compression standards requires a body of specialists that a single industry can ill afford to establish and, more importantly, maintain. DCT will ensure that the needs for specific data compression standards can always be satisfied by a body of experts who identify requirements with the target industries, develop standards, test for satisfactory support of requirements, produce testing tools and suites, and maintain the standards over the years.
  5. Data compression standards to keep the momentum growing. The industries that have most intensely digitised their products and services prove that their growth is due to their adoption of data compression standards. DCT will offer other industries and communities the means to achieve the same goal with the best standards, compatible with other formats to avoid interoperability costs in an age of convergence, with reduced implementation costs because suppliers can serve a wide global market and with the necessary conformance testing and maintenance support.
  6. Data compression standards with cross-domain expertise. While the nature of “data” differs depending on the source of data, the MPEG experience has shown that compression expertise transfers well across domains. A good example is MPEG’s Genome Compression standard (ISO/IEC 23092), where MPEG compression experts work with domain experts, combining their respective expertise to produce a standard that is expected to be widely used by the genomic industry. This is the model that will ensure sustainability of a body of data compression experts while meeting the requirements of different industries.
  7. Proven track record, not a leap in the dark. MPEG has 1400 accredited experts, has produced 175 digital media-related standards used daily by billions of people and collaborates with other communities (currently genomics, point clouds and artificial intelligence) to develop non-audiovisual compression standards. Thirty years of successful projects prove that the MPEG-inspired method proposed for DCT works. DCT will have adequate governance and structure to handle relationships with many disparate client industries with specific needs and to develop data compression standards for each of them. With expanding industry support, a track record, a solid organisation and governance, DCT will have the means to accomplish the mission of serving a broad range of industries and communities with its data compression standards.


According to the ISO directives, these are the steps required to establish a new Technical Committee:

  1. An ISO member body submits a proposal (done by Italy)
  2. The ISO Central Secretariat releases a ballot (end of August)
  3. All ISO member bodies vote on the proposal (during 12 weeks)
  4. If the ISO member bodies accept the proposal, the matter is brought to the Technical Management Board (TMB)
  5. The TMB votes on the proposal (26 January 2019)

An (optimistic?) estimate for the process to end is spring 2019.


Erasmus and migration

The apex of the Renaissance came around the turn of the 15th and 16th centuries. Learned men communicated freely, with the feeling of belonging to a whole that was shared by their minds and was, by definition, borderless.

No other man better symbolises the community of minds that hovered over the geographical expression called Europe than Erasmus of Rotterdam.

Then came Martin Luther and decades of wars of religion. Other wars sought to establish ever stronger national identities. The common language itself – Latin, still learnt, praised and practised until recently – was gradually replaced by national languages.

A century and a half later the other side of the Atlantic saw a grand example of nation building: the United States of America. The borders of the new entity were fuzzy at best but, in case it was not clear to the ex-colonists, the occupation of Washington during the American-British War of 1812 reminded them that they had better have a Commander-in-Chief to deal with foreign powers. I am not sure I like the idea of a single person being able to decide what to do with those who set foot in the USA “illegally”, but there is no doubt that all the facets of that power have played a major role in making the USA the power that it is today.

Another century and a half later the extreme eastern end of Europe saw another grand example of nation building: the Union of the Soviet Socialist Republics. Over the centuries the czars of Russia had tried to bring the higher classes of their empire closer to the more and more fractured community of minds that Europe had become. The czarist empire knew very well what borders were and indeed over the centuries Russia had become a huge multi-ethnic and multi-continental empire. Given the conditions of the moment, the czars’ successors took a minimalist approach to their country’s borders only to revert to expansionism when favourable (so to speak) conditions returned.

Fifty years later Europe saw another – so far – grand example of nation building: the European Union. Driven by a handful of visionaries who had learnt from 15 centuries of wars, and particularly from the world wars, Europe put in place a process that, starting from economic integration, aspired to achieve higher goals.

Clearly Europe has been built taking the Europe of Erasmus as a model. For decades Europe was a notion where citizens belonged to countries that had very strong roots in their territories but shared an Erasmus-like common ideal that would eventually cover the entire geographical Europe.

This noble plan has worked for a while. For decades students in Europe felt and behaved like Erasmus in the 15th century thanks to a program that, indeed, bears his name. Given time these young people would grow and become European citizens all feeling like members of a community like the learned men of five centuries ago.

Europe could have become the first example of a nation that, unlike all grand nations that had a border, only has intellectual borders. It is not going to happen because this noble plan is being crushed by a handful of migrants – in a population of half a billion people.

It would have been great to determine that you are European if you belong to the European community of minds, but now we must be able to determine that by some physical means, i.e. that “inside” you are European and “outside” you are a foreigner. Alas, we need an old-fashioned physical “border” to save the ideal of a borderless continent-wide community.

Europeans should be able, as they are, to move freely inside the physical space called Europe, where some foreigners will always find a way to get in. If foreigners are admitted into the physical space, we should strive to make them part of the continent-wide community.

That is a long-term endeavour that starts from the moment foreigners enter the physical space called the European Union. They should be taken in charge by the European Union, not by national states.

Caveat venditor

When I was a kid and free from school, I used to help my mother in a market place close to our town, where she ran a stall.

Our primary task was to sell wares (of course). The task second in importance was to make sure that the wares on display did not “inadvertently” end up in the pockets of some onlookers. We, the sellers, applied the caveat venditor (let the seller beware) principle and bewared.

I have since witnessed that this attitude is not peculiar only to my family or to those times. Throughout the years I have visited market places in different parts of the world, and I always saw sellers, whatever the local culture, behave in a similarly cautious way.

Now I have a question. Article 13 of the current draft (2018/06/20) of the new European Copyright Directive aimed at “adapting EU copyright rules to the digital environment” requires an “upload filter” whose function is to check that everything uploaded online in the EU does not infringe somebody’s copyright.

What does this mean? If you run a website where your customers upload content, you have to check that your customers’ content does not infringe somebody else’s copyright.

Why on earth should one do this? If my mother and I watched over our wares, and millions of people in all latitudes and longitudes watch over theirs, why should copyright holders be exempted from watching over their (digital) wares?

My mother and I cavimus, millions of people cavent, copyright holders caveant.

There are plenty of inexpensive technologies that allow copyright holders to watch over their content without putting gratuitous burdens on the shoulders of people who are just doing their own work.

30 years of MPEG, and counting?


Thirty years ago today, in Ottawa, ON, some 29 experts from 6 countries attended the 1st meeting of the Moving Picture Experts Group, which would become universally known as MPEG. Twenty-five days ago, in San Diego, CA, 20 times as many experts attended the 122nd MPEG meeting.

These 30 years have been an incredible ride.

MPEG’s mission is to produce digital media standards, and MPEG has carried it out without exception. Here are some facts:

  • MPEG has been engaged in 21 work items (ISO language for “standardisation areas”);
  • In one case the work item produced just one standard while, at the other extreme, MPEG-4 counts 34 standards;
  • MPEG has produced a total of 174 standards – an average of ~6 standards/year – and is working on a few tens more;
  • Some MPEG standards contain a few tens of pages, some others several hundreds and, in a few cases, over 1000 pages;
  • MPEG has produced several hundred standards amendments (ISO language for “extensions”);
  • Some standards have been published only once, some others a few times and the Advanced Video Coding standard (AVC) 8 times (and a 9th edition is in preparation).

These numbers may look impressive, but they have to be assessed in context. The Joint ISO/IEC Technical Committee 1 (JTC 1), to which MPEG belongs, counts more than 100 working groups. MPEG, with just 1/10 of all JTC 1 experts, produces 10 times more standards than the average JTC 1 working group.

Clearly MPEG has done a lot in the past 30 years, but what about the current level of activity? In the last 30 months (i.e. in the last 10 meetings), MPEG has been working on more than 200 “tracks” (by track I mean an activity that develops working drafts, standards or amendments).

One reason for the interest aroused by MPEG standards is MPEG’s practice of communicating its plans to, collecting requirements from, and sharing results with some 50 different bodies that work on related areas. It also offers – and receives – collaboration from other ISO and ITU-T groups on specific standards.

Publishing standards – like writing books – is one measure of productivity. Not unlike a book, however, a standard does not help if it stays on the shelves of the ISO Central Secretariat. Therefore, to be sure that MPEG has meaningfully accomplished its mission, we must make sure that its standards are used in products, services and applications.

Are all 174 MPEG standards widely used? No. Just as some products of a company sell like hot cakes while others stay in the company stores, some MPEG standards are widely used and others only to some extent.

“Widely”, however, is an analogue measure. A better, digital, measure is “billion” that applies to a number of MPEG standards:

  • MPEG-1 Video was the first standard to cross the level of 1 billion users (Video CD players);
  • MPEG-1 Audio layer 2 is present even today in most TV set top boxes;
  • MPEG-1 Audio layer 3 (aka MP3) has been in use for the last 20 years, in portable audio players and now in all handsets and PCs;
  • MPEG-2 is used in all television set top boxes, DVDs and BluRay;
  • MPEG-4 AAC and AVC have been standard in TV set top boxes for more than 10 years, as well as in mobile handsets, BluRays and PCs;
  • The MPEG file format is used every time a video is stored on or transmitted to a mobile handset, so even “billion” may not be the right measure…

Some other MPEG standards are used more “moderately” and for these the unit of measure is just “hundred million”. This is the case for e.g. MPEG-H for new generation broadcasting and DASH for internet streaming.

Such intense use of MPEG standards explains the many amendments and editions, and the “longevity” of some MPEG standards: extensions are still being made to MPEG-2 Systems (after 24 years), the MPEG file format (after 19 years), AVC (after 15 years) and so on.

Are you surprised to know that MPEG has received 5 (five) Emmy Awards?

Another thirty years await MPEG, if some mindless industry elements do not get in the way.

The MPEG machine is ready to start (again)

MPEG has developed standards that are used daily by billions of people. A non-exhaustive list includes MP3, Advanced Audio Coding (AAC), MPEG-2 Systems and Video, Advanced Video Coding (AVC), the MP4 File Format and DASH. Other MPEG standards were widely used in the past but their use is gradually fading out, such as MPEG-1 Video and MPEG-1 Audio Layer 2.

This is because MPEG is always ready to exploit the latest technology innovations to create new standards that offer new advantages by outperforming previous generations.

At the San Diego, CA meeting, JVET – a joint MPEG (ISO) and VCEG (ITU) working group tasked with the development of a new video compression standard – received 46 proposals from 32 organisations in response to the Call for Proposals issued in October 2017.

The target of the new standard is to achieve a compression ratio yielding a bitrate reduction of at least 50% relative to HEVC, including for High Dynamic Range (HDR) content, in addition to providing native support for such emerging applications as 360° omni-directional video.

I am really proud to say that the tests showed that several proposals in many instances already exceeded a 40% bitrate reduction compared to HEVC. Considering the technology power of the MPEG machine, and that there are 30 months to the expected time of approval of the new standard (October 2020), there is no doubt that Versatile Video Coding (VVC), the name of the new video coding standard, will reach and probably exceed the target.
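To see what such reductions mean in practice, here is a small back-of-the-envelope sketch; the HEVC bitrates are assumptions chosen for illustration, not measured figures:

```python
# Illustrative (assumed) HEVC bitrates for typical services, in Mbps
hevc_bitrates_mbps = {
    "HD streaming": 5.0,
    "4K streaming": 16.0,
    "360-degree video": 25.0,
}

def reduced_bitrate(bitrate_mbps: float, reduction: float) -> float:
    """Bitrate needed at the same quality after a fractional reduction."""
    return bitrate_mbps * (1.0 - reduction)

for service, mbps in hevc_bitrates_mbps.items():
    print(f"{service}: HEVC {mbps:.1f} Mbps -> "
          f"40% reduction {reduced_bitrate(mbps, 0.40):.1f} Mbps, "
          f"50% reduction {reduced_bitrate(mbps, 0.50):.1f} Mbps")
```

A 50% reduction means that a channel that today carries one HEVC service could carry two VVC services of the same quality, which is why the target matters most where delivery infrastructure is weakest.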

MPEG is always working to provide new and better benefits for more humans on the Earth. In the case of VVC, those still disadvantaged in delivery infrastructure will be able to access services from which they have so far been excluded. Others will be able to enjoy more involving experiences.

Of course this will only happen if mindless industry elements do not blow the opportunity again.

IP counting or revenue counting?

In a now distant past companies used to be run by engineers. If a company had the right product – backed by good research and developed, indeed, by engineers – people would buy it. Then having a good product was not sufficient, and many companies decided that the company had to be run by marketers. Then good marketing was not sufficient either, and many companies decided that accountants should run them. Eventually many companies came to be run by lawyers, because compliance had become the priority. I do not have examples yet (we already have some from politics), but I expect that soon companies will be run by actors, and people will buy the products of a company whose brand has been “sold” by a good actor.

I do not think this is the right approach. Of course we do not want as CEOs engineers who, like hammers, see everything as a nail, or marketers who could not care less about what is inside provided they “feel” the packaging, or accountants whose sole purpose in life is the next quarterly report, or lawyers who see everything in terms of compliance, or actors who impersonate the company as if it were Othello or Desdemona.

I think CEOs should be the synthesis of all this. CEOs should be able to integrate the functions inside their companies, overcoming the downsides of sectorial MBOs.

Unfortunately reality seldom matches my beliefs.

Strictly speaking this is “someone else’s problem” (my company is small and I run all the functions I mentioned), but this situation has a strong impact on my alias, the MPEG convenor. The impact is not on the quality of the standards MPEG produces but on their viability when the standards leave the committee.

In MPEG the driving force is the researchers: engineers, computer scientists, university professors, entrepreneurs and more. They love talking with their peers as if they were at a conference. Actually, I know they have better feelings because, unlike at conferences, they can have memorable battles with their peers and hopefully have their ideas accepted into a standard.

Researchers typically work hand-in-hand with their companies’ IP attorneys. Both are typically rewarded on the basis of “number of patents in standards”. Although not the general rule, the IP attorney position is peculiar in that very often they are close to the CEO who is certainly attracted by the prospect of counting the dollars flowing through future royalties.

But the player in the company who is really concerned by standards is the product department. They need standards, but they do not actually care very much whether those standards contain IP contributed by company researchers.

Unfortunately many CEOs see the product department’s role very much as the captain of a 19th century steamship saw the stokers: they take it for granted that things run. Therefore the value of products as such, and their dependence on coal – I mean standards – is not properly represented to the eyes of CEOs. Product departments are certainly happy to see that their company has contributed a lot of IP to the standards they need, but they would be much happier if those standards were also usable.

CEOs should open their ears not only to their IP and research department heads but also to their product department heads, because it is the products leaving the latter that generate the revenues. Awarding a “bigger bonus” for “more patents” is good, but it can become evil if the award is not connected to the actual use of the standards containing the IP.

This is not theory; it is a sad reality for one of the most important MPEG standards: High Efficiency Video Coding (HEVC). 63 months (5 years and 3 months!) after its approval, the licensing of HEVC (which is outside MPEG’s purview) can be described as follows: ~1/3 of the ~45 patent holders have not published the licence for their IP, and ~2/3 have joined one of the 3 existing patent pools, only two of which have published the licence for the IP they administer.

MPEG is embarking on a new video compression standard, even though the licensing of the previous HEVC standard is in the state I have described. CEOs had better not behave like 19th century captains. The stokers are doing their work, but there are others whose actions have to be reined in.