Understanding video formats and settings

What format is best for recording and saving your video? How will frame rate and compression settings such as Long GOP affect your possible recording time? Find out all about video formats and recording options.

In the same way that you can save a still image as a JPEG, a HEIF file or a RAW file, there are also multiple file format options for storing video files. Things are a little more complicated with video formats, however, because there are more variables. Here we'll explain the most common video formats and help you make sense of the related menu options available.

Unlike image files, video files have multiple components, including a codec and a container. The codec is the software layer that encodes and decodes the video data during recording and playback – the video counterpart of the compression algorithms used for image files. The majority of codecs are described as "lossy" because when the data is compressed to save space, some of the original video data is discarded in the process.

The container or wrapper bundles the video's picture and audio data together, along with subtitles and other metadata. The container behaves like a single file, and when people talk about video file formats this usually means container file formats, such as MP4 or XF-AVC. However, when you select a video format in the menu on your camera, you usually choose a container plus codec combination, such as MP4 (HEVC) or MP4 (H.264), not just MP4.

It is possible to save RAW video, and some cameras do offer this option, but RAW video files are huge – if you're filming at 25 frames per second, that means capturing the equivalent of 25 full RAW frames for every second of video, which will really test your kit's data bandwidth and card write speeds, as well as filling up your storage space very quickly.
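
To get a feel for the numbers involved, here is a rough back-of-the-envelope sketch in Python. The frame size, bit depth and frame rate below are illustrative assumptions rather than figures for any particular Canon camera, but they show why uncompressed RAW video demands so much bandwidth and storage.

```python
# Rough estimate of the data rate for uncompressed RAW video.
# All figures are illustrative assumptions, not specs for any camera.

width, height = 4096, 2160   # assumed 4K DCI frame
bits_per_pixel = 12          # assumed 12-bit RAW data per photosite
fps = 25                     # PAL frame rate

bits_per_frame = width * height * bits_per_pixel
megabits_per_second = bits_per_frame * fps / 1_000_000

print(f"≈ {megabits_per_second:,.0f} Mbit/s "
      f"(≈ {megabits_per_second / 8000:.2f} GB per second)")
# ≈ 2,654 Mbit/s (≈ 0.33 GB per second) – well over a terabyte per hour
```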

For this reason, just as with image files, various methods are used to reduce the size of video files and make them easier to handle. This includes different compression methods and colour sampling systems. Usually, you can choose between compression methods in your camera's menu under Movie recording size, but the colour sampling is determined by the file format settings. So let's look at the common video formats available, and then consider other settings that affect video file size.

A woman walks through autumnal woodland, carrying a Canon EOS C200 video camera at her side.

Whether you're new to shooting video or just unsure of the best settings to use for the job, a thorough understanding of video formats and options will help you make the best choices as you work.

What are the different formats for video?

These are the industry standard video formats available on modern Canon cameras (not all are available on every model):

  • MP4 (H.264): MP4 is a container file format, so you will see different MP4 variations. H.264 (or AVC) is the video compression codec most widely used for digital video today, particularly for streaming services, and these files can be played back on almost any device.
  • MP4 (HEVC): HEVC stands for High Efficiency Video Coding. This codec – also known as H.265 – offers around 50% better compression efficiency than H.264, meaning it produces smaller files and requires lower bandwidth when streaming. It was also the first widely adopted codec to support 8K resolution.
  • XF-AVC: This is a file format developed by Canon specifically for 4K DCI or 4K UHD footage. Designed for professional workflows, it is widely used by creatives recording high-resolution footage. The file name extension is .MXF.
  • RAW: Much as when you capture RAW stills, some Canon cameras enable you to shoot video in RAW. RAW files contain all the colour and tonal information and image detail captured by the sensor, which is hugely useful because it allows more headroom in editing. Videographers value it particularly for capturing the widest dynamic range in a scene, enabling them to make the most of both the highlight and shadow detail in post-production.
  • Cinema RAW Light: RAW format filming has huge advantages, but one challenge is that the file sizes are very large, which can impact your workflow. A good solution is Canon's Cinema RAW Light, introduced with the Canon EOS C200 video camera. This format delivers a vast dynamic range but dramatically reduces the size of the files. The file name extension is .CRM.
  • MOV is an alternative container format, convenient for editing footage on a computer. MOV files offer high quality, with excellent codec options including various lossy and lossless ProRes codecs for high-resolution footage, but the files can be very large. Some Canon DSLRs support shooting video in MOV format. With a modern Canon mirrorless camera, cine camera or camcorder, however, you would shoot in other formats and transcode to MOV if necessary for post-production, or export footage as MOV for delivery if this format is specified.
  • In the same way, you might sometimes wish to export to other formats such as WMV (Windows Media Video format) for specific delivery requirements, but all the possible permutations of post-production and delivery workflows are outside our scope here.

A pair of hands hold a Canon EOS R5 C, selecting a movie recording format option on the menu screen.

The Canon EOS R5 C offers a comprehensive range of video formats to record in, from a choice of RAW formats to MP4 (H.264). The Main Rec Format menu includes information about the bit depth and Chroma sub-sampling method of most formats as well – read on to find out all about these.

The back of a Canon camera, showing the Movie rec. size menu screen options.

In addition to a choice of video format, Canon cameras offer a range of settings on the Movie rec. size menu screen that will determine the quality of video recorded and the resulting file sizes. The screen helpfully displays the total recording time you can expect to achieve using the settings selected.

Recording size, frame rate and compression

We've mentioned that there are many variables when it comes to video formats. In addition to selecting a codec and container format, you can choose from a range of settings that will determine the quality of your video and the resulting file sizes, which in turn affect the duration of recording possible on your memory card.

In your camera menu under Movie rec. size, you can set three important parameters:

  • Recording size: This is what is usually called resolution in still photography – the number of pixels in each frame. Common settings available are:
    • 4K DCI (4K-D in menus): 4096 x 2160 pixels.
    • 4K UHD (4K-U in menus): 3840 x 2160 pixels.
    • Full HD (FHD in menus): 1920 x 1080 pixels.
  • Other options are available on different cameras, particularly pro video cameras, including 8K DCI (8192 x 4320) and 8K UHD (7680 x 4320) on the Canon EOS R5 and EOS R5 C. As you would expect, filming in 4K produces larger file sizes than filming in Full HD, other settings being equal, but normally you would simply select the frame size appropriate for your intended output requirements, or else record at the best available resolution and resize for output later, depending on the post-production options available to you.
  • Frame rate: This defines how frequently video frames are captured, expressed in frames per second (fps). The options available depend on what is selected under the general settings in Video system: PAL (Europe) or NTSC (North America and Japan). The standard frame rate is 25fps for PAL or approximately 30fps (29.97fps) for NTSC. Footage shot at 50/60fps can be slowed to half-speed slow motion when played back at 25/30fps in post-production, and higher frame rates such as 100/120fps (or sometimes more) are available on some cameras for super-slow-motion effects at standard playback speeds. You'll notice that most of the settings available end in p, such as 50p, but some end in i, such as 50i or 59.94i. The p stands for Progressive and the i for Interlaced. With interlaced scanning, a display refreshes alternate lines of the image on-screen – first the even-numbered lines, then the odd-numbered lines, and so on. This happens so quickly (50 or 60 fields every second) that the eye perceives a complete picture, but each field technically contains only half the image detail. All modern computer screens and TVs, as well as video on the internet, use progressive scanning, where each frame contains the entire image. However, broadcast video used to be interlaced as standard, because this delivers higher apparent resolution when bandwidth is constrained. Today, 50i is still the broadcast standard in Europe, but broadcast video can be either interlaced or progressive, and modern screens automatically de-interlace any incoming interlaced signals. Video is now typically shot using progressive settings and converted to interlaced when this is the specified delivery format. If you do use an interlaced setting, note that the quoted number is the field rate; the frame rate is half that number, so 50i is 25fps and 59.94i is 29.97fps.
  • Compression method: In addition to the codec used to encode and decode the video, you can often select a compression method. This does not determine the codec or type of compression used, but rather specifies how the codec is to be applied:
    • All-I: the I stands for Intraframe, and in this method each individual frame is compressed, one at a time. This does not produce file sizes as small as the other methods available, but potentially results in better quality, which is ideal for editing in particular – because there is more picture information, the files can withstand more extensive editing in post-production.
    • IPB (Standard): also known as GOP (Group of Pictures), this method analyses one keyframe (the I frame) and subsequently records only the differences between frames rather than the full picture information for each frame. It does this using two kinds of frame: P frames (Predicted frames), which record what has changed from the previous frame, and B frames (Bi-directional predicted frames), which can reference picture information in both the previous and subsequent frames. Each I frame may be followed by a variable number of P and B frames, and depending on how much stays the same between frames, this method can result in significant file size savings (a rough illustration follows this list).
    • IPB (Light): this method uses the same principles as IPB (Standard) but records the video at a lower bit rate (more about this shortly). As a result, the file sizes will be smaller and the playback compatibility will be higher.
    • Long GOP: this generally refers to an extended Group of Pictures (more than 15 frames). Long GOP is the term usually used on Cinema EOS cameras for the inter-frame compression method; on EOS hybrid cameras, the usual term is IPB. The two are basically the same method, but they differ in the number of frames in the GOP – Long GOP has more P and B frames to one I frame, meaning file sizes are smaller but the video quality is lower, although this may not be very noticeable at lower resolutions.
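
As a loose illustration of why inter-frame compression saves so much space, the toy calculation below compares an All-I stream with an IPB-style Group of Pictures. The GOP pattern and the relative frame sizes are invented for the example – real encoders vary both constantly depending on the content – so treat the output as a sketch of the principle rather than real compression figures.

```python
# Toy comparison of All-I vs IPB (GOP) compression.
# The GOP pattern and relative frame sizes are illustrative assumptions,
# not real encoder behaviour.

gop = ["I", "B", "B", "P", "B", "B", "P", "B", "B", "P", "B", "B"]  # 12-frame GOP

# Assumed size of each frame type relative to a fully coded I frame.
relative_size = {"I": 1.0, "P": 0.4, "B": 0.2}

all_i_size = len(gop) * relative_size["I"]             # every frame coded in full
ipb_size = sum(relative_size[frame] for frame in gop)  # only the I frame coded in full

print(f"All-I: {all_i_size:.1f} units")
print(f"IPB  : {ipb_size:.1f} units ({ipb_size / all_i_size:.0%} of the All-I size)")
# A longer GOP (more P and B frames per I frame) pushes this ratio down further,
# which is why Long GOP recordings are smaller still.
```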

An illustration of Chroma sub-sampling, showing the Luma and Chroma components sampled at ratios of 4:4:4, 4:2:2 and 4:2:0 respectively.

Chroma sub-sampling is a technique for reducing file sizes by discarding some colour information while retaining luminance or brightness information. The process is referred to using different terms such as YCC, YCbCr and YUV, but in essence works as shown. The ratio 4:4:4 means that in a block of eight pixels (4x2), Luma (luminance) and Chroma (colour) information is retained for all pixels. The ratio 4:2:2 means all pixels have Luma information but only 2 in the first row and 2 in the second retain Chroma information, which is then simply copied to the adjacent pixels. The ratio 4:2:0 means 2 pixels in the first row, but none in the second, have Chroma information, which is again copied to adjacent pixels (in this case on both rows).

An illustration of Chroma sub-sampling on a larger scale, showing the Luma and Chroma components sampled at ratios of 4:4:4, 4:2:2 and 4:2:0 respectively.

Chroma sub-sampling viewed on a larger scale. The colour detail is clearly simplified from 4:4:4 (no sub-sampling) to 4:2:0 (middle column, top to bottom), but because the Luma information is intact, the image does not lose as much detail as you might expect (right-hand column).

Other factors affecting file size and recording time

The parameters we've looked at all affect the size of the video file, and therefore the duration of movie you can record on a memory card of given capacity. There are, however, other factors that affect video file size: bit depth, bit rate, and Chroma sub-sampling. You cannot normally set these directly at recording; instead, they are determined by the file format and resolution (movie size) selected.

Bit depth

Just as with still image files, this is the number of bits of digital data allocated to storing each pixel's tonal and colour information. Higher bit depth means more tonal and colour detail can be recorded, allowing for smoother tonal gradients and finer adjustments when editing. However, higher bit depth means larger file sizes. RAW video files are either 12-bit or 10-bit, XF-AVC and MP4 (HEVC) are 10-bit formats, and MP4 (H.264) is 8-bit. In the context of video, bit depth is sometimes called colour depth, in order to avoid confusion with the following term.
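
To put those bit depths into perspective, the quick sketch below counts the tonal levels available per colour channel. The arithmetic is general rather than specific to any camera or format.

```python
# Tonal levels per colour channel at common video bit depths.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits:>2}-bit: {levels:>5,} levels per channel")

#  8-bit:   256 levels per channel
# 10-bit: 1,024 levels per channel
# 12-bit: 4,096 levels per channel
# Each 2-bit step quadruples the tonal resolution, which is why 10-bit footage
# grades more smoothly than 8-bit – at the cost of larger files.
```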

Bit rate

Also known as data rate, this is the amount of information recorded (or played back) in one second, which affects both video quality and file size. In the case of video it is usually expressed in megabits per second (Mbit/s or Mbps). On the EOS R3, for example, the bit rate might be as high as 2,600Mbps (recording RAW video at 50fps in 6K resolution); filming in 4K DCI resolution at 25fps using XF-AVC with All-I compression might mean the bit rate is approximately 470Mbps.

Your camera manual will include a table of the bit rates at different settings along with the resulting file sizes and approximate recording times, allowing you to assess the most suitable settings for the job, calculate how many memory cards you'll need to take with you and, crucially, ensure you use cards with a high enough write speed to cope with the data rate.
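
If you don't have the manual to hand, the estimate is simple enough to sketch yourself. In the example below, the bit rate echoes the approximate XF-AVC All-I figure mentioned above and the card capacity is just an example value – substitute the real numbers from your camera's tables.

```python
# Rough recording time and write-speed check for a given bit rate.
# Example values only – consult your camera manual for actual bit rates.

bit_rate_mbps = 470     # e.g. 4K DCI 25fps XF-AVC All-I (approximate)
card_capacity_gb = 256  # card capacity in gigabytes (decimal, as marketed)

card_capacity_megabits = card_capacity_gb * 8_000   # 1 GB = 8,000 Mbit
recording_minutes = card_capacity_megabits / bit_rate_mbps / 60
required_write_mb_per_s = bit_rate_mbps / 8          # sustained MB/s needed

print(f"Recording time: about {recording_minutes:.0f} minutes")
print(f"Write speed   : at least {required_write_mb_per_s:.0f}MB/s sustained")
# Recording time: about 73 minutes
# Write speed   : at least 59MB/s sustained
```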

Just as importantly, video files with high bit rates require fast internet connections to view, and some can't be played on mobile devices. For YouTube, for example, it is recommended that 24fps 4K footage should be at 44-56Mbps. This becomes a factor to consider when encoding your footage for delivery. Also, a process called VBR (variable bit rate) is widely used to minimise file sizes for encoding and streaming. Some camcorders use VBR for recording, but in most cases this becomes relevant in post-production rather than recording.

Chroma sub-sampling

The human eye is more sensitive to brightness than to colour, so it is possible to compress the colour (Chroma) data in a video without perceptible loss of image quality, provided that the brightness (luminance or Luma) information is preserved. This is done using a sampling process: instead of recording the colour of every single pixel, the algorithm records the colour of a given number of pixels within a block, which is conventionally two rows of four pixels each. The sample is expressed as a ratio, such as 4:2:2 or 4:2:0.

A ratio of 4:4:4 indicates there is no sub-sampling. The first number specifies the size of the sample, in this case 4 pixels, which corresponds to the number of pixels in each row with Luma information. The next number tells you how many pixels in the top row have Chroma information (all four, in this case) and the last 4 means all four pixels in the bottom row have Chroma information. A ratio of 4:2:2 means only two pixels in the first row and two in the second row include Chroma information. This means in effect that in the block of eight pixels, each pair of pixels in each row is represented as the same colour, so only half as much colour data needs to be saved. A ratio of 4:2:0 means only two of the four pixels in the top row have Chroma information and none in the second row, meaning the top row is represented by only two colours and the bottom row simply reflects this, so in effect the block of eight pixels has become a block of two colours.

This sampling clearly simplifies the colours in the image, but there is still Luma information for each pixel, so image detail is not perceptibly compromised. Just as with compression, however, there are pros and cons to the different ratios – using 4:2:0 saves a lot of space on your memory card, but there is less colour information to use, which may cause problems should you be working with colour-critical processing techniques such as green screen. This might be another factor to consider when choosing a file format: XF-AVC and MP4 (HEVC) use 4:2:2, while MP4 (H.264) uses 4:2:0.
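
To make the sampling patterns concrete, the short sketch below applies 4:2:2 and 4:2:0 sub-sampling to a single 4x2 block of (Y, Cb, Cr) pixels. Real encoders work on whole images and usually filter or average neighbouring chroma values rather than simply copying one, so this is only the principle in miniature.

```python
# Chroma sub-sampling on a single 4x2 block of (Y, Cb, Cr) pixels.
# Copying chroma from one sampled pixel is a simplification of what real
# encoders do, but it shows how much colour information each ratio keeps.

block = [
    [(110, 40, 60), (112, 42, 58), (115, 80, 30), (118, 82, 28)],  # top row
    [(105, 41, 61), (108, 43, 59), (111, 81, 31), (114, 83, 29)],  # bottom row
]

def subsample(block, mode):
    out = [row[:] for row in block]
    for r, row in enumerate(out):
        for c, (y, cb, cr) in enumerate(row):
            if mode == "4:2:2":
                src = block[r][c - c % 2]   # chroma from pixel 0 or 2 of this row
            elif mode == "4:2:0":
                src = block[0][c - c % 2]   # chroma from the top row only
            else:                           # "4:4:4" – keep everything
                src = block[r][c]
            row[c] = (y, src[1], src[2])    # Luma is always kept per pixel
    return out

for mode in ("4:4:4", "4:2:2", "4:2:0"):
    chroma = {(cb, cr) for row in subsample(block, mode) for (_, cb, cr) in row}
    print(f"{mode}: {len(chroma)} distinct chroma values in the block")
# 4:4:4: 8 distinct, 4:2:2: 4 distinct, 4:2:0: 2 distinct
```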

A still from an ungraded video, shot with Canon Log, showing a boat tied to a jetty with greenery in the background, appears washed-out and low-contrast.

Using Canon Log profiles produces video with the maximum dynamic range, but the footage straight from the camera has low contrast and saturation and requires grading.

The same scene of a boat tied to a jetty, shot using Canon Log, after grading, with the colours and tones much richer.

After grading, the colours and tones are much richer. Canon Log footage holds more tonal information that can be utilised in post-production to deliver a far greater tonal range between the darkest and brightest areas.

Using Canon Log

We've seen that different file formats, colour sampling and compression methods, and numerous other settings all affect how much image information is captured in a video. We've also noted that the human eye is particularly sensitive to brightness information, which means that maximising the dynamic range in your footage is arguably the most important factor in the perceived quality of the final output. It's for this reason that Canon developed an additional setting, distinct from the file formats and other settings we have looked at: Canon Log applies a logarithmic tone curve to preserve as much tonal detail as possible in a video file of manageable size. This makes it possible to save footage with extra-wide dynamic range, low noise and generous exposure latitude for easy exposure corrections – for example, when recovering detail from a slightly overexposed sky.

Canon Log was introduced in 2011 with the EOS C300 (now succeeded by the EOS C300 Mark III), and many Canon cameras now offer one or more Canon Log profiles. Whichever version is used, images straight out of the camera have low contrast and saturation, meaning they need to be graded in post-production. This can be done manually or by applying a Look-Up Table (LUT), which can make the process quick and easy. For workflows where even this is too much, such as fast-turnaround broadcast requirements, cameras offer non-log profiles such as BT.709, PQ and HLG, which can deliver a specific look in the footage without the need to grade in post.
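
To illustrate what a logarithmic tone curve does – this is a generic example, not Canon's actual Canon Log transfer function – the sketch below encodes linear scene values logarithmically, so that shadow and midtone detail receives proportionally more of the available code values than a straight linear mapping would give it.

```python
import math

# A generic logarithmic encoding curve – illustrative only, NOT the
# actual Canon Log transfer function.
def log_encode(linear, gain=10.0):
    """Map a linear scene value (0-1) to an encoded value (0-1)."""
    return math.log1p(gain * linear) / math.log1p(gain)

for linear in (0.01, 0.05, 0.18, 0.50, 1.00):  # 0.18 is mid grey
    print(f"linear {linear:.2f} -> encoded {log_encode(linear):.2f}")

# linear 0.01 -> encoded 0.04
# linear 0.05 -> encoded 0.17
# linear 0.18 -> encoded 0.43
# linear 0.50 -> encoded 0.75
# linear 1.00 -> encoded 1.00
# Dark tones occupy far more of the encoded range than in a linear mapping,
# which is why log footage looks flat out of camera but grades so flexibly.
```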

The Custom Picture menu screen of a camera, with Canon Log 3 being selected from a range of options including Canon Log 2, BT.709 Wide DR, PQ and HLG.

Cameras frequently offer more than one Canon Log variant, as well as a choice of other profiles – such as BT.709 Wide DR, PQ and HLG in this case – that might deliver the look you want in your footage without the processing that Canon Log requires in post-production.

A frame of a woman on a monitor, with the left half of the frame low in contrast and saturation but the right half clearer and brighter.

Canon Log profiles give videographers more latitude to grade footage in post-production.

What are the best video formats?

Which video file format is best depends entirely on how you intend to use the footage – there are preferred formats for specific jobs. For example, H.264 is often used as a codec for web streaming because it offers a good balance of efficiency and compatibility. If your end goal is to share your video on web platforms such as YouTube or Facebook, then exporting your footage as an MP4 (H.264) file rather than a .MOV file will create a smaller overall file size with video quality well suited to streaming delivery.

Sometimes the choice of video file format will depend on the circumstances in which you are shooting. For example, if you are shooting 4K but running out of space on an SD card, then you could switch from 4K 50p IPB (Standard) to 4K 50p IPB (Light), which will double your possible recording time on a memory card of the same capacity.

All file formats aim to strike a balance between image quality and file size (and hence recording time), but in extreme circumstances you might need to alter this balance and select a more compressed format, even at the cost of reduced image quality.

Another scenario could be that you are restricted by the post-production software and hardware to which you have access. Slower laptops, for example, may struggle to render high-resolution 4K 50p footage. Knowing you can drop down to a lighter codec, or even a lower resolution such as Full HD if you really have to, will enable you to edit your footage more smoothly – assuming, of course, that the quality of the final video will still meet the standard required.

A user holds a Canon EOS R5 C and inserts a card into one of the two card slots.

Cameras with two card slots, such as the EOS R5 C shown here, are capable of recording high-resolution video to both cards.

The back of a Canon EOS R5 C on a tripod, showing the 2nd Card Rec Functions menu screen.

The formats available vary depending on the camera and the types of card slots, but on the EOS R5 C and other cameras with high-speed card support, you have the option of simultaneously recording an instant backup or proxy at a lower resolution.

Video best practice

Experienced videographers adopt various techniques and workflows for different video needs. One example is creating proxy files when editing footage: software such as Media Encoder produces a low-resolution version of each file, taking the strain off the computer and making editing more efficient. The edits are then applied to the high-res files on export.

With some cameras, including the Canon EOS C70 cinema camera and the XF605 pro camcorder, you can take advantage of the dual SD card slots to not only make an instant backup of your footage as a safety net, but also to record a proxy or a lower resolution version of the footage simultaneously to the second card. On the EOS C70 and the EOS R3, thanks to the high-speed card slots, you can record RAW footage to one card and MP4 simultaneously to the other.

After you've uploaded your footage to your computer, it may be tempting to compress your master files using software such as Handbrake to save space on your hard drive, but doing this will compromise the quality should you wish to edit the files in the future, so it's best avoided.

Matty Graham and Alex Summersby

