
Film and Animation

Introduction to the what, how and why


Purpose:

  • Improve quality
  • Reduce errors in video production

Focus:

  • Resolutions
  • Interlace vs. Progressive
  • Encoding
  • Quality
  • Detail
  • Check / Recheck

How:

  • History
  • Understand the why

History of Animation

  • Cave paintings
  • Da Vinci (~1510)
  • Thaumatrope (1824)
  • Zoetrope (1832)
  • Flipbook (1868)

History of Animation

cave.jpg

Cave paintings


The earliest attempts at animation can be found in cave paintings, where multiple positions of an animal were drawn superimposed to imply movement. Multiple separate drawings of a single subject were also made to simulate animation.

History of Animation

davinci.jpg

Da Vinci (~1510)


Da Vinci (among others) drew multiple frames of an animation on a single sheet of paper to give the impression of rotating around an object.

History of Animation

thaumatrope.jpg

Thaumatrope (1824)


Two images, when spun, create the illusion of a monkey in a cage. This is not really animation in the traditional sense, but it did give rise to the idea that showing images in rapid succession could create the illusion of something else.

History of Animation

zoetrope.jpg

Zoetrope (1832)


Based on the principle of persistence of vision, this is probably the earliest animation device. The slits in the device act as a kind of strobe, so you only see one image at a time. You can still see these devices in many museums.

History of Animation

Zoetrope (2000)


Disney used a modern variation on this mechanism for the above 3D zoetrope. Instead of slits, it uses an actual stroboscope as the strobe.

History of Animation

Flipbook (1868)


Everyone knows flipbooks; most people here will have made one at some point in their lives. They served as the medium for the first lengthier animations and, combined with photography, also for the first films. One could buy booklets with animated stories.

History of Animation

magiclantern.jpg

Magic Lantern (1671)


The magic lantern, also known as Laterna Magica, was basically an early version of the slide projector. The slides were painted on glass using translucent inks, and later also contained photographs. Sometimes these slides had add-ons in which an extra glass panel would slide into view. A famous one was The Rat Eater: a man asleep, over whose bed a rat would walk before entering his mouth. A projector of this kind was described in texts as early as the 2nd century in China, but not until 1671 in the West. The concept is still used today, and the optics form the basis for our present-day film projectors.

History of Animation

Film Projector (1888)


The oldest surviving film is a short scene called Roundhay Garden Scene by Louis Le Prince. It was filmed and projected using his own inventions.

Films remained soundless for a long time. Most theaters arranged for musicians to play along with a movie to set the mood, and soon studios were producing complete accompanying scores for these musicians to play. The so-called “talkies” took more time to develop. Since the history of audio in film is quite a large subject, we won’t go into it further here. If you are interested in the development of “talkies”, you should watch the film “Singin’ in the Rain”.

History of Animation

Humorous Phases of Funny Faces (1906)


The animation Humorous Phases of Funny Faces (1906) by J. Stuart Blackton is the earliest known film animation.

History of Television


The development of television is important because it is the origin of things such as screen resolutions, interlacing, aspect ratios, framerates and much more.

History of Television

  • Scanning Disc - Nipkow (1884)
  • Image Dissector - Farnsworth (1927)

It all started with the invention of the scanning disc by Paul Gottlieb Nipkow. Nipkow’s scanning disc was the first method of dissecting an image into a sequential signal that could be reassembled on the other side. Later, Philo Farnsworth (after whom Professor Farnsworth in Futurama was named) invented a more advanced image dissector, which was very important for television as we know it now.

History of Television

nipkowdisc.jpg

Scanning Disc


The first television systems were entirely mechanical. The scanning disc was a disc with holes spaced around it at progressively different distances from its center. Behind the disc was a material called selenium, which gave off an electrical pulse when exposed to light. When a hole passed in front of the aperture, it let light through onto the selenium, which produced a voltage change. Depending on the object in front of the aperture and the position of the hole, more or less voltage was generated.

When the hole exited the aperture, one so-called scan-line had been transmitted. Then the next hole would pass into the aperture, slightly lower and closer to the disc’s center.

On the receiving side, the signal was used to control the intensity of a lightbulb, in front of which another disc rotated, letting light through and reconstructing the image.

Later, cathode ray tubes were used at the receiving end; the scan-lines of the scanning disc were recreated using electromagnets. The disc, however, was still used to transmit the images.

Mechanical Television

  • Newspaper / Still Images (~1900)
  • Telephonevision / Baird (1925)
  • Television Broadcasts (1928-1939)

The Nipkow disc was used as early as the first decade of the 20th century to transmit images over telephone lines as a service for newspapers. These were, however, only static images.

The first implementations of a live television system were intended for use as a telephone-vision system (something which still hasn’t quite caught on). It was first demonstrated by John Logie Baird: at first only silhouettes (duotone) in 1925, later halftone black-and-white images, first demonstrated to the public in 1926. The resolution, however, was very low, with only 30 scan-lines per frame.

The US started broadcasting on July 2nd, 1928, from an experimental broadcast station (W3XK in Maryland, near Washington DC). This was duotone, meaning only silhouettes could be made out.

The BBC started experimental broadcasts in September 1929, with regular programming commencing in 1930, using Baird’s 30-line system until 1935. It then upgraded to Baird’s newer 240-line system (which was still mechanical) and in November 1936 started a dual-system service that also used the new 405-line electronic system by Marconi-EMI, the world’s first regular “high definition” television system.

Electronic Television

  • Image Dissector (1927)
  • Vladimir Zworykin (1931)
  • Broadcasts (1936)

Philo Farnsworth developed the Image Dissector in 1927, the first electronic image rasterizer. It was based on a material that gives off electrons when lit. Using electromagnets, the electrons were directed through an aperture in such a way that only one point of the image got through at any time. The measured number of electrons corresponded to the light intensity at that point.

There was only one problem with the device: because only one point’s worth of electrons was allowed through the aperture at a time, most of the electrons were wasted. To get enough electrons for a clear signal, you needed a lot of light.

Vladimir Zworykin developed a method of using capacitors to store those electrons so they could be collected the next time the scan-line passed over them, giving much better light efficiency. At RCA he developed the first successful electronic camera tube. Farnsworth, however, believed it infringed a patent he owned on the Image Dissector. The courts agreed with him, and RCA lost in court. RCA then licensed the technology from Farnsworth for $1 million (USD), which adjusted for inflation is equivalent to around $13.8 million today.

Zworykin’s idea was used in Britain to design the Marconi-EMI Emitron tube cameras for the BBC. On November 2nd, 1936, a 405-line service started using these new cameras. The signal was broadcast from a mast on Alexandra Palace, which is still in use today.

At this time, however, there was still great variation in standards; the French, for example, later used an 819-line system. Standards mostly converged around the time of color TV.

Color Television US

  • NTSC B&W
  • 525-line / 486 (~480) visible
  • interlaced

Before World War II, the National Television System Committee (NTSC) defined the first NTSC standard. This was a black-and-white standard with 525 lines of picture (of which 486 were visible) at 30 frames per second.

Due to difficulties in timing and the decay of light on the cathode ray tube, the image was not traced from top to bottom in one go, but half of the lines first and then the other half: the image was divided into two fields. While the glow of the first field was still decaying, the second field was being projected, providing a constant light intensity. The recording technology used the same method, which caused a natural time delay between the fields: since the second field was scanned slightly later, it also contained information from a slightly later point in time. This resulted in pseudo 60-frames-per-second video instead of 30fps per full 525-line frame.

Interlace

interlace.jpg


The above diagram explains interlaced video pretty well.

Color Television US

  • Mechanical - Bell Laboratories (1929)
  • Electronic - RCA (1940)
  • FCC approval CBS (1950)
  • NTSC approval (1953)
  • 3.1% colour by 1964
  • NBC catalyst (1965)

Bell Laboratories first demonstrated a working color television system in June 1929 by using three complete mechanical television systems and combining their output using mirrors. It used three mechanical cameras with color filters and three mechanical television sets with colored lights.

RCA developed the first electronic color TV in February 1940, again by optically combining multiple images from complete electronic systems.

CBS was also experimenting with color television. In August 1940 they demonstrated a partly mechanical, partly electronic system: the camera had a disc with red, green and blue filters spinning past it at 1200 revolutions per minute, and the same happened on the receiver side.

After World War 2, three color systems were in the race for FCC standards approval: CBS, RCA and CTI. The CTI and CBS systems were incompatible with the existing NTSC standard. During the campaign for approval, CBS started showing color demonstrations to the public, offering regular color programming for one hour a week.

The FCC found the systems by RCA and CTI fraught with technical problems, inaccurate color, and expensive equipment. The CBS system was approved in 1950, with the first network broadcasts starting in 1951.

However, the CBS system was not compatible with NTSC, and there was a great lack of color receivers. In an attempt to turn the tide, CBS bought a television maker and started production of a color set in 1951; 200 sets were shipped, of which 100 were sold. CBS decided to pull the plug at the end of 1951 and tried to buy back as many sets as it could.

Meanwhile, the NTSC had been working on a color system that was compatible with existing black-and-white sets. CBS officially testified before Congress in 1953 that it had no further plans for its color system, after which the NTSC submitted its system to the FCC. It was approved later in 1953 as the official national standard for color television.

NTSC color broadcasts started in 1954. RCA was at that time the manufacturer of the most successful line of color sets and by 1959 the only remaining major manufacturer. The CBS and ABC networks were not affiliated with any manufacturer and were not eager to promote their competitor’s network by promoting RCA color television sets. CBS ceased all color programming from 1960 to 1965, and ABC delayed color broadcasts until 1962.

By 1964 only 3.1% of households had color television sets. NBC proved to be the catalyst that got adoption rates up: they announced a prime-time schedule for 1965 almost entirely in color. Sales went up, and most networks followed suit by 1966/67, but color sets did not outsell black-and-white sets until 1972.

Color Television US

  • NTSC (Never The Same Colour)
  • Separate Colour signal

NTSC was, however, criticized for its color reproduction. NTSC used a separate signal for its color encoding to remain compatible with black-and-white sets; black-and-white receivers could simply ignore this extra channel. Timing issues in this extra signal, however, sometimes resulted in hue shift (the color being slightly out of place compared to the luminance). The lower bandwidth of the channel carrying the color signal also results in lower precision.

Color Television EU

  • 625-line / 576 visible (interlaced)
  • Hue error NTSC
  • State-owned television

Europe developed color television much later. But why? And why not just use the NTSC system?

Europe had developed a higher-resolution 625-line system at a lower framerate (25 frames per second). This is linked to the fact that the US uses a 110V/60Hz power system while Europe uses a 220V/50Hz system, which explains the different framerates (30 vs. 25).

This made the NTSC standard incompatible with the existing standard, and adopting it would have forced people to buy new television sets. NTSC was also perceived to be of inferior quality due to its lower resolution and the hue error problems it was infamous for.

Another reason is that most television systems in Europe were state-owned, so there was far less commercial motivation to adopt a color system, whereas in the US all television is corporately owned.

Color Television EU

  • SECAM (1956)
  • PAL (1963)
  • BBC2 (1967)
  • Italy - ISA (1977)

In the 1950s, work started on various colour implementations. The French were first, registering the patent for the SECAM color system in 1956. SECAM is still used in France.

The Germans developed the PAL standard in 1963. PAL stands for Phase Alternating Line. Its color synchronization system was very expensive to implement at the time, so the standard allowed a simpler implementation (PAL-S) at lower cost but lower quality, while the full specification (PAL-D) required an expensive glass delay line. However, by the time PAL broadcasts started, hardware costs had come down substantially, and virtually no PAL-S sets were ever made.

The first regular colour broadcasts in Europe were started by the BBC on BBC2 on July 1st, 1967, using the PAL system, shortly followed by the French (SECAM). PAL spread across the world through the old British, Dutch, Belgian and other colonies; SECAM was adopted in some of the old French colonies.

Italy insisted on developing its own color system, called ISA, but finally gave up and officially adopted PAL in 1977.

It took some time before all broadcasts were in color and everyone had color sets. This means that, even though you might not remember much of it, you probably grew up with black-and-white television yourself to some degree.

Black-and-white TV was still broadcast in most places until somewhere in the eighties.

Digital Era


Welcome to the Digital Era. There is a fundamental difference between TV, which stems from an analogue era with even a mechanical history, and current digital displays.

TV, however, lies at the base of why our current resolutions, framerates, etc. are the way they are.

Digital Era

Analogue vs. Digital


Until the digital era, the notion of a pixel was an almost non-existent one. We only had scanlines (accounting for our vertical resolution); horizontally, each line was just an analogue signal of varying light intensity.

Digital Era

Pixel


Colour first introduced the notion of dividing the horizontal signal into sections (pixels) for specifying colour. This was, however, a “side-effect” of the capture and display mechanism, not an intended “feature”. With the advent of the digital age, the pixel has become much more important.

Digital Era

  • A pixel is not a pixel
  • Pixel Aspect Ratio (PAR)
    • Computers 1:1 (square pixels)
    • VideoCD (352×288 - PAL) 1.09:1
    • CD-i (384×288) 1:1
    • DV (720×576 - PAL) 1.07:1
    • Widescreen (720×576 - PAL) 1.42:1

However, a pixel is not a pixel; to be more precise, one pixel is not just like any other pixel. A pixel is just a piece of screen that has one value (colour, brightness). If I divide a screen of 40×30 cm into 400 columns and 300 lines, I get pixels 1mm wide and 1mm high. If I instead divide it into 400 lines (keeping 400 columns), my pixels will be wider than they are tall. A pixel is not a pixel.

The ratio of a pixel’s width to its height is its Pixel Aspect Ratio, commonly referred to as PAR.

Computers usually use a PAR of 1:1. This is not true for people running 1280×1024, but it is for most other standard resolutions such as 640×480, 800×600, 1024×768, 1600×1200, etc. You can calculate the PAR easily from the pixel resolution and the Display Aspect Ratio (DAR): the pixel width is the display width divided by the horizontal resolution, the pixel height is the display height divided by the vertical resolution, and the PAR is pixel width divided by pixel height. Equivalently, PAR = DAR × (vertical resolution / horizontal resolution).
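
As a minimal sketch of this calculation (plain Python, using only the standard fractions module; the function name is my own), with the PAR values from the slide above:

<code python>
from fractions import Fraction

def pixel_aspect_ratio(width, height, dar):
    # PAR = DAR * (vertical resolution / horizontal resolution)
    return dar * Fraction(height, width)

print(pixel_aspect_ratio(720, 576, Fraction(4, 3)))    # 16/15 ~ 1.07 (PAL DV)
print(pixel_aspect_ratio(720, 480, Fraction(4, 3)))    # 8/9 ~ 0.89 (NTSC DV; the standard rounds to 0.9)
print(pixel_aspect_ratio(720, 576, Fraction(16, 9)))   # 64/45 ~ 1.42 (PAL widescreen)
print(pixel_aspect_ratio(1024, 576, Fraction(16, 9)))  # 1 (square pixels)
</code>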

One of the first digital video mediums was not as you might expect the LaserDisc. LaserDisc was an analogue medium.

VideoCD, however, was digital. It had a resolution of 352×288 (PAL) or 352×240 (NTSC) and no interlacing. On display, each line was simply doubled: each pixel was doubled in height and width, resulting in a resolution of 704×576, which is why you sometimes still see the number 704 floating around.

CD-i by Philips was mostly compatible with the VCD standard but also allowed a slightly higher resolution of 384×288, which on a 4:3 screen gives a PAR of 1:1.

DV has been defined as being 720 pixels wide; however, some DV cameras also support 768, as do most DVD players nowadays. 720 is preferred, though, because not all software and hardware handle 768 properly.

  • With 720×576 for PAL this gives a PAR of 1.066666 to 1 (1.07)
  • With 720×480 for NTSC this gives a PAR of 0.9

Widescreen content is still restricted to the DV specification. This means the content is still 720 pixels wide and, due to how television works, there are still 576 lines transmitted for PAL (or 480 for NTSC). When displayed on a 16:9 screen, the image is stretched to fill the screen and the pixels are much wider than they are tall. This is also called anamorphic widescreen.

What does it all mean?

  • 4:3
  • 16:9

When preparing content for 4:3, this doesn’t have a whole lot of impact. The difference in PAR between a 4:3 TV and a 4:3 computer screen (1.07:1 vs. 1:1) isn’t that big. It is only slightly noticeable, but should be prevented if possible (e.g. set the correct PAR in Max); mainly round objects appear slightly skinny and oval. Some effects in processing software don’t work well with non-1:1 PARs, but for 4:3 this too is hardly noticeable.

16:9, however, requires an alternate workflow. When displayed on a 1:1 PAR screen (with aspect ratio correction on), the 720×576 resolution effectively gets blown up to 1024×576, which is why this resolution is often used for rendering: it is easy to work with (PAR 1:1) and to process later.

When rendering at 1024×576, we later downscale to 720×576 and set the DAR to 16:9 (Method 1). Alternatively, we can render at 720×405 and upscale to 720×576, which on screen will be expanded to 1024 again (Method 2).

Method 1 results in slight horizontal quality loss. Method 2 results in vertical quality loss, which is significantly more noticeable than Method 1.

A third option is rendering directly at 720×576. However, this is only an option if all software in the production chain supports it properly and can handle PARs other than 1:1. Usually it is easier to just render at 1024×576, though rendering directly at the correct PAR results in greater quality. It is important to check whether the effects you want to layer over your video handle non-1:1 PARs correctly; if not, this is not a worthwhile method, since you would have to upscale to 1024×576 and later downscale again to 720×576, which causes more quality loss than rendering at 1024×576 and scaling only once. A sketch of Method 1 follows below.
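
As a hedged sketch of Method 1 (assuming the Python Pillow imaging library; the filenames are hypothetical):

<code python>
from PIL import Image

# Render at square-pixel 1024x576, then squeeze horizontally to
# anamorphic 720x576 for DV/DVD delivery (PAR becomes ~1.42).
frame = Image.open("render_1024x576.png")             # square-pixel 16:9 render
anamorphic = frame.resize((720, 576), Image.LANCZOS)  # horizontal downscale only
anamorphic.save("frame_720x576_anamorphic.png")
</code>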

What does it all mean?

  • Progressive (Computers)
  • Interlace (TV)
  • Renders (Progressive)
  • DV footage (Interlace)

Computers by nature produce progressive-scan images: the entire image is built up in one go. This is contrary to interlace, where the image is built up in two passes or “fields”.

When we render animations on the computer, this also results in one complete image per frame of the animation (progressive) instead of two half-images with a temporal offset to each other (interlace).

DV footage, however, is almost always interlaced. This combination therefore presents a bit of a problem.

Combining Progressive & Interlace footage


There are two ways to combine progressive and interlaced footage.

Combining Progressive & Interlace footage

  • Convert progressive to interlace
  • Deinterlace
    • After Effects
    • VirtualDub

Converting progressive to interlace can be done using pulldown algorithms. This is mostly used to convert film (24fps progressive) to PAL (50i) or NTSC (60i). PAL 2:2 pulldown plays back each frame in two fields, resulting in a 4% speed increase. Modern conversions often use 2:2:2:2:2:2:2:2:2:2:2:3 pulldown so there is no speed change: eleven frames get two fields and one gets three, so 12 frames fill exactly 25 fields and 24 frames fill one second of 50i. NTSC 2:3 pulldown plays back the first frame in two fields, the second in three, the third in two and the fourth in three, smoothly scaling 4 frames to 5 (see the sketch below). The advantage of this is that motion is very smooth when interlaced. It is also easy to revert this pulldown later, since no data has been destroyed.
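
A minimal sketch of this frame-to-field mapping (plain Python; the frame labels and function name are hypothetical):

<code python>
def pulldown(frames, pattern):
    # Repeat each frame for the number of fields given by the pattern.
    fields = []
    for frame, count in zip(frames, pattern):
        fields.extend([frame] * count)
    return fields

film = ["A", "B", "C", "D"]            # 4 progressive film frames
fields = pulldown(film, [2, 3, 2, 3])  # NTSC 2:3 pulldown -> 10 fields
print(fields)  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# Paired into interlaced frames: AA, BB, BC, CD, DD -> 4 frames become 5.
print([fields[i] + fields[i + 1] for i in range(0, len(fields), 2)])
</code>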

Interlaced footage, however, also has several big disadvantages: scaling is impossible without major artifacts, the image is not as steady as progressive, and effects editing causes temporal artifacts.

Usually the best thing to do is to deinterlace your footage. This converts interlaced footage (back) to progressive. Several algorithms are available. Deinterlacing can result in motion artifacts, but it is generally better than going the other way around, and there are some very high quality deinterlacers available.


Deinterlace


The default deinterlace filter in After Effects is quite good. However, be sure to enable “preserve edges”, which performs higher-quality area-based deinterlacing. If the interlacing artifacts are still really noticeable, there are very good deinterlace filters available for VirtualDub.

http://www.guthspot.se/video/index.htm

Interesting options to note are “guess 3:2 pulldown” and “24Pa”. These relate to interlaced footage that originated from film. More about that later.

Safe zones

Title and Action safe zones


A television has a tube onto which the image is projected that is larger than the visible area. The outer area also shows strong deformation due to the curve of the tube, and the frame of the television overlaps part of it. That is why not the entire resolution is necessarily visible, and the visible area differs between television sets. For that reason there are zones agreed to be guaranteed visible and usable for text: the action safe zone (visible, and with little enough deformation to follow the action) and the title safe zone (visible, and far enough within view to be comfortably readable).

Framerates


We talked earlier about film (3:2 pulldown) and 24Pa pulldown. To understand these, it is important to first discuss the standard framerates.

Framerates

  • 24p - Cinematic film
  • 25p - PAL progressive
  • 30p - NTSC progressive
  • 50p - PAL progressive
  • 60p - NTSC progressive
  • 50i - PAL
  • 60i - NTSC / PAL-M


Most cinema releases are shot at 24 frames per second. Due to the nature of film, this is by definition progressive.

25p is 25fps progressive PAL material. This is supported by the DVD standard and is the way we usually supply our films.

30p is the NTSC progressive standard; not of much importance for us.

50p is double-framerate progressive PAL (720×576), though not widely supported. Most televisions support it in some way or another, but not many DVD players do. So-called progressive DVD players have high-quality realtime deinterlacers that upscale 50i material to 50p for display on capable television hardware. 50p is mainly of interest for HD footage, where it is supported.

60p is the same for NTSC.

50i is regular interlaced PAL video.

60i: PAL nowadays also accommodates 60Hz video to allow easy transfer of US shows to EU TV. Not of much importance for us.

2:3 pulldown framerate conversion


When 24p film footage is converted to interlaced footage for display on TV, a 2:2 pulldown (PAL) or 3:2 pulldown (NTSC) is performed. This pulldown can be reversed without losing quality. This is different from regular deinterlacing, and is only useful for footage originating from film.

PAL reverse 2:2 pulldown can be achieved by simply combining the first and second fields. For PAL DV footage this usually means just ignoring the interlacing.

For NTSC footage, 3:2 means the first 3 fields contain frame 1, the next 2 fields frame 2, the next 3 fields frame 3 and the last 2 fields frame 4. This yields 5 interlaced frames from the original 4 (24 × 5/4 = 30).

Other field orders also exist that give better quality, allow better encoding, or have other advantages, so slight variations on this mechanism exist, primarily 2:3:3:2 pulldown as a variation on 2:3:2:3.

Reverse pulldown is the reversal of this conversion. Since all fields are still intact, restoration of the original signal is possible, as the sketch below illustrates.
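
A minimal sketch of the reversal (plain Python, continuing the hypothetical example from the pulldown sketch above):

<code python>
def reverse_pulldown(fields, pattern):
    # Every field in a group shows the same film frame, so keeping
    # one field per group restores the original 24p frame sequence.
    frames, i = [], 0
    for count in pattern:
        frames.append(fields[i])
        i += count
    return frames

fields = ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]
print(reverse_pulldown(fields, [2, 3, 2, 3]))  # ['A', 'B', 'C', 'D']
</code>

In real footage the two fields of a frame are complementary half-images rather than duplicates, but the principle is the same: no field is destroyed, so the original frames can be reassembled exactly.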

This is what the “guess 3:2 pulldown” and “24Pa pulldown” options are for. It is very unlikely that you will have much to do with this.

HD


Relatively new is the addition of the “HD standard” to all of this.

Initial trials with HD were held as early as the 1940s by the French, who had an 819-line (triple interlaced) black-and-white system in use.

Originally, when electronic television (in contrast to mechanical) was introduced, it was touted as High Definition Television due to its much higher resolution than the then-current 240-line mechanical TV system.

In the 1990s, tests with HD-MAC analogue HDTV failed. Japan is the only country where a commercially successful analogue HDTV system was launched: the system, called “Hi-Vision”, has a 5:3 DAR and an 1125-line image at 60 interlaced fields per second.

HD

  • Digital
  • Double framerate (50/60fps)
  • PAR 1:1 (usually)
  • 576i/p (SD)
  • 720p
  • 1080i/p

Some noteworthy things about HD:

HDTV in its current form is purely digital. It is delivered for decoding by a digital receiver, usually as an MPEG2 or H.264 transport stream. HD-DVD and Blu-ray mostly use either H.264 or VC-1.

HD usually has a pixel aspect ratio of 1:1. Within the standard, SD (Standard Definition) is defined using legacy PARs, and the HD-CAM standard also deviates from 1:1 PAR (1440×1080, 16:9, PAR 1.33).

Standard identifiers for HD are the number of visible scanlines followed by a p or i.

In the HDTV standard, SD is defined as 576i, which equals 720×576 for either 4:3 or 16:9, as explained earlier.

720p means 1280×720 (720 / 9 × 16 = 1280). 720 is not available interlaced.

1080i/p means 1920×1080.

1080i has a framerate of 50/60 fields per second, while 1080p and 720p have 50/60 full frames per second. This means 1080i actually has an effective resolution of 1920×540 per field, which, combined with the problems interlacing usually has on computer-like displays (which LCD and plasma TVs are), usually results in worse quality than 720p!

1080p, of course, is the clear winner.
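
As a small illustrative summary (plain Python; the table is my own shorthand for the identifiers above), including the per-pass line count that explains the 1920×540 figure:

<code python>
HD_MODES = {
    "576i":  (720, 576),
    "720p":  (1280, 720),
    "1080i": (1920, 1080),
    "1080p": (1920, 1080),
}

for name, (w, h) in HD_MODES.items():
    # Interlaced modes scan only half the lines per pass (one field).
    lines = h // 2 if name.endswith("i") else h
    print(f"{name}: {w}x{h} stored, {w}x{lines} per scanned pass")
</code>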

What is your target?


Before deciding at what resolution and with what output method (progressive or interlaced) to create your video, you should first ask: “What is my target?”

What is your target?

  • Web vs. Video
    • Web: multiples of 2, ask designer / builder
    • Video: never deviate from standards
  • 4:3 vs. 16:9
  • CG vs. DV
  • Progressive vs. Interlace
  • Television vs. Computer presentation
  • SD vs. HD

If targeting the Web, you should always use a PAR of 1:1 and request the exact resolution from the designer or web builder of the site the video will be used on. Good guidelines, however: always use a multiple of 2, sometimes a multiple of 8, and ask before doing. A lot of codecs (among them VP6, which is used for Flash video) have serious problems with resolutions that are not divisible by two; a small helper for this follows below.
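
A hedged sketch of snapping a requested size to codec-friendly dimensions (plain Python; the helper name is hypothetical):

<code python>
def snap(value, multiple=2):
    # Round down to the nearest allowed multiple (never below one step).
    return max(multiple, (value // multiple) * multiple)

print(snap(427), snap(240))        # 426 240 -> divisible by 2, safe for most codecs
print(snap(427, 8), snap(240, 8))  # 424 240 -> stricter multiple-of-8 variant
</code>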

When developing for anything else, never deviate from the standards unless specifically told to do so.

Be sure to know whether you are developing for 4:3 or 16:9. Make sure all footage (or as much as possible) is available in the target DAR.

Is the project full CG, full DV, or a combination? Try to get the PARs to match as closely as possible so that as little conversion as possible is required. If you have DV at 1.07 PAR, try to render your content at the same resolution and PAR instead of upscaling the DV or downscaling your rendered content. In general, don’t do unnecessary conversions.

TV or computer presentation? An animation presented on a television via a DVD player is different from one presented on a widescreen LCD from a computer: the DVD will have a PAR of 1.42 (widescreen), the computer 1:1.

Progressive vs. interlace: never output anything interlaced unless you are absolutely sure of what you are doing and have a very good reason for it. Always remember to check all footage you import into any project for interlacing, and use the appropriate deinterlace features.

A computer might benefit from the extra resolution when rendering at 1024×576 or even higher, but a DVD will not. Conversely, a DVD might benefit from interlacing where a computer will not. In general, avoid interlaced video if at all possible.

SD vs. HD: this is usually a specific choice by the client, but it is good to ask. HD means a huge jump in render times, of course, but it also requires extra time for modeling details, and this should be weighed as well.

What is your target?

  • Web - ASK designer / builder for resolution
  • DVD
    • 720×576 25f - PAR 1.07 / 1.42
    • 1024×576 25f - PAR 1
  • HD
    • 1280×720 50f (can use framedoubling)
    • 1920×1080 50f (can use framedoubling)
  • Custom - depends on client

So in short: when developing for the Web, ask the designer and/or builder. I’d prefer you ask the builder; he or she will, or at least should, know about restrictions in the codecs used.

When developing for DVD, the output will always be 720×576 with a DAR of 4:3 or 16:9; however, you may use 1024×576 as an intermediate format for 16:9.

HD footage has a framerate of 50fps, i.e. twice the number of frames. You can also use 25fps, which the playback device will double. You could use 60fps or 30fps, but 50fps is better because it is easy to convert to DVD later as well.

If the client has custom wishes this should be clearly defined upfront.

Quality


Until now we have only talked about history and why today’s standards are the way they are. Another important aspect of preparing content is quality control and quality perception.

Quality

  • Control
  • Perception
  • Conversion
  • Quality of work

Firstly, quality control means checking regularly. In short, it means getting full control over quality, not only of the actual renders but of everything else as well: control the quality of the render output files (not just the render itself, but also e.g. the JPEG quality of the image sequence written to disc); check that framerates match your project (I never want to see double frames due to incorrect framerates EVER again); check the interlace settings in your project; don’t scale unless absolutely necessary; don’t compress unless absolutely necessary. Don’t do anything that reduces quality unless absolutely necessary.

Another important thing is the perception of quality. Good quality is what you perceive as good. We should and must set our standards high.

Another thing that greatly impacts quality cannot always be helped, but try whenever possible: every conversion (framerate, resolution, scale, compression) adversely affects quality and should be avoided whenever possible. If a shot needs to be zoomed in further, try to render it zoomed in further, or, if you don’t know the exact zoom yet, render at a higher resolution so quality doesn’t go down when you scale.

And possibly the most obvious one is the quality of the actual work: how good are your textures, the model, the lighting? I’m not an expert in making these by any means, but I can recognize quality. Pay attention to details like flickering textures, missing anti-aliasing, and light leaks. Again: perception! This also extends to how clean your edit cuts are. Sometimes I see black frames between two scenes because a cut was trimmed too far. I also often see text outside the safe zones; this is fine for a web production, because no pixels will be clipped, but on a DVD it means some text will become illegible. Keep an eye on details. The work should be finished on time, but it should also be of good quality.

Delivery

  • Internal
    • Uncompressed (10-bit)
    • XviD
  • External
    • MPEG1 (possibly future MP4)
    • DVD

All internal delivery of content should be uncompressed. Unless absolutely necessary (which we will come to later), always keep your files free of compression. If possible, this uncompressed file should be 10-bit instead of 8-bit because of the higher colour quality; for this you need to set the colour depth of your project to 16-bit, though.

Sometimes someone in Groningen will need the file you prepared in Amsterdam, and sometimes that must happen within a timeframe in which sending the uncompressed files is not possible. As a last resort you may use XviD with target quantizer 1, which generates the highest-quality XviD possible. This is much worse than uncompressed, but sometimes necessary; try to plan around it so it isn’t needed. On the Wiki you will find a complete explanation of how to set the XviD compression and which version to use.

When delivering to a client, you should always encode your videos into something you know for sure will play smoothly and correctly on their system.

The only file format that plays on any computer (Mac or PC) and any version (from Win95 to Vista) is MPEG1. MPEG2 is not playable on most systems until special codecs are installed; don’t let the fact that you might be able to play it trick you. Only deliver something other than MPEG1 (MP4, H.264, uncompressed, etc.) if the client specifically requests it. Don’t know what format? The answer is MPEG1. Nowadays MP4 is becoming more popular; you can, however, still not assume it is supported (unfortunately). A lot of hardware players support MP4, it is supported by default in OSX, and it is the primary video codec in vodcasts, but it has no out-of-the-box Windows support. The tool we use for MPEG1 encoding is TMPGEnc, and the Wiki contains all the settings you need for a regular encode, including a profile settings file for MPEG1: start TMPGEnc, load the profile, and start encoding right away. The only things you might need to adjust are bitrate and resolution. The bitrate is set to 4000Kbps, but sometimes higher (or lower) is necessary. MPEG1 does not support custom PARs, so any scaling needed to display correctly at PAR 1:1 must be done prior to the encode or in the resize settings in TMPGEnc. These settings are also explained on the Wiki.

Most clients will receive a DVD or DVD image as the end result. This image is usually made in Groningen, but on the Wiki you can find instructions on how to do this yourself. The tools we use for this are Canopus ProCoder 2 and DVD-Lab Pro X.

What you should know


After this presentation you should know some things.

What you should know

  • Output resolution
  • Progressive vs. interlace
  • Quality what to look for and what (not) to do
  • How to deliver content
  • Check/recheck

What output resolution you should use.

That you should never use interlace, and that you should always check your footage before using it in a project.

That you should never convert unnecessarily, and that you should set high standards for quality.

How to output your video for delivery.

Check everything you do. For example: if you send off a video to Groningen, watch it first. Whether you skip through it or watch it fully, make sure you have seen enough to KNOW that someone on the other side won’t watch 10 seconds and comment on some glaring error. It might take you 5 minutes you think you don’t have, but if you send the file to Groningen (which takes an hour) and they find the error, correcting it THEN will take another hour to transfer, meaning that to gain 5 minutes you have wasted an hour towards the deadline.

Wiki

The Wiki is your friend


The Wiki contains this presentation, and later this week it will also contain everything you heard here as an online video for reference. It also contains even more info about resolutions and interlacing.

Questions?

Well, not at the moment
