Traditional video editing

To understand how editing and effects work in video post-production, it helps to review how the process was done for years; this in turn explains what is happening inside a computer-based digital video editing system. Traditionally, a video editing suite consisted of tape machines and an electronic edit controller that drove those machines to perform cutting functions. The cutting process works like this: the user selects the relevant video on the source tape, and it is transferred to the record tape, leaving the unwanted material behind on the source. The record tape becomes the master. The edit controller uses timecode numbers on the tape to find the exact point the user has selected for the cut.

If any kind of effect is required, the process becomes more complicated. Because a piece of tape is linear, two source machines are needed to perform a transitional effect between two pieces of video.

Three-machine (transitional) editing system

If the user wishes to perform a transition (a dissolve, wipe or similar effect) between the end of clip A and the beginning of clip B, it cannot be done with two tape machines, because the last part of clip A has to be playing at the same time as the first part of clip B for the transition to contain moving video all the way through. A copy of the tape must therefore be made and loaded into a second source machine. The output of the two source machines is combined by a vision mixer, or switcher, which takes the two pieces of video and performs the transition at the required time; the result is recorded to the record tape. The edit controller drives all of the devices (VTRs, vision mixer) so that each performs the correct function at the correct place, again typically by using timecode numbers.
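The edit controller's job of locating cut points comes down to timecode arithmetic. As a rough illustration (not any particular controller's logic), SMPTE-style HH:MM:SS:FF timecode can be converted to an absolute frame count so that in-points and out-points can be compared and sequenced:

```python
# Illustrative sketch: convert SMPTE-style timecode to absolute frame
# counts. Assumes 25 fps non-drop-frame (PAL); NTSC drop-frame counting
# is more involved and is not handled here.

FPS = 25

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    hours, minutes, seconds, frames = (int(p) for p in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_timecode(total: int, fps: int = FPS) -> str:
    seconds, frames = divmod(total, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# Length of a cut from a chosen in-point to out-point on the source tape:
in_point = timecode_to_frames("00:01:10:05")
out_point = timecode_to_frames("00:01:12:00")
print(out_point - in_point)                      # 45 frames
print(frames_to_timecode(out_point - in_point))  # 00:00:01:20
```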
The traditional three-machine edit suite, also known as an A/B roll system, uses individual devices working in real time to accomplish this process.

Digital video editing

Some years ago, this whole video editing process was moved inside a personal computer, an approach most commonly known as non-linear editing. It was first introduced to give producers and other programme makers direct access to their images, rather than having editors operate complicated edit controller equipment. Digital video editing eliminates the need to constantly wind tapes back and forth to view material, by allowing instantaneous random access from disk. These systems also have a graphical interface designed to get the operator as close to the material as possible, rather like a film editing studio, where the film can be held up to the light to make creative cutting decisions.

This digital editing process has developed, along with the capabilities of standard personal computers and compression methods, to the point where it is now possible to produce professional-level productions on a standard computer, at a quality equivalent to that coming from any tape machine. Quality is no longer an issue, and the functionality has moved on as well: cuts are now normally performed in real time on most digital video editing systems. This allows non-linear editors to be used in the online process rather than the offline.

There are many advantages to doing this inside a computer. The first is that once the video is on disk, it is stored in the digital domain, which means there is no loss of quality while it remains there. Provided the video does not go through any further compression or recompression, it can be duplicated and manipulated without losing the quality at which it was captured.
There are further advantages:

- No tape drop-outs (glitches)
- Instant access to the material
- More control over the image while making decisions
- A higher-quality end product
- Use of open-architecture image manipulation/creation programs (for example, character generation or 3D animation packages) that allow manipulation of the material previously only available from expensive dedicated hardware
- Audio-versus-video delays can be taken care of automatically

So what's the problem?

The problem is that while the end product is good in terms of quality, the trade-off between the old way of doing things and the new is still unbalanced. A typical non-linear editing system today is still based on what is known as "single stream" architecture. This has the same limitation as a traditional two-machine edit suite: if the user wants anything more than basic cuts, a duplicate copy of at least one of the pieces of video material must be made. The only way to do this with "single stream" architecture is to use the computer's own RAM and processor to take each frame, one at a time, from the end of the A clip and the beginning of the B clip, calculate the output, and build a third clip on the hard disk with the result. Each frame has to be decompressed, and the result recompressed for storage on the hard disk. This is known as rendering, and refers to the calculations a computer must perform to produce a finished result.

"Single stream" digital video editing system

The whole process therefore relies on the speed of the personal computer being used. Because personal computers are not designed specifically for video manipulation, which involves processing a large amount of data, rendering can be very slow: 5-20 times real time even for a simple effect such as a dissolve.
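Rendering a dissolve amounts to blending corresponding frames of the two clips with a weight that ramps from A to B, one frame at a time. A minimal sketch of that inner loop, with NumPy arrays standing in for decompressed frames (not any particular product's pipeline):

```python
# Minimal sketch of single-stream dissolve rendering: each output frame
# is a weighted blend of one frame from clip A and one from clip B.
# NumPy arrays stand in for decompressed video frames.
import numpy as np

def render_dissolve(clip_a, clip_b):
    """clip_a, clip_b: equal-length lists of HxWx3 uint8 frames."""
    n = len(clip_a)
    out = []
    for i, (a, b) in enumerate(zip(clip_a, clip_b)):
        t = i / (n - 1) if n > 1 else 1.0            # mix ramps 0 -> 1
        frame = (1.0 - t) * a.astype(np.float32) + t * b.astype(np.float32)
        out.append(frame.astype(np.uint8))           # then recompressed to disk
    return out

# Two dummy 2x2 "clips", one black and one white, five frames each:
black = [np.zeros((2, 2, 3), dtype=np.uint8)] * 5
white = [np.full((2, 2, 3), 255, dtype=np.uint8)] * 5
frames = render_dissolve(black, white)
print([int(f[0, 0, 0]) for f in frames])  # [0, 63, 127, 191, 255]
```

Every frame of the transition passes through this decompress-blend-recompress path on the host CPU, which is why even a plain dissolve renders many times slower than real time.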
What this means for the user is a waiting time that is generally unacceptable to a professional and can be very costly. Some compression/decompression cards use hardware on board the card to accelerate the rendering process. While this can achieve faster results for the most basic transitions, it will never become real time without dedicated video processing. For instance, a dissolve between two video clips at VHS quality may take an accelerated system a few seconds to render, but a page peel with sub-pixel manipulation, anti-aliased edges and broadcast-quality filtering will tax the processing power to the point where it slows considerably.

To give an idea of the power required to create the effect described above in real time: Pinnacle Systems' Genie 3D-DVE PCI card processes video at 16 billion specialised pixel operations per second, while a 150MHz Pentium processor performs around 50 million, and that is for all the functions a personal computer has to perform. Of course, because rendering with the right software allows the creation of anything imaginable, it can always produce the final result, eventually. The downside of specialised hardware is exactly that: it is specialised. If the required effect is not provided by the custom board or chip, the user may have to resort to rendering to produce the desired result.

Dual stream digital video editing

The way to overcome the rendering process for the majority of effects is to provide two streams of uncompressed video simultaneously to a dedicated mixing/effects device so that they can be manipulated in real time. Personal computers are now at the stage where the right components (disk sub-system, processing power) can be put together, relatively inexpensively, to deliver the data rates of two high-quality video streams that can be manipulated to produce a final output.
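The scale of that gap is easy to work out. Taking the quoted figures at face value (a back-of-the-envelope comparison, not a benchmark):

```python
# Back-of-the-envelope comparison using the figures quoted in the text.
dedicated_ops_per_sec = 16e9  # Genie 3D-DVE: specialised pixel operations
cpu_ops_per_sec = 50e6        # Pentium 150MHz, shared with every other task

ratio = dedicated_ops_per_sec / cpu_ops_per_sec
print(ratio)  # 320.0 -> roughly 320x the throughput

# An effect the card runs in real time would therefore cost the CPU on
# the order of 'ratio' seconds per second of video, even in the ideal
# case where every CPU operation went to the effect.
```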
Dual stream systems have been around for a number of years, but the hardware and software configuration required to produce an adequately performing system was expensive enough to exclude most users. Dual stream editing uses a dual-stream compression/decompression card and either a dedicated mixing/effects chip or, in some cases, a separate mixing/effects card to produce real-time edits. Because these dedicated pieces of video processing hardware can have features like any other traditional video device, they can also perform high-quality titling or "keying" in real time. The advantages:

- The user can view the result of an effect or transition instantaneously. This makes accurate editing possible, rather than waiting for something to render, deciding it doesn't work or is too long or short, changing it, and having to render the whole thing again.
- The video is not compressed and decompressed every time a transition or effect is performed, resulting in higher quality. On a dual stream system there is only one compression/decompression cycle: the video is compressed when it is digitised to the hard disks, and decompressed when it is fed into the mixer/effects units and played out. A single stream system needs at least two such cycles: a compression to digitise the video to the hard disk, a decompression for processing in the computer, a recompression of the result, and a further decompression for the final play-out of the production.
- A large clip/file is not created at the end of the production for subsequent play-out. This saves disk space and allows the user to work with the footage at higher data rates (quality). (Some Pinnacle Systems single stream cards overcome this with a feature called INSTANT Video.)
- Titles are very clean and of high quality. One of the biggest complaints about non-linear editors in the past was that the titling was sub-standard.
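The quality argument is about how many lossy compression passes the material survives: each pass discards a little more information, so cycle count compounds. An illustrative model only; the per-pass retention figure here is arbitrary, not a measured M-JPEG property:

```python
# Illustrative only: model each lossy compression pass as keeping a
# fixed fraction of quality. The 0.97 figure is arbitrary, chosen purely
# to show how passes compound; it is not a measured codec property.
RETAIN_PER_COMPRESSION = 0.97  # hypothetical fraction of quality kept

def quality_after(compressions: int,
                  retain: float = RETAIN_PER_COMPRESSION) -> float:
    return retain ** compressions

dual_stream = quality_after(1)    # compressed once, at digitise time
single_stream = quality_after(2)  # digitised, then recompressed after rendering
print(round(dual_stream, 4))      # 0.97
print(round(single_stream, 4))    # 0.9409
```

Whatever the real per-pass loss is, the dual stream path always takes one fewer lossy generation than the single stream path for the same production.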
This is because an M-JPEG system that renders (recompresses) the titles creates artefacts (aliasing) on the edges of the text. With a dual stream system, the titles, and indeed all the graphics, remain uncompressed and are played directly to the output, resulting in the highest-quality titles.

For more information please contact Pinnacle Systems at www.pinnacleys.com

© 1999 Pinnacle Systems. Compiled by Clive Morris.