Evolution of video editing

Video has always been an excellent way to represent situations, concepts and ideas in a creative yet technical way, turning storytelling into a journey through both creativity and logic.

Video editing took its first steps around 1958, when filmmakers physically cut and spliced pieces of tape to remove or swap scenes. By the 1970s, with the mass production of VTRs (video tape recorders, devices that recorded moving images on magnetic tape) and the spread of video post-production, so-called linear editing became the method of choice.

Linear editing was the first editing system to appear, but it had many limitations that made the work arduous. It offered three modes of operation: Playrec, Assemble and Insert.

Playrec was popular because it was far simpler than the other two modes: it was simply a matter of pressing Play and Rec, and the audio and video were recorded as recognizable information on oblique tracks, which allowed a series of subsequent alterations. Insert and Assemble, lacking so-called sync support, could not record onto a blank tape, as there were no existing tracks to lock onto; Playrec, by contrast, was the primary operation in any situation and required only about five seconds of synchronization. Even so, the linear editing method had many drawbacks.

In Insert mode, for example, helical recording allowed one image to replace another without generating those moments of gray “snow” or “rain” descending across the screen; in other words, breaks were avoided. On the other hand, if the exact duration of the insert was not known, part of the subsequent images would be corrupted with no possibility of recovery.

Assemble mode had its own serious drawback. Because the heads record vertically while the tracks are oblique, this method preserved the integrity of the recording at the start of an edit but corrupted it at the end, producing the same snow/rain effect and leaving the previous recording visible only in a choppy way.

Not only could the master tape be damaged and deteriorate, but each copy also lost roughly 8% of its quality compared to the original. Add to this the inability to adjust the duration of sequences, whether to lengthen or shorten them, and it is easy to see why this kind of editing was slow and rigid work.

It was around 1980 that, driven by these limitations, video editing began to leave its old physical supports behind and enter the digital age, where editing and post-production met for the first time. The linear system was replaced by non-linear editing and became obsolete with the advent of digital editing.

Nonlinear editing, or random-access nonlinear editing, refers to a system in which an edit or montage sequence can be extended or shortened without altering or damaging the sequences that follow it. It is called random access because it allows the editor to manipulate any part of the raw footage without necessarily having to work through the preceding sequences first. This lightened editing and post-production work enormously.
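
To make the contrast with linear tape concrete, here is a minimal, hypothetical sketch in Python of a timeline as an ordered list of clip references; the Clip structure and file names are invented for illustration and do not correspond to any particular editing system.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str    # name of the raw media file
    start: float   # in-point within the source, in seconds
    end: float     # out-point within the source, in seconds

    def duration(self) -> float:
        return self.end - self.start

# A timeline is just an ordered list of clip references into the raw footage.
timeline = [
    Clip("interview.mov", 12.0, 34.0),
    Clip("broll.mov", 0.0, 8.0),
    Clip("interview.mov", 60.0, 95.0),
]

# Random access: insert a new clip in the middle without re-copying or
# damaging anything that comes after it, unlike tape-based linear editing.
timeline.insert(1, Clip("cutaway.mov", 3.0, 6.5))

# Shorten a clip by moving its out-point; the clips that follow simply shift.
timeline[2].end = 5.0

total = sum(clip.duration() for clip in timeline)
print(f"{len(timeline)} clips, total running time {total:.1f}s")
```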

This improved further with the advent of computerization: digital recording and compression algorithms, which significantly reduced the volume of data to be handled and with it the cost of storage, made the difference and marked the start of another great change. Generations of non-linear editing systems followed, first based on videotape, then on laser discs, then on several generations of magnetic disks, and the technology kept evolving with whatever appeared next, such as the MPEG algorithm and systems capable of managing digital media, each with its own pros and cons.

Post-production, on the other hand, is a concept that can be applied in any field that works with audiovisual material (cinema, advertising, photography, audio, etc.). Digital effects (DFX or VFX) must be specifically planned in the technical script, the document into which all the information needed to execute each of the shots of the audiovisual work in question is poured. DFX differ from traditional FX in that they are computer generated, whereas FX are created during filming and do not require post-production.

Some of the techniques that have emerged around this editing phase are:

• Compositing
• Matte painting
• Optical Flow
• Chroma Key (see the sketch after this list)
• Morphing 
• Wire removal
• Motion capture
• Color grading (etalonaje)
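
Of these techniques, chroma keying is perhaps the easiest to illustrate in code. Below is a minimal, hypothetical Python/NumPy sketch of the basic idea only; the frame, background, key color and threshold are all made up for the example, and real keyers add refinements such as soft edges and spill suppression.

```python
import numpy as np

def chroma_key(frame: np.ndarray, background: np.ndarray,
               key_rgb=(0, 255, 0), threshold=120.0) -> np.ndarray:
    """Replace pixels near key_rgb in `frame` with pixels from `background`.

    Both images are H x W x 3 uint8 arrays of the same shape.
    """
    key = np.array(key_rgb, dtype=np.float32)
    # Distance of every pixel from the key color.
    distance = np.linalg.norm(frame.astype(np.float32) - key, axis=-1)
    mask = distance < threshold            # True where the green screen is
    result = frame.copy()
    result[mask] = background[mask]        # show the background there
    return result

if __name__ == "__main__":
    # Entirely synthetic example: a green frame with a gray "subject" square.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[:, :] = (0, 255, 0)                  # green screen
    frame[80:160, 120:200] = (128, 128, 128)   # the subject
    background = np.zeros((240, 320, 3), dtype=np.uint8)
    background[:, :] = (20, 40, 200)           # plain blue backdrop
    composite = chroma_key(frame, background)
    print("composited frame shape:", composite.shape)
```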

This is also how the software tools that help us carry out more detailed work in this phase have emerged. Video post-production comprises a series of processes related to the treatment and editing of the different shots of visual material.

Capture is, of course, the first of these processes, and it depends on the type of material available. Then comes editing, followed by post-production and finally broadcast and/or publication.

Today there are many computer programs for video editing, and they are often combined with one another to achieve a better result. For example, one of the techniques formerly used to recreate a non-existent creature was to model it physically out of certain materials.

Miniatures might even be filmed and then assembled into a montage to give a feeling of realism. Today all of this can be done entirely with 3D modeling software such as Maya or 3ds Max.

For post-production and special effects, it is customary to use programs such as After Effects. Their complexity varies, but editing and adding special effects has been greatly simplified and streamlined compared to previous years. Yet as old problems are overcome, new ones arise, such as the hardware required to run this software or the need to reduce rendering time as much as possible.

The truth is that no one knows what the future of video editing will look like, since editing and post-production have now come together thanks to computerization, and experts in the field do not agree on what the future holds. Some authors suggest that it is not easy to forecast the next phases of, for example, cinema, given that it is in the midst of strong competition with other audiovisual media. According to Jorge Carrasco, it would be desirable for cinema and television to share an identical editing workflow if both converged on the same format; but they have been rivals from the beginning, and their differing interests have made such convergence difficult to achieve.

Other experts, as well as many people in general, firmly maintain that television is racing against the clock toward its own disappearance. Mobile devices are the main threat to television, since their versatility and far greater potential strip the television set of its purpose and condemn it to extinction.

Read about our video services!