How The Disturbing Sound of ‘Midsommar’ Was Achieved

‘Midsommar’ is director Ari Aster’s (‘Hereditary’) latest creepy film, and beyond its deeply disturbing narrative, sound is a pivotal part of the film’s overall spooky environment.

An in-depth analysis of the film’s sound lets us shed some light on the places where it plays a major role. For starters, the film’s sound editor and re-recording mixer, Gene Park, used sound to build a remarkable level of emotional tension and discomfort.

In fact, as with the director’s previous film, ‘Hereditary’, the movie starts off unsettling, and that feeling progressively builds into full-blown anxiety. The director’s precise and tasteful use of sound plays a vital role in creating the uncomfortable atmosphere that keeps the story off balance.

For ‘Midsommar’, Aster, Park and their team crafted a soundtrack that puts the audience right in the middle of this awkward commune’s uncanny and deadly festival. Unorthodox sonic choices, such as stripping out almost all ambient nature sounds despite the visible presence of birds and bugs, make the audience focus on the increasingly questionable and upsetting rituals.

Both the director and his sound editor clearly wanted the film to be as immersive as possible, using the full range of the 5.1 surround setup. They seem to have explored the boundaries of what audiences are used to perceiving in films, so the sound effects and sound design are able to raise exactly the emotions the movie wants to provoke at specific moments throughout its runtime.

But there’s more: on careful analysis, you can identify the sound team’s bold use of panning in several scenes, especially the May Queen dance competition. The music pans discreetly around the theater, which implies the team spent a significant amount of time on panning, assessing the picture frame by frame to match the camera moves.

Normally, sound professionals mark exactly where the camera sits within the surround field, and in ‘Midsommar’ you can tell that during specific sequences the music circles around the camera as it follows the talent; moments later, other parts of the score are layered in to make the environment as awkward and disorienting as possible.

Another bold sonic element appears when one of the main characters is on the phone at the very beginning and the sound apparently collapses to mono, though that has yet to be confirmed. What you can definitely tell is that the sound team wanted to use the film’s full dynamic range, which is why several dialogue scenes feel as if every sound element has been sucked out, leaving a really quiet yet highly noticeable room tone that is felt more than heard.

Taking the sound down that far is what ultimately gives the movie its disturbing nature, and what draws the audience in even more. The same technique can be felt and heard in the scene where they sit around the tree and start tripping. Essentially, the film follows a simple rule: the quieter the backdrop, the more room the sound team has for individual sounds and a wider dynamic range.

Another fact that makes the film’s sound even more impressive is that the commune sits right in the middle of nature, so you expect to hear bugs and birds; yet during the vast majority of exterior scenes there is almost no ambient sound. As mentioned above, this is another bold move from the sound team: by reducing ambient noise during key scenes, they make the audience focus on the main characters.


But how else did sound play a major role in the film’s narrative and the environment? A lot of the sounds that can be found in the film were recreated with foley, which is what allows sound professionals to bring in specific sounds for specific actions. 

These sounds contribute to the unsettling feeling by isolating the characters in a way: you can tell that even though everybody knows each other, everyone is doing their own thing. Judging by the film’s overall sound, the director wanted to create the idea that the commune knew exactly what it was doing, while the visitors remained unaware of what they had really been brought there for.

Finally, since the film was shot in Hungary near an airport, it is likely that several rounds of ADR were carried out, as boom recordings are nearly unusable with noises such as airplanes passing nearby. If so, the combination of ADR, foley and lav mics is what ultimately supports the film’s narrative.


‘Spider-Man: Into the Spider-Verse’ and The Eye-Catching Sound Elements

‘Spider-Man: Into the Spider-Verse’ wasn’t an ordinary Spider-Man movie, not only for its unorthodox story but also for its remarkable and truly outstanding sound. The filmmakers took the story we’re all familiar with and turned it upside down a bit, letting the audience know that the character is more than just some random friendly guy who wears a mask in your neighborhood.

The film follows a teenager from Brooklyn, Miles Morales, who is struggling with everything a teenager goes through, in addition to the fact that he is, well, Spider-Man. Audio played no less than a pivotal role in this film: Sony supervising sound editors Geoff Rubay and Curt Schulkey won an MPSE Award earlier this year for Outstanding Achievement in Sound Editing for their work on the film. So, what’s so special about this particular movie sound-wise? Let’s find out.

Sound Elements

The movie is fun and clever; it definitely has an attitude and a defined style. It is also really energetic. The sound elements in the movie are as stylistic and eye-catching as the moving images, and they do a great job supporting not only the story but also each character. Given that the movie mixes fantastic and realistic elements, one can tell right from the beginning that the sound works hard to stay true to the incredible nature of the film’s visuals.

The sound editing team also did a great job making the film feel believable despite its rather unrealistic nature; through sound, the team managed to support the story whilst staying away from a common mistake: making everything sound awesome.

The Process

In an interview, Schulkey mentioned that the team assembled about 9.5 months before completion, which allowed them to get neck-deep in the sound design and sound creation processes from the very first studio screening right through to the final version. This way of working allowed both sound professionals and their teams to create and test sound ideas in advance of the images, or even to influence the development of pacing and new imagery.

Given their level of involvement and commitment, the directors allowed them to speculate on what kinds of sounds might suit the upcoming moving images and scenes, which in turn helped the directors shape their visual ideas.

Live Action vs. Animation

Traditionally, when sound professionals work on a live-action film, the vast majority of the imagery has already been shot before they begin their work. Since this project was an animated film, the sound editing team had more creative involvement and, of course, more time to test new things and develop new ideas. In animated films, a high percentage of the project is still in storyboards by the time the sound team is brought to the table, which allows animators to adjust their work to fit the sounds being created. As visual elements develop, sound professionals begin creating layers of sound to support the moving images.


One of the best parts of developing sound for an animated movie is that not a single sound is imposed by the real world, unlike in the vast majority of live-action projects. Normally, in a live-action film, if a dialogue scene is shot on a city street in San Francisco, there is a lot of ambient noise, such as traffic, baked into the dialogue lines.

A director’s main goal is to keep the spontaneity of the talent’s original performance. In animated films like ‘Spider-Man: Into the Spider-Verse’, the sound team didn’t have that problem: sound effects and ambiences were created without location noise built in, making the film feel organic and natural.

Working on animated movies gives sound and audio professionals a bit more freedom, as there is no production track and they can simply add new sounds and new layers of sound to the animations; however, more is not always better.

The Visuals

This film is an animated film with a rather unique visual style. At times, it seems as though the sound editing team played the effects straight, as if they were working on a live-action motion picture; at other times, they removed any notion of reality to set the tone and emphasize the realistic vs. non-realistic tension within the film.

There are a lot of snapping sounds and hard angle turns, all of which support the story because they sit in close proximity to the action. Sound is what makes your eye turn or notice something special, and sound is also what further enhances the image when frames are expanded.

On a final note, both Schulkey and Rubay mentioned that they used Pro Tools for this movie, along with Pitch ’n Time, Envy, reverbs by Exponential Audio, and a wide array of recording rigs and microphones of all sorts.


How to Create Horror Sound Effects That Are Truly Eerie

Horror films are all about sound. Creating that specific eerie atmosphere is only doable in the audio post-production stage. The easiest way to generate a dark soundscape is to start with, for example, a deep, lowpass-filtered synth sound, or a heavily pitch-shifted atmospheric sound. It is really interesting what can be achieved through sound and frequency manipulation for horror and suspense films.
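
To make that concrete, here is a minimal sketch of the “deep, lowpass-filtered synth” idea in Python with NumPy and SciPy. The oscillator frequencies, filter cutoff and output file name are illustrative assumptions, not settings from any particular film.

```python
# Stack slightly detuned low oscillators, then low-pass filter the result.
import numpy as np
from scipy import signal
from scipy.io import wavfile

sr = 44100                                   # sample rate in Hz
t = np.linspace(0, 10.0, sr * 10, endpoint=False)

# Three detuned low sawtooth oscillators beat slowly against each other,
# which already sounds uneasy before any filtering.
drone = sum(signal.sawtooth(2 * np.pi * f * t) for f in (55.0, 55.7, 110.3))

# A 4th-order Butterworth low-pass at 200 Hz removes the buzzy top end,
# leaving only the dark, rumbling fundamentals.
sos = signal.butter(4, 200, btype="lowpass", fs=sr, output="sos")
dark = signal.sosfilt(sos, drone)

dark *= np.linspace(0.0, 1.0, dark.size)     # slow fade-in: the drone creeps up
dark /= np.abs(dark).max()                   # normalize to avoid clipping
wavfile.write("dark_drone.wav", sr, (dark * 32767).astype(np.int16))
```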

Horror sounds are instantly recognizable and can even be classified. Next, we will cover how the vast majority of these sounds are made and which techniques and elements you should use for your upcoming horror film project. From gore sound effects and the foley techniques associated with them to shock sounds and their frequencies, this genre leaves plenty of room for experimentation.

Gore Sound Effects

Achieving or creating gore sounds is, to some extent, quite easy. In fact, although there are many great gore sound effects libraries out there, creating your own gore and splatter sound effects is something every audio professional should do at least once in their career.

There is a myriad of tools for creating the traditional bone-cracking and blood-soaking sound effects, especially during the foley stage, wherein foley artists, as mentioned in a previous article, often use vegetables and other kinds of food to create those sounds.

Let’s talk about bone breaks, for instance. If your film has several scenes where people get their bones broken by some demonic entity, you really want to make it look, but most importantly sound, as real as possible. Traditionally, the sounds associated with bone breaks are crunchy in nature; celery and cabbage, for example, are widely used by foley artists across the world to get this type of sound.
Leaves or sticks also make a realistic bone-snapping sound when broken properly; twisting several sticks and leaves at once gets you a much more brutal type of sound, which is perfect for dismemberment sounds and crushing rib-cage sounds as well.

Behold: The Watermelon

For all blood splattering, blood dripping and spitting sounds, audio professionals prefer to go back to the realm of food, as it is the only place where they can find the number one go-to fruit for all these gory and crunchy sounds: the watermelon.
There is a wide array of great hidden sounds in watermelons: you can crunch, cut and rip the rind apart, which is commonly used for dismemberment and body-opening sounds. Additionally, dropping a watermelon from a considerable height makes it explode once it hits the ground, which is perfect for exploding heads and bodies.

Playing with the actual fruit once it is open is another great source of sounds, more specifically splattering sounds: by grabbing it, whirling it around or simply punching into it, you can achieve outstanding blood-splattering sounds that will make your project feel and sound eerier and more brutal.

When zombies became popular, sound professionals and foley artists had to step up their game if they wanted their projects to truly depict what was going on in the moving images, which mostly involved zombies chewing on people’s flesh.

Biting and chewing the aforementioned foods, as well as apples, cereal and even pudding, is also a great way to achieve these types of sounds. Flesh-eating zombies and monsters sound like celery being chewed, and squeezing tomatoes makes great blood-spurting wounds and arteries.

That said, gore sounds are, as mentioned earlier, relatively easy to achieve, which is why there is so much room for experimentation.

Ghostly Atmospheres

When it comes to ghostly ambiences and atmospheres, we need to leave the food realm for a bit and start talking about actual audio post-production, as the only way to achieve this type of sound is through frequency manipulation; low, mid and high frequencies are what we’re after.


In the editing stage, there is so much that can be done to alter and customize natural sounds like wood and door creaks, metal clanging, and more to make them sound instantly eerie and surreal. First, start by reversing and pitch shifting those sounds, and then start applying reverb and delay to give them that spooky and phantasmagoric touch.
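
As a rough sketch of that chain, assuming a mono recording such as a door creak: the pitch shift below is the tape-style resampling trick (pitch and duration change together), and a simple feedback delay stands in for a proper reverb. The file names and parameter values are hypothetical.

```python
# Reverse, pitch down, and add a decaying delay tail to an ordinary sound.
import numpy as np
from scipy import signal
from scipy.io import wavfile

sr, creak = wavfile.read("door_creak.wav")        # hypothetical mono file
creak = creak.astype(np.float64)
creak /= np.abs(creak).max()

reversed_creak = creak[::-1]                      # step 1: reverse

# Step 2: pitch down ~7 semitones by stretching the waveform (tape-style).
ratio = 2 ** (-7 / 12)
shifted = signal.resample(reversed_creak, int(reversed_creak.size / ratio))

# Step 3: feedback delay (350 ms, 55% feedback) as a crude reverb tail.
delay = int(0.35 * sr)
out = np.concatenate([shifted, np.zeros(delay * 8)])
for i in range(delay, out.size):
    out[i] += 0.55 * out[i - delay]

out /= np.abs(out).max()
wavfile.write("eerie_creak.wav", sr, (out * 32767).astype(np.int16))
```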

Scary and Shocking Sounds

Jump scares are another staple of horror. What works best for this type of reaction are sounds that progressively approach: loud, high-frequency and, in some cases, distorted, especially if the preceding scene has a calm and quiet atmosphere. This, alongside haunting sounds like dissonant strings, screams or added sub-bass effects, will truly scare people.

Metal hits or any kind of drum for the impact is what really triggers this reaction, and using reverb or any other sustaining sound for the tail is what will keep the emotion going a bit longer.
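
Below is one way to sketch that riser-impact-tail shape in code; the sweep range, burst length and decay time are illustrative guesses rather than measurements from any film.

```python
# Riser (noise + rising chirp, swelling in volume), impact burst, decaying tail.
import numpy as np
from scipy import signal
from scipy.io import wavfile

sr = 44100
rise_t = np.linspace(0, 3.0, sr * 3, endpoint=False)

noise = np.random.randn(rise_t.size)                       # hiss under the sweep
sweep = signal.chirp(rise_t, f0=200, t1=3.0, f1=4000)      # pitch climbs for 3 s
riser = (0.3 * noise + sweep) * (rise_t / 3.0) ** 2        # slow, then sudden swell

hit = 2.0 * np.random.randn(int(0.08 * sr))                # 80 ms impact burst
tail = np.random.randn(sr * 2) * np.exp(-np.linspace(0, 8, sr * 2))  # 2 s tail

scare = np.concatenate([riser, hit, tail])
scare /= np.abs(scare).max()
wavfile.write("jump_scare.wav", sr, (scare * 32767).astype(np.int16))
```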


‘First Man’: The Moon, The Lion, and The Snake

During this year’s Academy Award ceremony, the nominees for Best Sound Editing were ‘First Man’, ‘Bohemian Rhapsody’, ‘Black Panther’, ‘A Quiet Place’ and ‘Roma’. The award, as we already know, went to ‘Bohemian Rhapsody’; however, among the other nominees there are plenty of sound editing stories worth telling. Such is the case of Damien Chazelle’s ‘First Man’, the story of NASA’s mission to land on the moon, focusing on Neil Armstrong and the years 1961 to 1969.

For their work on Chazelle’s ‘La La Land’, sound editors Mildred Iatrou Morgan and Ai-Ling Lee became the first female sound editing duo nominated for an Academy Award, and for their work on this year’s ‘First Man’ both were nominated again.

Sound played a major role in giving the film its intimate feel. It delivers a window into Neil Armstrong’s family life, and a no less intimate sense of the perils associated with space travel. For both sound editors, the film presented unique hurdles and challenges.

Telling a Story Through Sound

One of the most interesting things about this film, according to both sound editors, was that the director provided them at the very beginning with several animated sequences showing specific parts of the film, so they could get an idea of what he had in mind with regard to sound.

Those sequences included several key scenes, such as the opening X-15 flight and the Gemini 8 launch, which, if you watched the movie, play out from the perspective of the astronauts, ignoring what is outside. Thus, the sound had to show the audience not only what they were watching, but also what they were supposed to feel.

To achieve that, both Lee and Morgan focused on building an arc in which the sound moves from real, authentic sources, such as rocket launches and turbulence recorded directly from simulator rides, to unrelated, even abstract sounds: lion roars and snake hisses that were subsequently pitched and manipulated to help develop the sense of danger and anxiety felt by the crew.

Space vs. Earth

When it came to dialogue, Ai-Ling Lee and Mildred Morgan used a different approach for the earth-bound scenes. The film can be described as quiet, intimate, even personal; it is as if you were watching a film enthusiast’s first audiovisual project. Matching that texture to what the moving images show was definitely hard, and it gives the film a deliberately unpolished feel.

Normally, when sound editors edit dialogue on an audiovisual project, their initial mission is to clean it up as much as they can so people hear only the dialogue lines (and not artifacts or stray noises of any sort). In ‘First Man’, the director clearly wanted the earth scenes and all the family scenes to sound like a documentary, which is why you can easily spot the difference.

The Apollo 11 Launch Scene

The sound editing team has mentioned in several interviews that during pre-production, one of the topics under discussion was the Apollo 11 launch. In fact, it is said that Neil Armstrong’s sons, who actually witnessed the launch back in the ’60s, met several times with director Damien Chazelle and told him that they had never heard the sound of the rocket properly recreated in previous films. That became a huge task for the sound editing team, who were responsible for coming up with something as close as possible to the original launch.

Lee and Morgan went through the NASA archives in hope of finding a recording of the launch, but because of how long ago it took place, the audio they came across wasn’t as good as they had hoped. The team then reached out to SpaceX and was lucky enough to attend the launch of the Falcon Heavy. They were allowed to set several microphones directly on the launchpad, a quarter mile away and three miles away, to capture the different layers and characteristics of the sound at different distances.

During the sound editing process, the duo ended up mixing some of the real crackles of the Apollo 11 launch from the NASA archives alongside their own recordings.


The Biggest Challenge: Neil Armstrong and Ryan Gosling

In one interview, both Morgan and Lee asserted that their biggest challenge on ‘First Man’ was dealing with the astronaut’s famous words upon stepping onto the moon: “That’s one small step for man, one giant leap for mankind.” The team was presented with Ryan Gosling’s performances of that line, and they were actually really similar to Armstrong’s; however, the film crew wanted to push them to the point where it could be the real Neil Armstrong speaking in 1969.

The team spent hours working on it: making sure Gosling’s rhythm was exactly the same as Armstrong’s, blending in some of the original static, and pitching some of Gosling’s words and syllables so they could sound as close as the director wanted.


Sound Behind the Scenes: Interstellar

Interstellar is definitely one of Christopher Nolan’s most adventurous and creative pieces of work, and when it comes to sound, the approach for such an experimental film was not an exception.

During an interview with The Hollywood Reporter back in 2014, the director described how he prefers to approach sound for his films. Christopher Nolan decided to approach this area in a highly impressionistic way, a quite unorthodox choice for a mainstream blockbuster like Interstellar; however, five years after its debut, we can say it was the perfect approach for an experimental film.

Nolan described the approach as creative and audacious. And if we take a closer look at how the film’s sound was developed, we can say that, compared to other filmmakers who have approached sound in a similarly bold way, Nolan did a great job.


In previous articles we’ve mentioned the importance of sound when it comes to storytelling: many people, especially sound professionals and directors with a deep knowledge of sound, distance themselves from the idea that you can only achieve clarity through dialogue, especially clarity of emotions and story. And that is a really important takeaway.

If directors really tried to make the most of sound, they would end up working in a more holistic, almost layered way, using all the different elements at their disposal: moving images and sound.

You will probably remember some viewers complaining about the movie’s sound after its premiere in November 2014, claiming that they were unable to properly hear some key dialogue lines. That led to a myriad of conversations about whether the fault lay with the sound mix or with the sound systems in some of the theaters where the film was played. Nolan stepped forward and addressed these questions directly: he said the movie’s sound was exactly as he had envisioned it, and he even praised theaters for presenting it properly.

Aside from his tremendous work, Nolan is also renowned for being a passionate believer that sound is as important as the moving images, which is why he is fond of hearing how his projects sound in actual theaters. During the same interview, Nolan also said he traditionally visits up to seven different theaters across the world just to see how the movie’s sound is performing.

As mentioned earlier, Interstellar caused people to question whether the film’s sound was right. Essentially, when it comes to films like these, it is possible to mix sound in an unconventional way as they did. Of course, that can catch some individuals off guard, but people, in general, will appreciate the experience, which is what happened with Interstellar in the subsequent weeks after it premiered.

The Team Behind the Sounds

The movie’s sound was the product of very tight teamwork among German composer Hans Zimmer, mixers Gary Rizzo and Gregg Landaker, and sound designer Richard King. According to the director himself, they made carefully considered creative decisions, and the movie is full of surprises sound-wise. In fact, there are several moments throughout the film where Nolan decided to use dialogue as a sound effect, which is why, from time to time, it is mixed slightly beneath the other sound effect tracks and elements to emphasize how loud the surrounding sound actually is.

As an example, recall the scene in which Matthew McConaughey drives through the cornfield; it is extremely loud and, to some extent, frightening (Nolan himself rode in the back of the car while the point-of-view shots were filmed). Nolan wanted the audience to experience firsthand how chaotic the situation was by making them feel all of that turbulence through sound.

Another example is when they are in the cockpit and you hear the creaking of the spacecraft. That’s actually a very scary sound, and it was loud enough for people to get immersed into the story and actually feel what space travel might be like. It was definitely all about emphasizing intimate elements. 

The movie is definitely a case study on its own. Nolan also described how sound designer Richard King managed to capture high-quality sounds inside the truck during the scene mentioned above, and then echoed them later in the film, in one of the key spacecraft scenes, in hopes of making it truer to what astronauts experience and hear in real life.


Nolan also resorted to other elements to characterize the different planets the protagonists visit throughout the film, not just with moving images but with sound as well. He stayed away from the traditional layering of sound elements and chose to delineate the planets through recognizable sounds: the water planet is all splashing, in contrast to the ice planet, with its crunching glaciers.


How Sound Design Works: From Early Editing Stages to The Final Mix

Sound is actually half the picture, so it’s just as pivotal to pay attention to the sound editing phase whilst working on an audiovisual project.

One of the most important aspects a sound professional needs to focus on whilst editing a scene is trying to clean up the dialogue as much as possible by applying an EQ filter. As mentioned in other blog posts, there are many techniques that can help audio and sound professionals make the most out of the tracks they’re given during the post-production stage.

For example, if the tracks you’re working on happen to have a lot of hum in the background, you could pull down a low shelf and then adjust the mids to turn the dialogue into a crisper version of the one you received. Additionally, use volume key-framing for more granular, detailed work and to reduce the number of sonic distractions between one shot and the next.
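
Here is a minimal sketch of both moves on a mono dialogue track: a high-pass filter standing in for the low-shelf cut that tames hum, and volume key-framing implemented as a gain envelope interpolated between (time, gain) points. The file name, keyframe times and gains are hypothetical.

```python
# Hum removal plus key-framed gain on a mono dialogue track.
import numpy as np
from scipy import signal
from scipy.io import wavfile

sr, dialog = wavfile.read("scene12_dialogue.wav")   # hypothetical 16-bit mono file
dialog = dialog.astype(np.float64) / 32768.0

# Roll off everything below ~80 Hz, where mains hum and rumble live.
sos = signal.butter(2, 80, btype="highpass", fs=sr, output="sos")
clean = signal.sosfilt(sos, dialog)

# Volume key-frames as (time in seconds, linear gain); the interpolated
# envelope ducks a distracting noise between 4.0 s and 5.5 s.
key_t = [0.0, 4.0, 4.5, 5.5, 6.0]
key_g = [1.0, 1.0, 0.4, 0.4, 1.0]
t = np.arange(clean.size) / sr
clean *= np.interp(t, key_t, key_g)

wavfile.write("scene12_dialogue_clean.wav", sr, (clean * 32767).astype(np.int16))
```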

In short: the whole idea behind the sound post-production process is to create the perfect environment for the story that is being told through the moving images.

Once at least one sequence of moving images has been taken care of, begin to develop the storytelling by adding some atmospheric sounds and sound effects. Of course, this step highly depends on the nature of the story you’re working on, but normally the aforementioned advice can always be applied to some extent.

Atmospheric sound and sound effects will help you smooth out every cut, and they can also improve timing by compensating for what seems to be lacking in the picture. At this stage, it is normally a good idea to alternate between the picture crew’s edit of the moving images and the sound until they meet somewhere in the middle of what the director is expecting.

Pro tip: always pay special attention to the overall pace of the moving images. Remember, sound and audio are a means to an end: creating, enhancing and improving the storytelling while achieving seamless transitions between scenes.

But how do you create the perfect atmosphere and environment for a film? Many audio and sound professionals have an extensive sound effects library, and they constantly experiment with those effects, looking for the perfect atmosphere and environment with one purpose in mind: creating a world off the back of the moving images and the film.

Depending on the nature of each film, you can either create your own signature sounds or resort to your library; however, everything ultimately depends on what the director wants. If he or she wants a story set in an arid environment with no animals, trees, insects or other living species besides human beings, then as a sound professional you will need to create that atmosphere by including, for example, the sounds of different winds and sea waves to develop a compelling environment.


Think of the sea slowly getting louder and the wind gradually gaining speed. That, for example, could be used to add more intensity to a specific scene, and the audience will perceive it as credible and will be guided by their own emotions —which is ultimately the main goal.

Also, if the project includes a soundtrack, you need to be extra careful. Audio professionals traditionally wait until the cut is entirely finished before incorporating music. Let’s take a closer look: if a scene’s pace is adequate, the music will fit right in; however, you cannot compensate for inadequate pacing, or try to create a new pace from scratch, simply by adding music tracks. Either way, the result is far from what the film requires.

Another key area of sound design is sound effects. Normally, before the final mix, sound designers use all the sound effects placed during the editing stage as a reference guide.


A Practical Guide To Understanding The Audio Post-Production Process - Part 1

When talking about audio post-production, one cannot avoid one simple question: what’s the typical process most audio professionals go through when working either with a sound house or by themselves? In fact, is there a predetermined process? Everyone seems to work a little differently; however, there are certainly common steps and some common ground that you, as an audio enthusiast or professional, will encounter when working on an audiovisual project.

Finding Your Location

Today, any film’s soundtrack can be edited and mixed in all sorts of places and facilities. In fact, would it be too much to say that you could have someone do the whole sound job out of a simple sound facility with a handful of rooms and editors? The short answer: it depends.

There are many creative people working in this industry who started off using their own bedroom as their main editing and mix room. If the project you’re currently working on doesn’t require a surround mix and is simply going to be broadcast and heard online, then almost anyone with a basic setup and some experience would be able to take care of your needs.

Conversely, if your documentary is about the Second World War and the audience is going to hear bullets passing over their heads and missiles blowing up battlefields, then you’re going to need a much larger, professional room.

The process of how to find the right sound facility isn’t different from finding people to work with. In this industry, aside from the final pieces of work, which you can use to judge whether someone has skills, word-of-mouth seems to be the best source of candidates. If you’re seeking to gather a team together, ask around. If you remember an audiovisual project where the sound was definitely special, find out who was in charge.

When looking for sound professionals or an audio post-production studio, it is key to find a studio or group of people well known for their creative capabilities. Some might even say that there isn’t too much thought behind the sounds that are required for a film: “If you see a car, make sure I hear the engine once in a while.”

Working with a professional studio and a creative team will take your project to the next level, perhaps questioning the need for the engine sound, as described above, knowing that not hearing it will create another layer within the storytelling. It’s the job of the audio professional and mixer to present you with ideas and different approaches when appropriate. It should be part of a seamless chain of events and discussions and never an argument or a struggle.

In the long term, as a producer or executive, if you want an engine sound you should get your engine sound, but your audio professional might ask you what kind of emotion you’re looking for. Your first contact with a studio should lead to a conversation about the storytelling, the story itself, the style, and the sound needs of your project. The studio should also look at a fine cut of your film; that way they will be able to anticipate how much work will be required to edit all the sounds, create the sound effects, record foley, and so on.

This part of the process is mostly known for raising artistic and practical questions that will certainly help the studio, the team, and you envision a much clearer version of the final cut. For instance, do you want to hear the sound effects under silent footage? Your main audio professional should provide you with a sense of the quality of your production sound in order to determine where the budget should go.

Making The Most Out Of Your Budget

Budget, that famous and no less crucial word. As audio professionals, we always try to find out how much the filmmaking team has set aside for both sound prep and the mix. This is done in hopes of developing a plan that fits your needs as a filmmaker. If the budget isn’t close to what a traditional job would cost, taking into account the project’s needs and running time, then it’s best to state that from the very beginning.


Money is, of course, an important factor to ponder, which is why knowing beforehand how much the filmmaking team is able to invest is crucial for saving time. If, as a filmmaker, you’d rather not say what your budget is, a studio can normally quote a wide price range and explain the scope of each version of the quote. Once the budget has been taken care of, the next step is to get the project into the audio post pipeline.


The Importance Of Mastering

The whole idea behind the mastering stage of the audio post-production process is to make audio sound the best it can across all platforms. Music, to cite just one example, has never been consumed across more platforms, formats, and devices than in the last decade.

In fact, whether you’re recording and mixing at a million-dollar studio or working on soundtracks in your own home studio, you will always need the final quality seal of approval that the mastering stage provides. That way, the resulting sound will be heard the way you, or your client, first envisioned it. A well-executed mastering job makes sound consistent and balanced; without this pivotal part of the audio post-production process, individual tracks might feel disjointed in relation to each other.

The Difference Between Mastering And Mixing

Although mixing and mastering share certain similarities, techniques, and tools, the two processes are often confused when, in reality, they are different. Mixing traditionally refers to what the audio post-production industry calls a “multi-track recording”, whereas mastering is the final touch: the final polish audio professionals apply to the whole mix. Let’s take a closer look:

Mixing

Mixing, as mentioned above, is all about getting all tracks and audio elements to work with each other. If we were talking about mixing a record, the mixing part would be getting individual instruments and voices to work as a song. It’s, essentially, making sure everything is in place.

Once the audio professional deems they have a good mix, it should then easily flow into the mastering process.

Mastering

Now that we have given the mixing process its proper context, think of mastering as the final touch. In fact, there’s no better analogy than to think about both mixing and mastering as a car: mixing would mean getting all the parts working together, and mastering would mean getting the best car wash ever. You certainly want your new car to look as shiny, slick and cool as possible.

Mastering takes a closer look at everything in the mix and makes it sound as it is supposed to sound.

To provide a bit of history on the mastering process: the first mastering engineers emerged in 1948, alongside the birth of the magnetic tape recorder. Before this, there was practically no master copy, as records used to be recorded directly on 10-inch vinyl.

In 1957, stereo vinyl came out, and mastering engineers started thinking of ways to make records sound a bit louder. At the time, loudness was an essential factor for better radio playback and, of course, higher record sales. This marked the birth of the well-known loudness wars, which continue to this day.

Fast forward to 1982, when the compact disc brought a total revolution to the mastering process. Vinyl mastering gave way to the digital era, although many analog tools remained in use. That finally changed in 1989, when the first digital audio workstation (DAW) and the first mastering software appeared, offering a high-end, mind-blowing alternative for the mastering process.

How Is Mastering Carried Out?

Mastering as a sub-phase of the audio and sound post-production process has its own complexity. Here are some of the most traditional techniques involved:

Audio Restoration

First, a mastering engineer or audio professional fixes any possible alterations in the original mix, like unwanted noises, clicks, pops or hiccups. It also helps to correct small mistakes or alterations that can be noticed when the un-mastered mix is amplified.
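
A toy illustration of the click-repair part, not a production de-clicker: flag samples that jump far above the local median level, then patch them by interpolating from their neighbors. The window size and threshold are arbitrary choices.

```python
import numpy as np
from scipy.signal import medfilt

def declick(x, win=31, threshold=6.0):
    """Replace isolated spikes in a mono float signal."""
    smooth = medfilt(x, kernel_size=win)            # local reference level
    residual = np.abs(x - smooth)
    noise_floor = np.median(residual) + 1e-12
    bad = residual > threshold * noise_floor        # samples that stick out
    good_idx = np.flatnonzero(~bad)
    fixed = x.copy()
    fixed[bad] = np.interp(np.flatnonzero(bad), good_idx, x[good_idx])
    return fixed

# Quick check: a sine wave with two injected clicks comes back in range.
sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
sig[[10000, 30000]] = 5.0
assert np.abs(declick(sig)).max() < 1.1
```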

Stereo Enhancement

This technique deals with the spatial (left-to-right) balance of the audio being mastered. When done right, stereo enhancement allows audio professionals to widen the mix, which ultimately allows it to sound bigger and better. Stereo enhancement also helps tighten the center image by keeping the low end focused.
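
The classic way to do this is mid/side processing: convert left/right into a mid (center) signal and a side (width) signal, scale the side up, and convert back. A sketch assuming a float stereo buffer, not a mastering-grade widener:

```python
import numpy as np

def widen(stereo, width=1.3):
    """stereo: (n, 2) float array; width > 1 widens, width < 1 narrows."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2.0      # what both channels share (the center)
    side = (left - right) / 2.0     # what differs between them (the width)
    side *= width                   # the actual enhancement
    # Real wideners often also high-pass `side` so the low end stays
    # mono and centered, which is the "tighten the center image" part.
    out = np.stack([mid + side, mid - side], axis=1)
    return np.clip(out, -1.0, 1.0)
```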


Equalization (EQ)

Equalization or EQing takes care of all spectral imbalances and improves all those elements that are intended to stand out once the mix is amplified. An ideal master is, of course, well-balanced and proportional. This means that no specific frequency range is going to be left sticking out. A well-balanced piece of audio is supposed to sound right and good on any platform or system.

Compression

Compression allows audio professionals and mastering engineers to correct and improve the dynamic range of the mix, keeping louder signals in check while making quieter parts stand out a little bit more. This allows the mix to reach the required level of uniformity.
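
A bare-bones feed-forward compressor makes the idea concrete: follow the signal’s level with an attack/release envelope, then reduce gain above a threshold according to the ratio. The threshold, ratio and time constants below are illustrative.

```python
import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    """Downward compression of a mono float signal."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):       # envelope follower
        a = a_att if s > level else a_rel
        level = a * level + (1.0 - a) * s
        env[i] = level
    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    # Above the threshold, output rises only 1 dB for every `ratio` dB of input.
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)
```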

Loudness

The last stage in the whole mastering process is normally using a special type of compressor called a limiter. This allows audio professionals to set appropriate overall loudness and create a peak ceiling, avoiding any possible clipping that otherwise would lead to distortion.
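
In code, the “peak ceiling” idea reduces to a few lines. A real look-ahead limiter ramps its gain smoothly; this sketch cheats with tanh soft-clipping, so it is only an illustration of the concept, with made-up gain values.

```python
import numpy as np

def limit(x, makeup_db=6.0, ceiling_db=-1.0):
    """Raise overall loudness, then keep every sample under the ceiling."""
    y = x * 10.0 ** (makeup_db / 20.0)      # make-up gain: louder overall
    ceiling = 10.0 ** (ceiling_db / 20.0)   # e.g. -1 dBFS is about 0.89
    return ceiling * np.tanh(y / ceiling)   # output can never exceed the ceiling
```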


How Warner Brothers ended up establishing the sound for the film industry

The sound industry was established after a no less than curious chain of events. Back in 1919, three German inventors, Josef Engl, Joseph Massolle, and Hans Vogt, patented the Tri-Ergon process, a process capable of transforming sound waves into electricity. It was initially used to imprint those waves onto film strips; when played back, a light would shine through the audio strip, converting the light back into electricity and then into sound.

The real issue in all this, however, was the amplification of the sound, which would be tackled by an American inventor who played a pivotal role in the development of radio broadcasting: Dr. Lee de Forest. In 1906, de Forest invented and subsequently patented a device called the Audion tube, an electronic device capable of taking a small signal and amplifying it. The Audion tube was a key piece of technology for radio broadcasting and long-distance telephony.

In 1919, de Forest started to pay special attention to motion pictures. He realized his Audion tube could help films attain a much better degree of amplification. Three years later, in 1922, de Forest took a gamble and designed his own system. He then opened the De Forest Phonofilm Company to produce a series of short sound films in New York City. The technology was well received, and by the middle of 1924, 34 theaters on the American East Coast had been wired for his sound system.

The fact that a considerable number of East Coast theatres had acquired de Forest’s system didn’t pique the interest of Hollywood. He had indeed offered the technology to industry leaders like Carl Laemmle of Universal Pictures and Adolph Zukor of Paramount Pictures; however, they initially saw no reason to complicate the solid and profitable film business by adding a feature as frivolous as sound. But one studio took a gamble: Warner Brothers.

Vitaphone

Vitaphone was a sound-on-disk technology created and patented by Western Electric and Bell Telephone Labs; it used a series of 33⅓ rpm disks. When company officials attempted to get Hollywood’s attention in 1925, they faced the same disinterest that de Forest had, except from one relatively minor studio: Warner Brothers Pictures.

Courtesy of Richie Diesterheft at Flickr.com

In April of 1926, Warner Brothers decided to establish the Vitaphone Corporation with the financial aid of Goldman Sachs, leasing the disk technology from Western Electric for the sum of US $800,000. In the beginning, they wanted to sub-lease it to other studios in hopes of expanding the business.

Warner Brothers never imagined this technology as a tool to produce talking pictures. Instead, they saw it as a way to synchronize musical scores with their own films. To showcase their new acquisition and the feature they had managed to add to their films, Warner Brothers staged a massive US $3,000,000 premiere at the Warners’ Theatre in New York City on August 6, 1926.

The feature film of the premiere was ‘Don Juan’, accompanied by a musical score performed by the New York Philharmonic. The whole project was an outstanding success; some critics even went on to praise it as the eighth wonder of the world, which led the studio to screen the film in several major American cities.

However, despite the tremendous success, industry moguls weren’t sure about spending money on developing sound for the film industry. The entire economic structure of the business would have to be altered to adopt sound: new sound studios would have to be built, expensive new recording equipment installed, theatres wired for sound, and a standard sound system process defined.

Additionally, foreign sales would suffer a drastic drop. At the time, silent films were easily sold overseas; dialogue, however, was a different story, and dubbing into foreign languages was still a thing of the future. Adopting sound would also affect the musicians employed in movie theatres, who would have to be laid off. For all these reasons, Hollywood basically hoped that sound would be a passing novelty, but five major studios decided to take action.

MGM, Paramount, Universal, First National, and the Producers Distributing Corporation signed an agreement called the Big Five Agreement: they would jointly adopt and develop a single sound system if one of the several attempts taking shape alongside the Vitaphone should come to fruition. Meanwhile, Warner Brothers didn’t halt their Vitaphone investments.

Courtesy of Kathy Kimpel at Flickr.com

They announced that all of their 1927 pictures would be recorded and produced with a synchronized musical score. Finally, in April 1927, they built the first sound studio in the world. In May, production would begin on a film that would cement sound’s place in cinema: The Jazz Singer.

Originally, ‘The Jazz Singer’ was supposed to be a silent film with a synchronized Vitaphone musical score, but the protagonist, Al Jolson, improvised some lines halfway into the movie, lines that were recorded and could be heard by the audience. Warner Brothers liked them and left them in. The impact of having spoken lines was enormous: it marked the birth of what we know today as sound for the film industry.

Oscar for Best Sound Mixing and Editing Explained

In this article, we’re going to look at perhaps the two most confusing Oscar categories: Sound Mixing and Sound Editing. If you’re not familiar with the sound and audio post-production landscape, these categories might seem like exactly the same thing; however, there are real differences, and that’s why we often see a movie nominated for both.

The big thing to keep in mind when separating sound editing from sound mixing is that sound editing refers to the recording and creation of all audio except for music: dialogue between characters, sound picked up in whatever location a scene was shot, sound recorded in the studio (ADR and extra lines of dialogue), all those crazy sounds created to mimic animals, vehicles and environmental noises, the foley, and so on.

Sound mixing, on the other hand, is balancing all the sound in the film or the movie. Imagine taking all of the music, all of the audio, all of the dialogue lines, all the sound effects, the sounds going around, etc., and combining them together so they are perceived as balanced and beautiful tracks.
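
Stripped to its arithmetic core, mixing is a weighted sum: each track gets a fader gain in decibels and the balanced result is their sum. A tiny sketch with invented track names:

```python
import numpy as np

def mix(tracks):
    """tracks: list of (samples, gain_db), all equal-length float arrays."""
    out = sum(x * 10.0 ** (g / 20.0) for x, g in tracks)
    peak = np.abs(out).max()
    return out / peak if peak > 1.0 else out    # keep the sum from clipping

# Dialogue on top, effects underneath, score further back (hypothetical stems):
# mixed = mix([(dialogue, 0.0), (sfx, -6.0), (score, -12.0)])
```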

Some people refer to this last category as an ‘audio tiramisu’, as there are layers upon layers of sound that, in the end, compose a beautifully orchestrated whole: layers of what’s happening in a film’s particular scene, in the real realm, and layers of what’s happening around it, as in the spiritual realm.

If you recall The Revenant, the American semi-biographical epic western directed by Alejandro G. Iñárritu and nominated in several Academy Award categories, including both sound editing and sound mixing, the ‘audio tiramisu’ quality of the film’s sound is especially noticeable. In The Revenant, the sound was so perfectly crafted that it was as if two different stories were taking place side by side, and you could only distinguish between them by listening.

When it comes to sound editing, take for example another movie, Mad Max: Fury Road, the 2015 post-apocalyptic action film co-written, produced, and directed by George Miller. The movie contains all of these amazing and great recordings of cars, fire, explosions, the really subtle dialogue, which ultimately creates so much contrast between the action and what the characters were really saying. Max, played by Tom Hardy, was actually really quiet, whereas Imperator Furiosa, played by Charlize Theron, was screaming at the top of her lungs, and all of that happened in the middle of the most frenetic action possible. All the audio was used and mixed at the same time.

Using and mixing all of that audio at the same time was, in reality, a huge achievement. Rumor has it they used up to 2,000 different channels, meaning 2,000 different audio pieces at one time, which is perfectly recognizable in the opening car chase sequence: you can perceive just how much sound is being used. The movie, in the end, managed to mix the dialogue, the quiet dialogue, the effects, the action, the environmental sounds and more, and to use it all together.

The Process Deconstructed

The relationship between sound editing, sound mixing and storytelling, however, is perhaps the cornerstone of the whole audio post-production process. How sound design and sound mixing can be used to help storytelling, specifically in film, is the main question audio technicians strive to answer.


First, they approach both practices thinking about how they can make the tracks sound better, and then about how they can add to the story: making the audio tell the story even when you don’t specifically see what’s going on. In terms of sound design, the whole idea behind this creative process is coming up with key takeaways about the purpose of the scene, and about whether there are specific things that don’t appear in the moving images but are still ‘there’ and need to be told.

After analyzing the scenes in terms of what can be done to improve the general storytelling, audio technicians start to balance the dialogue track by track, which is, of course, a process that takes several hours. Is it necessary to add room tone? Is it necessary to remove it? Those types of questions normally arise during this part of the process. Afterward, the EQ work starts.

EQ is normally the part of the process where audio technicians do a bit of cleanup by adjusting the frequencies of the sounds the audience will hear, so they come through clearer and better. This matters for storytelling because, with an equalizer, audio technicians can add texture to the voices and sounds people will hear, which is of course what the whole storytelling effort is about.



The Sound of An Oscar Nominee: A Star Is Born

Have you ever wondered what it takes to craft a compelling sound? What techniques and technologies sound professionals have used to hit the spotlight and be recognized by the industry? Now that the Oscars are around the corner, a lot of conversations start to arise, especially about the nominees.

In this installment, we’re going to go through the sound of A Star Is Born, as the movie has been nominated for Best Sound Mixing. Steve Morrow, who later offered some behind-the-scenes insights into recording Lady Gaga and Bradley Cooper, was responsible for this part of the audio post-production process alongside Tom Ozanich, Dean Zupancic and Jason Ruder.

In a recent interview, sound mixer Steve Morrow said that both Gaga and Cooper wanted the film to have a particular style of sound: they wanted it to sound like a live concert, which makes sense given that the film shot at live venues like the Glastonbury festival. The request, however, ended up posing a real challenge: “In Glastonbury, we all went in there believing we had almost eight minutes to shoot, but we later found out the festival was actually running late, so they only gave us like three minutes,” Morrow said.

The sound mixing crew later explained that the idea had been to film three songs, but given those circumstances, they decided to play 30 seconds of each. As for the sound mixing process, Morrow also mentioned that the idea at the very beginning was to capture everything live, all the performances, all the singing, which ultimately turned into a Lady Gaga mini show, as the music wasn’t amplified in the recording room.

Such conditions led Morrow to assert that his role on A Star Is Born differed a bit from a more typical production. On a normal set, it is the production’s responsibility to record lines of dialogue while filming all environmental or sound effects that would be happening at the same time during the filming process. During A Star Is Born, Morrow and the rest of the sound mixing crew had to do all that process whilst also recording the band and the live singing, making sure they had captured all the tracks.

After that, the team would hand those tracks to the editorial and the post-production crew. Sound people would then take all that information, mix it down accordingly, and that’s practically what you hear in the film. Nothing else.

As for the trickiest part of the film, filming the live concerts, Morrow took a rather unusual approach to get those tracks. The sound crew had to film twice at real concerts: Stagecoach and Glastonbury. The crew had to take advantage of the time between acts; while Willie Nelson was waiting for his curtain call to come on stage, Morrow and the crew made the most of the eight minutes they initially had to get the tracks.

Image from http://www.astarisbornmovie.net/#/Gallery/

What they would do, according to the mixing crew, which was ultimately different from all the other recordings they carried out in controlled spaces, is they would approach the monitor guy with some equipment and take a feed from the monitor through the mic Bradley Cooper was supposed to use.

Most of the time, they would play back the band through the wedge, the small monitor speaker performers stand in front of at live shows. Morrow and the rest of the mixing crew would then send those playback tracks through so that Bradley Cooper could hear them, but the crowd couldn’t, as they were standing far enough away from the speakers. So, in a nutshell, what they did to record the live concert scenes was have Bradley Cooper sing live while hearing a playback of the instruments through the wedges.

An additional challenge was making sure not to amplify any of those tracks and performances, as Warner Bros. didn’t want the music to be heard by the crowd, in order not to risk losing impact. Such demands forced the mixing crew to mute practically everything as much as they could, which was also different from the way film producers shoot in controlled locations.

Having a big crowd in front makes the process far more challenging: the whole crew, film, picture, sound and so on, has only a few minutes to shoot, which increases the chances of not getting a clean sound. In controlled scenarios, a sound crew normally records up to ten different tracks, whereas in front of a live audience they need not only to keep the tracks from being heard but also to record the live audience for the desired effect.

Dialogue Editing and ADR With Gwen Whittle

If you recall the movies Tron Legacy and Avatar, they both, aside from having received Oscar nominations, have one name in common: Gwen Whittle. Gwen is perhaps one of the top supervising sound editors working today, which is why a lot can be learned from her work.

Gwen also did the sound supervision for both Tomorrowland (starring George Clooney and Hugh Laurie) and Jurassic World (starring Chris Pratt), and although she’s known for overseeing the whole sound editing process, she’s mentioned in several interviews that she’s highly fond of paying special attention to both dialogue editing and ADR sessions, as mentioned in previous articles by Enhanced Media in our blog.

Dialogue editing, as George Lucas noted back in 1999, just before Star Wars: Episode I hit theaters, is a crucial part of the whole sound editing landscape, and, apparently, even within this industry nobody pays enough attention to it. In fact, dialogue editing is the most important part of the process.

So, what’s dialogue editing?

Dialogue editing, if it’s done really well, is, according to Gwen Whittle, unnoticeable: it’s completely invisible, it should not take you out of the movie, and you should pay no attention to it. Imagine going through all the sound from the set, take by take, to get a much closer look at the dialogue captured for a specific scene.

Of course, not all dialogues recorded on the set sound the same —maybe the take was great, the acting was great, the light was great, but suddenly a truck was pulling over and an airplane happened to fly over the crew. It’s practically impossible to recreate that take as there are many aspects involved: air changes, foreign sounds, etc., and no matter how much you try to remove all those background noises, sometimes you need to resort to the ADR stage. In an ADR session, it all comes down to trying to recreate the same conditions that should apply to that particular scene.

Cutting dialogue often poses several challenges to sound editors, and it depends a lot on the picture department. A dialogue editor receives all the production audio from the picture department, everything that was originally shot on set, with each mic on its own track. It's the responsibility of the picture department to isolate each mic on its own track so dialogue editors can do their magic.

On set, the production sound mixer usually records anywhere from one microphone up to eight, sometimes more, but the idea is for each actor to have their own mic, plus at least one or two booms. All of this is passed on to the dialogue editing crew with each track isolated and matched to the moving images, just as the movie is supposed to play.

Once the dialogue editing crew has received the tracks, they listen to them and assess which parts can be used and which parts need to be recreated, organizing which tracks will make it to the next stage. Sometimes, since dialogues can be recorded using two different microphones such as the boom and the talent’s personal mic, sound editors can play with both tracks trying to make the most out of it whilst spotting which parts require an additional ADR session.

If there's a noticeable sound, like a beep, behind someone's voice, a dialogue editor can really get rid of it if they need to; however, that's not always the case, and ADR sessions are a familiar part of the sound editing process. In films with a smaller budget, the dialogue process gets a bit trickier, since the tracks normally aren't passed on isolated to the dialogue editing crew, so they need to tackle whatever hurdles turn up in their tracks. Low-budget films normally include more dialogue, as they don't have the resources to afford fancy sets or fancy visual and sound effects.

Do directors hate ADR?

Well, according to Gwen Whittle, not many directors are fond of ADR. David Fincher, for example, is. ADR is a tool. A powerful tool. And if you’re not afraid to use it, you can really elevate your film because it takes away the things that are distracting you from what’s going on.


Actors and actresses like Meryl Streep love ADR sessions because it's another chance to perform what they just did on set. They see ADR as the opportunity to go in there and try to put a different color on it, and it's another way to approach what the picture crew got in a couple of takes on set. Many things can be fixed, and several lines can even be altered. You can add a different twist to something. In fact, even by adding a breath, you can change the nature of a performance. It's the opportunity for both the talent and directors to hear what they really want to hear.


4 Services That Make Audio Post-Production Collaboration Seamless

Collaboration is not foreign when it comes to audio post-production. In fact, it is what gives studios constructive feedback, ideas, solutions and different perspectives to work on altogether, helping all parties involved produce better pieces of work.

Audio, sound, and video collaboration happens all the time. When it comes to audio and sound, for instance, it has never been easier to write a song with someone on the other side of the world, or to hire a full orchestra or session musicians to record music for a score or an original soundtrack.

In this post, we address some services and other software that make the whole collaboration workflow much easier, but more importantly, productive.

The Audio Hunt

The Audio Hunt is best known for being an online collaboration platform where hundreds of studio owners and audio professionals make their gear available for colleagues to run their tracks through. How does it work? Imagine you want to run your mix through a specific piece of equipment. You will be required to, first, open an account, find the piece of hardware you want to use, start a chat with the vendor, book the job depending on the fare (fares and fees vary depending on what type of hardware/software you want to use), and, finally, wait for the service to be completed so you can download the files.

Pro Tools Cloud Collaboration

Not long ago, Avid introduced Cloud Collaboration for Pro Tools in version 12.5. It allows Pro Tools users to share parts of projects, or the whole project if necessary, with other Pro Tools users around the globe without even having to close the application. It's a rather fancy system that integrates seamlessly across different Pro Tools versions.


Pro Tools Cloud Collaboration gets rid of the traditional audio post-production collaboration process that involved exporting files out of the application and then sharing them on different cloud services for other collaborators and editors to download. Now, versions 12.5 and above allow editors to collaborate with other Pro Tools users in a much quicker and simpler way.

Source Elements Source-Connect

In case you're wondering what Source-Connect is: it's what replaced ISDN. Conceived as an industry-standard replacement, Source-Connect comes with a solid set of features for remote audio recording and monitoring, allowing sound professionals to handle tasks common in the audio post-production industry, such as overdubs, ADR and voice-over, from anywhere in the world, over a decent internet connection, integrated into their digital audio workstations.

Source-Connect works as an application, and it does not require a complex digital audio workstation setup. It allows audio and sound professionals to work directly in the DAW of their preference, which ultimately lets them harness the full set of features the application comes with.

Besides, Source-Connect comes with built-in Pro Tools support and is also compatible with digital audio workstations that support VST plug-ins, including, but not limited to, Cubase, Nuendo and Pyramix.

Audiomovers LISTENTO

Listento allows users to stream low-latency audio from a digital audio workstation (DAW) to a web browser through a plug-in. Imagine having a client who cannot physically visit your studio to listen and give you their insights on the final mix you've developed. By using Listento to play the mix directly from your workstation's master track to the client's browser, you eliminate that complication.

Listento still seems to be under development. One of the things the developers are working on is a built-in chat for communicating with your client, allowing you to move away from third-party messengers such as Skype or Google Hangouts when discussing the intricacies of your mix.

Listento includes several transmission formats, such as:

  • PCM 16-bit

  • PCM 32-bit

  • AAC 128 kbps

  • AAC 192 kbps

  • AAC 256 kbps (macOS only)

  • AAC 320 kbps (macOS only)

Additionally, Listento is a free plug-in; however, in order to use it, sound professionals and audio editors need an Audiomovers subscription to stream audio directly from their digital audio workstations. Luckily, the Audiomovers subscription tiers are quite affordable:

  • Weekly: $3.99

  • Monthly: $9.99

  • Yearly: $99.99

When sharing your files, sign in to your Audiomovers account to both send and receive the live stream, then send your client a link, much as you would share a Google Sheets link. And in case you're still wondering whether you should pay for one of the Audiomovers tiers, the software comes with a one-week free trial.

A final word on collaboration: the fourth industrial revolution has indeed brought many pieces of software and hardware that make it possible for professionals and studios to collaborate. It is nonetheless just as important to nurture the collaborative spirit by being willing to work alongside other professionals in a given workflow. This, of course, demands a more proactive and receptive attitude towards collaboration; otherwise, by not considering other perspectives, the chances of developing and learning something new are lower.


Sound For Documentary

With the emergence of a sheer array of affordable camera recorders, the rising prevalence of mobile phones with decent video cameras, and the ubiquity of social media channels such as YouTube as one of today's major distribution channels, it has never been easier to produce and subsequently share documentary videos. If we take a much closer look at the whole production process, though, it is easy to see that sound is the weakest part of many of these videos. Although it is relatively easy to shoot and record with a camera regardless of its quality, the art of placing a microphone, monitoring, and taking care of levels still remains an ambiguous puzzle compared to the other components of shooting a video documentary.

In today's post, we're going to go through a general outline of practical techniques and an end-to-end guide to the primary tools for recording, editing and mixing sound for documentary audiovisual projects. Whether you are using a mobile phone, a regular video camera, a D-SLR, a prosumer or a professional camcorder for shooting your project, the sound will always be an important part of the storytelling.

There are many ways in which tremendously good results can be achieved with consumer gear in many different circumstances; nonetheless, professional gear comes with extra possibilities. Here are some fundamental concepts directors and documentary producers need to bear in mind every time they take on one of these projects.

Sound as a conveyor of emotions - Picture as a conveyor of information


Think of the scene in Psycho of a woman taking a shower in silence. Now add the famous dissonant violin notes, and you get a whole new experience. That leads us to consider the emotional impact of a project, in this case of one scene in particular. Sound conveys the emotional aspects of your documentary; it's practically the soul of the picture. Paying special attention to sound, both during shooting and afterward in the studio, can make a real difference. Whether you're planning a simple interview with plenty of dialogue or a richer, more designed soundtrack, the human voice is the differentiating factor between an amateur and a professional project.

Microphone placement and noise management are key

The main issue with the vast majority of amateur sound recordings is the excessive presence of ambient and environmental noises from all kinds of sources, and a low sound level relative to the ambient noise. As a result, we’ve all seen how difficult it is to understand the dialogues, which is ultimately detrimental to the intended emotional impact. This common situation is one of the consequences of poor microphone placement. Directors and producers need to learn to listen to the recording and experiment with different microphones and different placement options. It all boils down to getting the microphone as close as practical to the intended sound, and as far away as possible from the extra noise that interacts in a negative way with the whole recording.
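
As a rough rule of thumb (a free-field, point-source idealization; real rooms behave less neatly), direct sound follows the inverse-square law: every halving of the mic-to-source distance buys you about 6 dB of level, while the surrounding ambient noise stays roughly where it was. A minimal sketch of that arithmetic in Python:

```python
import math

def level_change_db(d_old_m: float, d_new_m: float) -> float:
    """Change in direct-sound level when a mic moves from d_old to d_new
    (point source in a free field, inverse-square law)."""
    return 20 * math.log10(d_old_m / d_new_m)

# Moving a mic from 2 m away to 0.5 m away from the speaker:
print(f"{level_change_db(2.0, 0.5):+.1f} dB")  # +12.0 dB more signal over the same noise
```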

Additionally, if the documentary takes place outdoors, the chances of getting unwanted wind noise are high, which is why the use of a windjammer to control wind noise is always a good idea. Regardless of whether you're a professional or an amateur taking on a documentary project, with a little bit of practice and research you can craft outstanding sound recordings, irrespective of whether you're recording with professional gear or your mobile phone.

Monitor your recording

In order to craft a compelling and professional recording, you need to set recording levels properly first: not too soft, so the sound doesn't get lost in the overall noise; not too loud, so you avoid possible distortion. When recording, always monitor the sound you're getting with professional headphones in order to avoid surprises in the edit. When using digital recording devices, it's impossible to record anything beyond full scale, so abstain from crossing this limit; otherwise, the recording will sound hideous, unless your camera or recording device has an automatic gain control to adjust recording levels.
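
To make "full scale" concrete: digital peak levels are measured in dBFS, where 0 dBFS is the highest sample value the device can represent. A minimal sketch, assuming float samples normalized to the range -1.0 to 1.0:

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level in dB relative to digital full scale (0 dBFS)."""
    peak = np.max(np.abs(samples))
    return float("-inf") if peak == 0 else 20 * np.log10(peak)

def looks_clipped(samples: np.ndarray, threshold: float = 0.999) -> bool:
    """Flag takes where the signal slammed into full scale."""
    return bool(np.any(np.abs(samples) >= threshold))

# A healthy dialogue take usually peaks well below 0 dBFS,
# leaving headroom for the unexpected loud moment.
```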

The shotgun myth

There seems to be a myth regarding microphones: some people firmly believe that a shotgun microphone reaches farther than other devices. This is not true. A shotgun microphone simply does not work like a telephoto lens. Sound, unlike light, travels in all directions. Of course, shotgun microphones work; they have their place, and they really come in handy in somewhat noisy environments, especially when you cannot get as close to the person talking as you'd like in an ideal scenario. That being said, shotgun microphones are far from performing magic. What they really do is respond differently to off-axis sound, in terms of reduced level, null points, and coloration. Although they look impressive, plenty of sound professionals and directors choose different types of microphones for their documentary projects.


Mixing Audio For Beginners - Part 3

Here is the third installment of Mixing Audio For Beginners. If you've been following this compilation of the basics and intricacies of sound and audio post-production, we're going to address further topics, taking it from where we left off in the last post. Otherwise, we suggest you start right from the very beginning. So, without further ado, let's continue.

Ambiance

We mentioned last time that when editing dialogues in a studio through ADR, it is no less than pivotal to create the right environment for recording new lines. Every time a sound professional is tasked with re-recording lines and additional dialogue in a studio, they always have to pay special attention to several aspects that, if overlooked, could ruin the pace of the scene. Each dialogue edit inevitably comes with several challenges, like the gaps in the background environmental sound.

There's nothing more unpleasant than listening to a soundtrack where the background ambiance doesn't match the action going on from one scene to the other. This phenomenon is highly common during ADR sessions, which is why, aside from helping the talent match the intensity each shot requires, sound professionals also need to edit the background sounds to fill any possible hole in order for the scene to feel homogeneous.

Ideally, the production sound crew captures room tone at each location; the problem comes when, once production is finished, the audio post-production crew needs to replace dialogue and fill the holes but has little or no room tone to work with. Of course, there are tools to recreate room tone from noise samples taken from existing dialogue recordings; either way, this is one of the most common tasks under the umbrella of audio post-production.
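
To make the idea concrete, here is a toy sketch of filling a gap by looping a clean room-tone sample, crossfading each repeat into the next so the loop point is inaudible. It assumes a mono float recording held in a NumPy array (and a tone longer than the fade); a real dialogue editor would also crossfade into the surrounding production audio:

```python
import numpy as np

def loop_room_tone(tone: np.ndarray, gap_len: int, fade: int = 480) -> np.ndarray:
    """Fill `gap_len` samples of silence by looping `tone`,
    overlap-adding a short linear crossfade at every seam."""
    out = np.zeros(gap_len + len(tone))          # scratch space, trimmed at the end
    ramp = np.linspace(0.0, 1.0, fade)
    pos = 0
    while pos < gap_len:
        chunk = tone.copy()
        chunk[:fade] *= ramp                      # fade in
        chunk[-fade:] *= ramp[::-1]               # fade out
        out[pos : pos + len(chunk)] += chunk      # overlap-add at the seam
        pos += len(tone) - fade                   # advance, overlapping the fades
    return out[:gap_len]
```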

Sound Effects (SFX)


Whether coming across the perfect train collision sound in a library, creating dog footsteps on a Foley session, using synthesizers to craft a compelling spaceship pursuit, or just getting outside with the proper gear to record the sounds of nature, a sound effects session is the perfect opportunity for sound and audio professionals to get creative.

Sound effects libraries are a great source for small, even low-budget, audiovisual projects; however, you definitely must not lean on them in professional films. Some sounds are simply too recognizable, like the same dolphin chirp that plays every single time a movie, ad or TV show shows a dolphin. Major film and TV productions use dedicated teams to craft their own sound effects, which ultimately become as important as the music itself. Think about the lightsaber sounds in any Star Wars movie.

After that, additional sounds can be created during a Foley session. Foley, as discussed in other articles, is the art of generating and crafting sounds in a special room full of, well, junk. This incredible assortment of materials allows foley artists to generate all kinds of sounds, such as slamming doors, footsteps on different types of surfaces, breaking glass, water splashes, etc. Moreover, foley artists recreate these sounds in real time, which is why it is normal to record several takes of the same sound in order to find the one that best fits the scene: they are shown the action on a large screen, and then start using the materials they have at hand to provide the action with realistic sounds. Need the sound of an arm breaking? Twist some celery. Walking in the desert? Use your fists and a bowl of corn starch.

Music

Just as with sound effects libraries, when it comes to music, sound professionals have two choices based on their talks with production: they can either use a royalty-free music library, or they can, alongside music composers, create a score for the film entirely from scratch. Be that as it may, the director and producers are the ones who have the final say over what type of music they want in the project and, perhaps more importantly, where and when music is present throughout the moving images.

Sometimes video editors resort to creating music edits to make a scene more compelling. Other times, it's up to sound professionals to make sure the music truly fits the beat and goes in accordance with what is happening on screen. The trick is to make the accents coincide with the pace of the on-screen images as the director instructed, and to make sure the music starts and ends where and when it's supposed to.
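
The underlying arithmetic is simple: tempo tells you how far apart the accents land in time, and the frame rate converts that spacing into frames, so you know exactly how far to nudge a cue to hit a cut. A small sketch (the 24 fps and 120 BPM values are arbitrary examples):

```python
FPS = 24      # project frame rate
BPM = 120     # tempo of the music cue

seconds_per_beat = 60.0 / BPM             # 0.5 s per beat at 120 BPM
frames_per_beat = seconds_per_beat * FPS  # 12 frames per beat

def beat_to_frame(beat_index: int, cue_start_frame: int = 0) -> float:
    """Frame on which a given beat of the cue lands."""
    return cue_start_frame + beat_index * frames_per_beat

print(frames_per_beat)          # 12.0 -> an accent every 12 frames
print(beat_to_frame(16, 100))   # beat 16 of a cue starting at frame 100 -> frame 292
```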

Mixing

Assembling all the elements mentioned in the first two parts of this mini guide and this article into a DAW timeline and balancing each track and different group of sounds into a homogeneous soundtrack is perhaps where this fine art reaches its pinnacle. Depending on the size of the studio, it is possible to use more than one workstation and different teams working together simultaneously to balance the sheer array of sounds they’ve got to put in place.
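
At its core, that balancing act is just summing stems with per-track gains; everything else (EQ, dynamics, automation) is refinement on top. A toy illustration of the summing itself, assuming equal-length mono float stems held as NumPy arrays:

```python
import numpy as np

def db_to_gain(db: float) -> float:
    """Convert a fader value in dB to a linear gain factor."""
    return 10 ** (db / 20)

def mix(stems: dict[str, np.ndarray], faders_db: dict[str, float]) -> np.ndarray:
    """Sum equal-length mono stems, each scaled by its fader setting."""
    out = np.zeros_like(next(iter(stems.values())), dtype=np.float64)
    for name, audio in stems.items():
        out += audio * db_to_gain(faders_db.get(name, 0.0))
    return out

# Classic film balance: dialogue on top, music and effects tucked underneath.
# mixed = mix({"dial": dial, "music": music, "fx": fx},
#             {"dial": 0.0, "music": -12.0, "fx": -8.0})
```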


Mixing Audio For Beginners - Part 2

In the previous article, we mentioned the importance of establishing an intelligent workflow in your audio production process. As defined by the dictionary, the word workflow means "the sequence of processes through which a piece of work passes from its initial phase to total completion." That definition, of course, maps onto the audio post-production workflow phases, which we can walk through to see how they play out in different types of productions.

Pre-Production

A pre-production meeting is the one that gets you together with the production principals, whether that's the production company, the director, or the advertising agency, before the production starts. If you happen to be invited to this meeting, you can, of course, express your opinions to the production team, which might save them hours of effort. If they seem open to additional creative input, you could help develop the soundtrack at the concept phase. That means your insights on the project can also have some impact on the audio budget, which is always a positive thing. Remember: an hour of proper pre-production will spare you ten hours of possible setbacks.

Production

Makeup artists work their magic, services are consumed, lights are turned on, actors deliver their best performances, video is shot, audio is recorded, computers are used to animate action sequences, etc., and pretty much the whole budget is spent during this phase.

Video Editing

Once the visuals have been recorded and created, the director works with the video editor in charge to pick the best footage and assemble the moving images in a way that tells a compelling story. Once the editing is done, the audio editor or sound engineer receives a finished version of the audiovisual project that, in theory, will not suffer further changes; that's known as "picture lock." This final version of the footage is only reached once the deadlines have been met and the budget for those processes has been spent.

Creating The Audio Session - Importing Data

The video editor is responsible for passing on to audio professionals an AAF or OMF export compiling all the audio edits and additional media, so they can re-create, or create from scratch, their own audio edits. Once sound editors and audio professionals import the files, they will have a much clearer idea of what they've got to do.

At this point, audio editors also import the moving images and the edited video, making sure they are in sync with the audio from the aforementioned exports (AAF and OMF).

Spotting

During this phase, the director or the producer (or both) sit down with the audio professionals to tell them exactly what they want and, more importantly, where they want it. The entire film or video project is played through so the audio professionals can take notes on the dialogue, the sound effects, the score, the music, etc.

Dialogue

Dialogue is perhaps the most important part of the entire soundtrack. Experienced audio editors will always separate dialogue edits into different tracks, one per actor. When audio is recorded on location, the person responsible for recording often captures two different tracks for each actor: a clip-on mic and the boom mic. Once in the studio, the audio professional assesses both tracks and chooses the one that sounds best and is most consistent throughout the entire length of the moving images.

When they come across noise on the dialogue tracks, a common technique sound editors employ is to use noise reduction tools or similar software to repair the audio without compromising the final mix.
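
As one concrete, freely available example of this kind of tool (not necessarily what any particular studio uses), the open-source noisereduce Python package performs spectral gating: you hand it a stretch of noise-only audio as a profile, and it subtracts that signature from the take. A minimal sketch, with hypothetical file names:

```python
# pip install noisereduce soundfile
import noisereduce as nr
import soundfile as sf

# hypothetical file names; both assumed to be mono WAVs at the same sample rate
audio, rate = sf.read("dialogue_take.wav")
noise, _ = sf.read("room_tone_only.wav")   # a noise-only stretch used as the profile

cleaned = nr.reduce_noise(y=audio, sr=rate, y_noise=noise)
sf.write("dialogue_take_cleaned.wav", cleaned, rate)
```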

ADR

We've covered ADR in previous posts, in case you don't know what it means.


If, after using the techniques mentioned above, the audio cannot be repaired with noise reduction software, audio professionals resort to performing ADR.

ADR means having the actors and the talent go to the studio to carry out several tasks, such as:

  • Replace missing audio lines

  • Replace dialogue that couldn’t be saved

  • Provide additional dialogue in case of further plot edits.

Actors have their scenes projected so they can recreate their lines. Normally, a cue is used to make sure they record in sync with what's going on in the film. They also do four or five takes in a row, since the scenes are projected in a loop over and over (hence the word looping). The sound editor or audio professional then picks the best line and the best performance and replaces the original noisy or damaged take with the newer version. In order to match the intended ambiance, sound editors may use the same mic as the original take, but they will likely have to apply further equalization, compression, and reverb to make the new performance match the timbre of the original.


Mixing Audio For Beginners - Part 1

Have you ever wondered why your favorite films or TV shows sound so good? Or why TV ads and commercials are sometimes so much louder than films and TV series? Or why that internet video you like sounds so bad?

In this mini-guide, we want to go through the intricacies commonly associated with the creation of sound, audio, and soundtracks for both video and film. Crafting and mixing audio for film and video is a rather deep subject; covering all the basics would take hundreds of pages, due to the constantly changing nature of this business and the technology involved.

This first part covers basic aspects, a bit of background, some terms and terminology, and hopefully, will serve as a clear guide to understanding what mixing audio for video and moving images is about.

The World Of Audio For Video

Back in the past century, recording engineers would often face a daunting dichotomy: they had to make a career choice between producing music or producing sound and audio for visuals and moving images, such as TV series, ads, films, etc. Since those career paths were considered specialized assignments, they demanded specialized tools to get everything done.

The arrival of computerized digital audio systems in the late 80s made it possible, and definitely much easier, to use the exact same recording tools to produce and edit both music and soundtracks. If you've had any experience with audio post-production, tools and systems such as AVID, NED PostPro and early Pro Tools might ring a bell. That era marked the beginning of a new dynamism in which terms such as convergence (where the worlds of audio and video production intertwine) started to become popular. As a result, many engineers learned to run audio post-production sessions during the day and music sessions at night.

Be that as it may, the process has undoubtedly evolved throughout the years, and the modern and contemporary process of audio post-production has changed more than ever before.

Types Of Audio Post Production

In order to discuss the types of audio post-production, we need to start by making a necessary distinction between audio for picture and other types of soundtracks, like radio commercials, audiobooks or the well-known podcast. Though a lot falls under the umbrella of audio post-production, by audio post-production we commonly mean audio crafted especially for a moving image or a visual component. Here are the most traditional forms:

Television

TV shows can be practically any length, but the vast majority of US TV programs are intended to last between 30 and 60 minutes. Many are produced by highly qualified and experienced TV studios in Los Angeles. As for reality shows, although these can be shot and recorded anywhere, they also require a good and experienced audio post-production team to mix both audio and video in a professional fashion.

Film


Films vary in their nature. Short films can span just a few minutes, whereas longer films can last several hours. This category includes today's productions for Netflix, HBO, and Amazon, as well as the traditional major studios. When talking about film, it is also important to mention the financial aspect: independent filmmakers, known for producing small to no-budget projects, still require a healthy dose of audio post-production. In fact, many sound engineers are fond of taking on these projects, as they serve as the perfect opportunity to get some training prior to taking the big leap.

Commercials

Commercials include several types of visual projects. The term "commercials" often refers to TV commercials, infomercials, ads, promos, political ads, etc. These types of commercials are known for their rather short format; today, it is possible to come across commercials ranging from 5 to 60 seconds in length. There are, of course, much longer commercials; however, it is rather expensive to buy airtime for anything longer than sixty seconds.

Video games

Video games are extremely fun, and crafting audio for video games is even more fun. The vast majority of top-quality games, also known as AAA games, have a dedicated audio post-production team behind them, responsible for creating and capturing the sounds that will be included in the game. This, of course, is absolutely unique to every single game and demands a daunting amount of work, requiring hundreds of audio files, since the game will need soundtracks in different languages, which ultimately increases the number of files the audio team has to manage.

Audio Workflow

The process through which a piece of audio work passes from initiation to completion is known as a workflow. And although we will get into more detail in a subsequent post, a traditional audio workflow comprises the following stages: pre-production, production, video editing, data import, spotting, dialogue, ADR, ambiance, sound effects, music, mixing, delivery, and summary.


ADR: Tips And Tricks

Automated Dialogue Replacement, or ADR, is an essential part of every audiovisual project, and knowing its intricacies is key to becoming a proper filmmaker. ADR, as many people call it, is basically a method of adding dialogue to an already filmed scene. By superimposing dialogue recorded in a studio, or at least in an acoustically treated room or space, filmmakers can get past the challenges commonly associated with location dialogue. The problem with location dialogue is that things often turn out a bit hectic: environmental noises are too loud and difficult to mute, the equipment doesn't work the way it's supposed to, or the crew cannot get the right background noise.

When it comes to films, almost every contemporary Hollywood film has 50% to 70% ADR dialogue. ADR is no less than pivotal for the success of any film, and if executed the right way it can definitely salvage an entire film.

The Basics Of ADR

Before we get into more detail, there are several elements associated with ADR that filmmakers must bear in mind so they can plan and set up their recordings properly. In looping, a repeating playback of a section of the project is given to the talent while their new dialogue is recorded over it. There are two different types of looping: audio looping and visual looping. With the latter, an actor listens to the location take several times to understand the nature of that particular scene and get a feel for the situation prior to recording the new dialogue. Once they're ready to record, they no longer hear the location take but watch the scene to match lip synchronization. They always hear themselves over the monitors, so they can hear the lines they're delivering in real time.

Audio looping, on the other hand, will traditionally produce the most desirable outcome; however, it is normally more time-consuming. The session is carried out the same way as visual looping, except the video monitor is cut and the actor hears the original dialogue track instead. The vast majority of ADR engineers are fond of using both techniques simultaneously. They always break the looped lines into much smaller parts so they don't lose consistency and synchronization. As for synchronization, for a better sync when starting a line, ADR engineers record three beeps exactly one second apart, so actors know when the first line starts. This is known as an audio cue; like a metronome, it lets actors come in at the right moment and in the proper rhythm of the line being recorded.
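
For illustration, here's a minimal sketch of generating such a three-beep cue track; the 48 kHz sample rate, 1 kHz pitch and 100 ms beep length are arbitrary assumptions, not a standard:

```python
import numpy as np

SR = 48_000       # sample rate in Hz (assumed)
PITCH = 1_000     # beep frequency in Hz (assumed)
BEEP_SEC = 0.1    # each beep lasts 100 ms

def three_beep_cue() -> np.ndarray:
    """Three short beeps spaced exactly one second apart; the actor's
    first word lands where the silent fourth beep would fall."""
    t = np.arange(int(SR * BEEP_SEC)) / SR
    beep = 0.5 * np.sin(2 * np.pi * PITCH * t)
    cue = np.zeros(3 * SR)                    # three one-second slots
    for i in range(3):
        cue[i * SR : i * SR + len(beep)] = beep
    return cue
```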

An ADR Recording Space

In sound and audio post-production, filmmakers essentially have more control over audio than they do when recording on location. The basic goal of every audiovisual project is to deliver an experience to the audience, and audio is no exception. When it comes to ADR, the main idea is to capture a really clear and clean recording in an acoustically treated space, so ADR engineers can then sit the dialogue in the mix with proper equalization.

ADR Equipment And Gear


When recording ADR in an acoustically treated space such as an audio post-production studio, sound engineers and ADR professionals often try to use the same microphone the filmmaking unit used on location to capture the original dialogue. The goal of ADR is to convincingly match the new lines to the location lines in both tonal characteristics and frequency response. Since all microphones have different polar patterns and different frequency responses that yield different tonal nuances, it's important not only to use the exact same microphone, or at least a similar one, but also to place it properly so the acoustic character doesn't get lost.

There are several digital audio workstations, such as Pro Tools, Ableton Live, Logic, etc., that can help ADR engineers loop their recordings according to their needs. ADR demands, aside from microphones, other audio production gear and software. A basic ADR toolkit looks like this:

  • Microphones

  • Digital Audio Workstation

  • Headphones

  • Preamp or Interface

  • Video Monitor

Microphone Placement And Delivery

Mic placement depends heavily on what type of microphone is being used. It is key to maintain a certain distance between the mic and the actor or actress to give the recording realism, and some ADR engineers are fond of using filters when deemed necessary. How an actor or actress delivers the line is also pivotal to the success of the recording, as it affects both the delivery itself and the tone of the ADR take. Some actors tend to replicate the same movements being projected in the moving images, as it helps them create the exact mood the filmmaker wants for that specific scene.


6 Tricks For Foley Sound Effects

Foley artists are pivotal to any audiovisual project once it has been shot and edited, as they're responsible for any missing sound. As described in a previous article, a crucial step in the audio post-production process is exactly what foley artists do: perform and create sound effects to match the moving images being projected on the screen.

Common sound effects we hear in movies, for example footsteps, chewing, drinking, clothing movement, doors opening, keys jingling, etc., are created through a range of recording techniques and materials. Foley is more than simply manually editing sounds: it is often more time efficient, and it gives an audiovisual project a much richer character and more realism relative to the other sounds in the film. Whenever a foley artist can't create a sound in the studio, sound designers and sound editors will always be up for the task.

That being said, have you ever wondered what's the best way to mimic or recreate the sound of a fight? The sound of fists going back and forth and hitting another body? How can you recreate the sound of footsteps on a snowy road in a recording studio? What's the best way to mimic a sword fight? Here are some tips for coming up with foley sound effects:

HOUSEHOLD SOUNDS

Wooden Creaks And Floors

People stepping on creaking wood and squeaky floors appear in practically every film you've seen; footsteps on old floors or across an old house porch are perhaps some of the most used sounds in movies. Foley artists have at their disposal a sheer array of floors and objects to recreate them. The advantage of using these accessories is that the sound, in this case the creak or the squeak, can be controlled to some extent. Once foley artists have developed a proper technique, performing these creaks saves the picture a lot of time, as sound editors won't need to edit all those sounds in Pro Tools.

Fire

Fire is another of those sounds that appears in the vast majority of films. Foley artists often resort to accessories such as cellophane, potato chip bags, and even steel wool. The most common technique for recreating fire is to scrunch up the accessory and then release it; the effect will be rather subtle, of course, but when recorded with the mic up close, a convincing low-level fire sound can be achieved.

Cash


Money and stacks of cash have their own sounds as well. Traditionally, whenever a foley artist has to develop the sound of cash, they resort to an old deck of poker cards or book pages. The key to successfully achieving this sound is to use paper sources with flexible, softer textures. In fact, most of the time, foley artists add actual bills in the middle of the paper roll, or on the top or bottom, so their fingers actually brush the bills' surface, creating the sound of cash.

ANIMALS

Horses

Galloping horses are one of those sounds whose technique has remained practically untouched. Foley artists normally use coconut halves to recreate horse hooves, probably the most well-known foley accessory thanks to Monty Python and the Holy Grail. Several foley artists suggest stuffing the half coconut with material such as fabric in order to get a more realistic sound. Then they strike compact dirt, or whatever surface the horse is running on, with the stuffed coconuts.

Bird Wings

Just as with horses, to achieve the sound of birds flapping their wings or taking off, foley artists normally resort to rather ordinary accessories such as a vintage feather duster or a pair of gloves. It's also important to experiment with different materials, perhaps heavier textiles, to create a much thicker sound for larger species. An old feather duster can create a terrific effect if the foley artist finds a nice-sounding one and hits it against all kinds of surfaces and objects to create different sounds.

HUMANS

Inhaling A Cigarette


Ever wondered how films record the sound associated with a cigarette inhale? Foley artists often use saran wrap and other light materials to get this sound. With saran wrap, you can get a sound similar to the fire sound mentioned above, only more subtle. It is produced the same way: compress and then release, but make sure to do it in a controlled way so you don't overdo it. Also make sure the mic is close enough to capture the desired level of subtlety; otherwise, you may end up with a totally different sound.


An Introduction To Decibels

What You Always Wanted To Know About Decibels

Many times in previous articles we've mentioned the word "decibel". Of course, the world of sound and audio basically revolves around decibels. But what does the concept of a decibel really entail? Here is our view on decibels and on how internalizing the concept can be useful if you work as a sound designer, a sound mixer, or anywhere within the audiovisual industry. So, first things first: when it comes to defining decibels, there's no better way to put it than this: decibels are odd units, and there are at least three main reasons for saying so:

Decibels Are A Logarithmic Unit

When it comes to unveiling the intricacies of the definition of a decibel, we first need to mention one of its aspects: a decibel is a logarithmic unit. Our minds are not traditionally fond of logarithmic units, mostly because we've become accustomed to dealing with other types of units, such as distances or weights, which are present in our lives every day. Nonetheless, logarithmic units are highly useful, especially when we want to represent a sheer array of different figures or values.

If we were to take a value and repeatedly scale it up, the resulting figure would quickly get huge on a traditional linear scale, yet stay manageable on a logarithmic one. Why? While moving one step on a linear scale adds a fixed amount, moving one step on a logarithmic scale multiplies by a fixed factor; positions on a log scale correspond to exponents. Thus, starting at 1 and going up five factor-of-ten steps takes us to 100,000, while on a log10 scale that whole journey runs simply from 0 to 5. That is really convenient whenever we want to get the full picture of a set of data ranging from dozens to millions.

Some units simply work fine on the regular linear scale, as we normally move within a rather small range of figures. That's why it's easy for us to measure the distance between cities; but what if we wanted to measure distances between cities across the galaxy? (Assuming, of course, that we're such an advanced civilization that we've managed to find life on other planets.) If we were to use a linear scale to represent the distance between Los Angeles and Orion, we would be staring at a figure like 1,200,000,000,000,000 km, which is undeniably a tough number to look at; on a log10 scale, however, it collapses to roughly 15.1.

The logarithmic scale offers a solution to this issue, since it provides easy-to-grasp figures while covering several orders of magnitude. Like the distances used above as an example, many natural phenomena span several orders of magnitude and are therefore expressed on logarithmic scales: think of earthquakes, pH and, of course, sound and loudness. By using a logarithmic scale to measure and express such events, we get far more manageable models of nature.
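
A quick sketch of how a log scale tames that range (the distances are round figures, with the Orion value taken from the example above):

```python
import math

distances_km = {
    "Los Angeles to New York": 3.9e3,
    "Earth to the Moon": 3.8e5,
    "Earth to the Sun": 1.5e8,
    "Los Angeles to Orion": 1.2e15,   # figure from the text
}

for name, km in distances_km.items():
    print(f"{name}: {km:.1e} km -> log10 = {math.log10(km):.1f}")
# twelve orders of magnitude collapse into values between about 3.6 and 15.1
```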

Decibels Are A Comparative Unit

Having established that decibels are a logarithmic unit, we now have a way to scale and measure vastly different events, ranging from a simple whisper to a rocket take-off. Nevertheless, it's not that simple. Every time we say something is 70 dB, we are not really making a direct measurement; we are, in fact, comparing two different values.

Decibels are the ratio between a specific measured value and a reference value. Simply put: decibels are a comparative unit. Stating that something is 30 dB is as incomplete as saying that something is 30%. We need to specify the reference value we're using; in other words, 30 dB with respect to what? What kind of reference value can we use, then?
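
As a sketch of the arithmetic, using the standard conventions of 10·log10 for power-like quantities and 20·log10 for field quantities such as sound pressure or voltage:

```python
import math

def db_power(value: float, reference: float) -> float:
    """Decibels for power-like quantities (watts, intensity)."""
    return 10 * math.log10(value / reference)

def db_field(value: float, reference: float) -> float:
    """Decibels for field quantities (sound pressure, voltage);
    power goes with the square of these, hence the factor of 20."""
    return 20 * math.log10(value / reference)

# Sound pressure level: the reference is 20 micropascals,
# roughly the threshold of human hearing.
print(db_field(2.0, 20e-6))   # a 2 Pa tone is about 100 dB SPL
```

And that is what brings us to the third and last dimension.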

Decibels Are A Versatile Unit

Given that the vast majority of people associate decibels with sound, it's easy to forget that the ratio can be taken between the values of almost any physical property. Those properties can be associated with audio, like pressure or voltage, or may have little or nothing to do with audio, like reflectivity. Decibels are found across all industries, not only audio; take, for example, video, optics or electronics. So, after laying out all this information, what's a decibel? A decibel is a logarithmically expressed ratio between a pair of physical values.


Screaming In Outer Space

No matter how much Star Wars tries to convince us that sound energy can travel through outer space, reality dictates otherwise. Sound energy requires a physical medium to travel through. When sound waves disturb such a medium, there are actual, measurable pressure alterations as the atoms move back and forth: the louder the sound, the more intense the alteration.

In Summary

A decibel is based on the logarithmic scale which, of course, works very well when displaying a large range of values. It is also a comparative unit that always uses the ratio between the measured value and the value used as a reference. Additionally, decibels can be used with any physical property aside from sound pressure. They also use reference values so the numbers being managed are more significant.
