A Practical Guide To Understanding The Audio Post-Production Process - Part 2


It's not rare for audio and sound professionals to get requests to see if they can improve, or even repair, less-than-ideal production sound. Sometimes a track that is full of environmental noise and distortion can be improved and repaired to the extent that the audio can be used in the post-production process.

That being said, that homemade footage shot next to the washing machine might well be salvageable with noise reduction software.

Key Milestones For The Audio Post Production Process

1. Sending Video and Audio to the Sound House

The key part of this stage, as an audio professional, is understanding how the information and the data need to travel from one point to another. Give your clients a clear spec for how you want to receive the project. Details traditionally include the video codec you prefer and where you like the timecode window burn to be placed.

Also, it is important to always request footsteps, lips, and subtitles. Audio normally gets to the sound house in the form of either an OMF or AAF export. The organization of the tracks and the audio handle length are also pivotal parts of the export, so make sure to include these in your specs as well. Depending on what kind of export a client provides, there are hurdles you'll need to get past, so it is always a good idea to discuss these issues with your customers before they export their tracks.
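
To make this concrete, a delivery spec sent to a client might look something like the sketch below. Every sound house has its own preferences, and all of the values here are illustrative examples rather than standards:

```
Video:  ProRes (Proxy) QuickTime at the project frame rate,
        timecode window burn placed top-center
Audio:  AAF export, 48 kHz / 24-bit, with 2-second handles
Tracks: one microphone per track, dialogue tracks grouped together,
        temp music and temp effects on clearly labeled tracks
Extras: the editor's temp mix included as a reference track
```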

Also, it is never a bad idea to suggest a test export before you receive the real thing. This will spare you endless nights ironing out problems and bugs before the spotting session.

2. Spotting Session

The spotting session is when the whole team comes together to discuss the project from start to finish. During this phase, editors normally focus on taking a myriad of notes whilst audio and sound professionals focus on the intricacies of sound design and client expectations, which circles back to the budget issue we discussed in the first part of this guide.

The spotting session is all about exchanging ideas and suggestions. In fact, it is normal to see both the dialog editor and the picture editor discussing mic alternatives and whether or not certain tracks can be used. Of course, audio and sound post-production is all about the details, which is why this phase is also a good opportunity to go over the specifics of the project and decide whether that car honking needs to stay or say goodbye.

Film producers and sound professionals rarely agree on how they feel about the vast majority of sounds, lines, and tracks, which is why the spotting session is so pivotal.

3. Editorial Work

Once the spotting session is over, audio and sound professionals take care of the project basically on their own; there is practically no room for anybody else in the making and designing of the audio. Editorial work normally only requires help from the film crew (traditionally the producer or director) when the dialog edit contains words in a foreign language or when very specific sounds need to be designed.

Even then, it is your responsibility as an audio professional to get past these hurdles: most audio pros will bring in an individual fluent in that specific language to make sure nothing is missing or altered during the final dialog edit. Additionally, whenever the project requires a more complex sound design, they'll bring in the crew and the director to discuss that area in order to prevent it from sounding unrealistic in the final mix.

4. Premixing

Once the editorial work is concluded, the vast majority of sound and audio professionals like to perform a premix. The premixing process allows you, as an audio pro, to do extra work that will enhance the project's final mix and make the audio sound spotless.


Matching the sound across all the edits, which may have been captured with a wide array of different mics, needs to be done in addition to dialing in the loudness of all audio tracks. It is also normal to use the premixing stage to perform audio restoration and noise reduction.
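
As a rough illustration of the loudness half of that job, here is a minimal Python sketch that measures each clip with an ITU-R BS.1770 meter and normalizes it to a common target. It assumes the third-party soundfile and pyloudnorm packages; the file names and the -24 LUFS target are placeholders, not a universal spec:

```python
import soundfile as sf
import pyloudnorm as pyln  # ITU-R BS.1770 loudness metering

TARGET_LUFS = -24.0  # illustrative dialogue target

for path in ["dialog_boom.wav", "dialog_lav.wav"]:  # placeholder file names
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                    # BS.1770 meter
    loudness = meter.integrated_loudness(data)  # measure the clip
    matched = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(path.replace(".wav", "_premix.wav"), matched, rate)
```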

5. The Final Mix

The final mix is when directors and film producers start to hear the film as it should sound. It is important to mention, however, that there are different approaches to the whole sound production process, just as no two people cook a steak the same way. While there may be similarities in what this job is about, the inventiveness and approach to creating the best sound for an audiovisual project vary from one audio professional to another.


8 Tips To Record High-Quality Video Voiceovers


Any video project needs to look professional. The vast majority of videos and films have the potential to impress audiences; what producers and directors need to do is match the moving images with some amazing audio to add the finishing touch. Any video needs both elements, good audio and good-quality moving images, to work properly. If either is lacking, the project is going to come across as amateurish.

Having said that, here are 8 tips for capturing and recording incredible video voiceovers:

It’s All About The Location

And by location we mean: always strive to find the right spot. If you're just getting started and can't afford a professional studio, you will need to find the best spot in your home to record. Read your lines aloud in each room and listen carefully to find out which room is most suitable. Listen out for any issues that can be easily fixed, such as reverb or dead sound.

The Popcorn And Seashell Test

Try to stay away from plosives and sibilance; these are fancy words for the popping and hissing noises you frequently hear in recordings of words that contain 'S' or 'P' sounds. It is, therefore, a good idea to invest in a pop filter: a shield that sits between the speaker and your mic. You can find a broad range of them priced between $10 and $25.

However, if you're still wondering whether you really need a pop filter, try the popcorn and seashell test: listen to a recording of yourself saying words that begin with the aforementioned letters and check whether you can hear any hissing or popping.

Get A Stand

Regardless of whether you're close to or far away from your script (physically, of course), it's likely that you will need to turn the page whilst recording your voiceover, damaging your recording with the noise of rustling paper.

Invest in a book stand to rest your script on and hold it still. Also, consider the breaks in your script and print it so that each section can be displayed without your having to touch the paper or turn the page.


Listen Carefully

Recording your lines 'deaf' and hoping to correct any issues during the editing stage is a bulletproof recipe for disaster. A good pair of headphones or monitors is just as pivotal as a high-quality microphone. You'll pick up every detail you might otherwise have missed, you'll be able to monitor the quality much more closely, and it's likely to be much easier to retake the recording than to edit out the imperfections at a later stage.

Don’t Force It

Although your script may read beautifully on the page, it can be a totally different thing once you read it aloud. Strive to keep it concise and easy to pronounce. Then make sure you practice reading it aloud a few times before giving it to the performers. That way you can spare yourself a far more troublesome audio post-production phase.

The Power Of Apples

It's been said that in order to get the best vocals from either you or your speaker, your mouth needs to be slightly wet. However, if you are constantly taking short sips of water, you will probably spend more time in the bathroom than in front of the microphone. A great way to counteract this is to keep a tart apple at hand: a bite will clear things right up, and the vocals will sound much clearer and neater.

Don't Quit Just Like That

Have you ever heard the song Don't Stop Believin' by Journey? Just because your voiceover performer is not delivering the performance you expected and has dissolved into a fit of giggles, or because the mic is not capturing the audio you envisioned, that doesn't mean the whole project is bound to fail. Simply take a break, analyze what has gone well, try to establish what methodology got you those good results, and apply it back to the tricky parts. Nobody said it would be easy.

And Don’t Forget To Save

The worst thing that could happen in this industry and those like it, once you've worked so hard on your project, creating the nuances, recording different lines, and so on, would be having to start everything again from scratch simply because you didn't save your progress. Always make sure to hit the save button as you go. Also, consider whether the size of your project calls for an external hard drive so you don't run out of space in the middle of your work.


A Practical Guide To Understanding The Audio Post-Production Process - Part 1


When talking about audio post-production, one cannot avoid one simple question: what's the typical process most audio professionals go through when working either with a sound house or by themselves? In fact, is there a predetermined process? Everyone seems to work a little differently; however, there are certainly common steps and common ground that you, as an audio enthusiast or professional, will encounter when working on an audiovisual project.

Finding Your Location

Today, any film's soundtrack can be edited and mixed in all sorts of places and facilities. In fact, would it be too much to say that a simple sound facility with a fistful of rooms and editors could handle a film's entire sound job? The short answer is: it depends.

There are many creative people working in this industry who started off using their own bedroom as their main editing and mix room. If the project you're currently working on doesn't require a surround sound mix and is simply going to be broadcast and heard online, then almost anyone with a basic setup and some experience will be able to take care of your needs.

Conversely, if your documentary is about the Second World War and bullets are going to pass over the audience's heads while missiles blow up battlefields, then you're going to need a much larger, more professional room.

The process of finding the right sound facility isn't much different from finding people to work with. In this industry, aside from the final pieces of work, which you can use to judge whether someone has skills, word of mouth seems to be the best source of candidates. If you're seeking to gather a team together, ask around. If you remember an audiovisual project where the sound was definitely special, find out who was in charge.

When looking for sound professionals or an audio post-production studio, it is key to find a studio or group of people well known for their creative capabilities. Some might even say that there isn’t too much thought behind the sounds that are required for a film: “If you see a car, make sure I hear the engine once in a while.”

Working with a professional studio and a creative team will take your project to the next level, perhaps questioning the need for the engine sound, as described above, knowing that not hearing it will create another layer within the storytelling. It’s the job of the audio professional and mixer to present you with ideas and different approaches when appropriate. It should be part of a seamless chain of events and discussions and never an argument or a struggle.

In the end, as a producer or executive, if you want an engine sound you should get your engine sound, but your audio professional might ask what kind of emotion you're looking for. Your first contact with a studio should lead to a conversation about the storytelling, the story itself, the style, and the sound needs of your project. The studio should also look at a fine cut of your film. That way they will be able to anticipate how much work will be required to edit all the sounds, create the sound effects, record foley, and so on.

This part of the process is mostly known for raising artistic and practical questions that will certainly help the studio, the team, and you envision a much clearer version of the final cut. For instance, do you want to hear the sound effects under silent footage? Your main audio professional should provide you with a sense of the quality of your production sound in order to determine where the budget should go.

Making The Most Out Of Your Budget

Budget, that famous and no less crucial word. As audio professionals, we always try to find out how much the filmmaking team has set aside for both sound prep and the mix. This is done in hopes of developing a plan that fits your needs as a filmmaker. If the budget isn't close to what a traditional job would cost, taking into account the project's needs and running time, then it's best to state that from the very beginning.


Money is, of course, an important factor to consider, which is why knowing beforehand how much the filmmaking team is able to invest is crucial for saving time. If, as a filmmaker, you'd rather not say what your budget is, a studio can normally quote a wide price range and explain the scope of each version of the quote. Once the budget issues have been taken care of, the next step is to get the project into the audio post pipeline.


The Importance Of Mastering


The whole idea behind the mastering stage of the audio and sound post-production process is to make audio sound the best it can across all platforms. Music, to cite just one example, has never been consumed across more platforms, formats, and devices than in the last decade.

In fact, whether you're recording and mixing at a million-dollar studio or working on soundtracks in your very own home studio, you will always need the final quality seal of approval that the mastering stage provides. That way, the resulting sound will be heard the way you, or your client, first envisioned it. A well-carried-out mastering job makes sound and audio consistent and balanced. Without this pivotal part of the audio post-production process, individual tracks might feel disjointed in relation to each other.

The Difference Between Mastering And Mixing

Although mixing and mastering share certain similarities, techniques, and tools, the two processes are often confused, as they are, in reality, different. Mixing traditionally refers to what the audio and sound post-production industry calls a "multi-track recording", whereas mastering is the final touch: the final polish audio professionals apply to the whole mix. Let's take a closer look:

Mixing

Mixing, as mentioned above, is all about getting all tracks and audio elements to work with each other. If we were talking about mixing a record, the mixing part would be getting individual instruments and voices to work as a song. It’s, essentially, making sure everything is in place.

Once the audio professional deems they have a good mix, it should then easily flow into the mastering process.

Mastering

Now that we have given the mixing process its proper context, think of mastering as the final touch. In fact, there’s no better analogy than to think about both mixing and mastering as a car: mixing would mean getting all the parts working together, and mastering would mean getting the best car wash ever. You certainly want your new car to look as shiny, slick and cool as possible.

Mastering takes a closer look at everything in the mix and makes it sound as it is supposed to sound.

To provide a bit of history on the mastering process: the first mastering engineers appeared in 1948, alongside the birth of the magnetic tape recorder. Before this, there was practically no master copy, as records used to be cut directly to 10-inch discs.

In 1957, stereo vinyl came out, and mastering engineers started to think of ways to make records sound a bit louder. At that time, loudness was an essential factor for better radio playback and, of course, much higher record sales. This marked the birth of the well-known loudness wars that still go on today.

Fast forward 25 years: in 1982 the compact disc brought a total revolution to the mastering process. Vinyl masters were replaced by digital ones, although many analog tools remained in use. That finally changed in 1989, when the first digital audio workstations (DAWs) and the first mastering software appeared, offering a high-end, mind-blowing alternative for the mastering process.

How Is Mastering Carried Out?

Mastering as a sub-phase of the audio and sound post-production process has its own complexity. Here are some of the most traditional techniques involved:

Audio Restoration

First, the mastering engineer or audio professional fixes any audible flaws in the original mix, like unwanted noises, clicks, pops, or hiccups. This also helps to correct small mistakes that become noticeable once the un-mastered mix is amplified.
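
Dedicated restoration suites do this with sophisticated spectral processing, but the core idea of click removal can be sketched in a few lines of Python: flag samples that jump far away from a local median estimate and patch them. This is a toy illustration with placeholder values, not how commercial tools work internally:

```python
import numpy as np
import soundfile as sf
from scipy.signal import medfilt

def declick(x, kernel=11, threshold=0.1):
    """Replace isolated spikes (suspected clicks or pops) in a mono signal."""
    smooth = medfilt(x, kernel_size=kernel)   # local median estimate
    clicks = np.abs(x - smooth) > threshold   # outliers = suspected clicks
    y = x.copy()
    y[clicks] = smooth[clicks]                # patch with the median value
    return y

audio, rate = sf.read("unmastered_mix.wav")   # placeholder mono file
sf.write("restored.wav", declick(audio), rate)
```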

Stereo Enhancement

This technique deals with the spatial balance (left to right) of the audio being mastered. When done right, stereo enhancement allows audio professionals to widen the mix, which ultimately lets it sound bigger and better. Stereo enhancement also helps to tighten the center image by keeping the low end focused.
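
In practice this is usually done with mid/side processing. The simplified Python sketch below scales the side (left minus right) signal to widen the image and high-passes it so the low end stays focused in the center; the width factor and corner frequency are arbitrary examples:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def widen(stereo, rate, width=1.2, side_hp_hz=120.0):
    """stereo: (n, 2) float array; width > 1 widens, width < 1 narrows."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = 0.5 * (left + right)                 # center image
    side = 0.5 * (left - right)                # spatial information
    sos = butter(2, side_hp_hz, btype="highpass", fs=rate, output="sos")
    side = width * sosfilt(sos, side)          # widen, but keep lows mono
    out = np.stack([mid + side, mid - side], axis=1)
    return np.clip(out, -1.0, 1.0)             # guard against overs
```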


Equalization (EQ)

Equalization, or EQing, takes care of spectral imbalances and brings out the elements that are meant to stand out once the mix is amplified. An ideal master is, of course, well balanced and proportional: no specific frequency range is left sticking out, and a well-balanced piece of audio should sound good on any platform or system.
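
One way to check for a range that "sticks out" before reaching for the EQ is simply to measure the energy per octave band. A rough Python sketch, with illustrative band edges:

```python
import numpy as np

def octave_band_levels(x, rate):
    """Print the energy of a mono signal in roughly octave-wide bands."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    edges = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        level_db = 10 * np.log10(band.sum() + 1e-12)
        print(f"{lo:>5}-{hi:<5} Hz: {level_db:7.1f} dB")
```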

Compression

Compression allows audio professionals and mastering engineers to correct and improve the dynamic range of the mix, keeping louder signals in check while making quieter parts stand out a little more. This brings the mix to the required level of uniformity.
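
A bare-bones downward compressor can be sketched as follows: smooth the signal level with attack and release ballistics, then apply gain reduction above a threshold at a fixed ratio. All parameter values are illustrative; real mastering compressors add look-ahead, soft knees, and program-dependent behavior:

```python
import numpy as np

def compress(x, rate, threshold_db=-18.0, ratio=3.0,
             attack_ms=10.0, release_ms=100.0):
    """Very simple feed-forward compressor for a mono float signal."""
    a_att = np.exp(-1.0 / (rate * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        a = a_att if level > env else a_rel
        env = a * env + (1.0 - a) * level            # smoothed level
        level_db = 20 * np.log10(env + 1e-9)
        over_db = max(0.0, level_db - threshold_db)  # dB above threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)     # static gain curve
        out[i] = s * 10 ** (gain_db / 20)
    return out
```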

Loudness

The last stage of the mastering process is normally a special type of compressor called a limiter. It allows audio professionals to set an appropriate overall loudness and create a peak ceiling, avoiding any clipping that would otherwise lead to distortion.
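
Conceptually, this last step looks something like the sketch below: normalize the mix to a loudness target, then enforce a peak ceiling. It assumes the pyloudnorm and soundfile Python packages; the -14 LUFS and -1 dBFS numbers are illustrative streaming-style values, and the hard clip is a crude stand-in for a proper look-ahead limiter:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")              # placeholder file name
meter = pyln.Meter(rate)
gained = pyln.normalize.loudness(
    data, meter.integrated_loudness(data), -14.0)  # illustrative target

ceiling = 10 ** (-1.0 / 20)                        # -1 dBFS peak ceiling
limited = np.clip(gained, -ceiling, ceiling)       # crude brick wall
sf.write("mastered.wav", limited, rate)
```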


5 Pro Tools Mixing Tips Ideal For Audio Post Production


Pro Tools is widely regarded as one of the most coveted tools for audio post-production. Regardless of whether you work for a studio or are an audio enthusiast, you've certainly come across this software. But do you know how to use it to its fullest extent? Here are 5 go-to tips for a more professional mix.

Mixing audio for film, television, and ads is a rather misunderstood process, even for audio professionals with a high level of experience and understanding. There are many ways of approaching a mix to optimize the workflow and save time, get better results, or even leave your personal mark on an audiovisual project.

1. Use track groups to attain a reverb mix for all dialogue and effects tracks

Mixing reverbs is often highly time-consuming if done one track at a time, especially when all a scene needs is a minor tweak or a bit of reverb across the dialogue and effects. Grouping those tracks and feeding them to a shared reverb lets you dial in the whole scene at once; more specific alterations can then be carried out track by track, depending on what the scene and the project need.

2. Use pink noise to fill out the low end in all background sound effects

Many audio professionals agree that lots of background sound effects lack a cinematic low end, which is essential to give a scene the life it needs. By adding pink noise you can use background effects that have the desired high- and mid-frequency content but are not as strong in the low frequencies.

Open an AudioSuite signal generator in Pro Tools and select the pink noise waveform. Then make a selection of the same length as the scene you're working on, on a stereo audio track. Click 'render' to create the pink noise audio file, and apply clip effects with a low-pass filter and a heavy boost of low-end equalization.
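
The same idea can be reproduced outside Pro Tools. The Python sketch below shapes white noise into an approximately 1/f (pink) spectrum, then low-passes it so only the missing low end gets layered under the background effect; the 120 Hz corner and levels are arbitrary examples:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def pink_noise(n, rate):
    """Approximate pink noise by giving white noise a 1/f power spectrum."""
    white = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / rate)
    freqs[0] = freqs[1]                       # avoid division by zero at DC
    pink = np.fft.irfft(white / np.sqrt(freqs), n)
    return pink / np.max(np.abs(pink))        # normalize to full scale

rate = 48000
noise = pink_noise(10 * rate, rate)           # 10 s, sized to the scene
sos = butter(4, 120.0, btype="lowpass", fs=rate, output="sos")
low_end = 0.1 * sosfilt(sos, noise)           # keep only the lows, well down
```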

3. Create distance between elements in the mix

The vast majority of dialogue and sound effects sometimes feel too close to each other in the mix. Adding reverb can help a bit; however, the elements often still feel just as close while the space simply sounds bigger. This method works better: add a one-band EQ as an insert on the track you're working with (or use clip effects), set a high shelf, and drop it down with automation every time a particular track requires more distance. Then use reverb freely.
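
The effect of that automated high-shelf drop can be approximated with a crude complementary split, as in the Python sketch below: low-pass the track, treat the remainder as the shelf, and scale it down. The 4 kHz corner and the gain are placeholders; in a real session you would automate the shelf gain line by line:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_shelf_cut(x, rate, corner_hz=4000.0, shelf_gain=0.4):
    """Darken a mono track so it reads as farther away (rough approximation)."""
    sos = butter(2, corner_hz, btype="lowpass", fs=rate, output="sos")
    lows = sosfilt(sos, x)
    highs = x - lows                  # crude complement above the corner
    return lows + shelf_gain * highs  # attenuated top end reads as distance
```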

4. Adopt a multi-step process to lower noise in dialogue tracks without affecting the original recording

Noise reduction will always be a pivotal part of every audio post-production process. In fact, it remains a contentious topic across the industry. Many audio and sound professionals don't really know the extent to which it affects the dialogue portion of the signal, often because it has been applied heavily and, perhaps, misunderstood.

Thankfully, by adopting a multi-step process you can end up with well-restored dialogue tracks and a much higher signal-to-noise ratio, without the unpleasant artifacts that come from leaning on a single heavy-handed tool.

Start by inserting a high-pass filter with a soft slope in order to reduce low-frequency noise. Then place a gate/expander on the dialogue track, with a rather low ratio and a threshold slightly below the dialogue level; this should drop the noise down a little between dialogue lines. Finally, apply very subtle live noise reduction and layer a mono room tone into the scene.
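
The first two steps of that chain can be illustrated in Python as follows: a gentle high-pass, then a downward expander with a low ratio and a threshold just under the dialogue level, so the floor drops a little between lines. All parameter values are placeholders:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hp_then_expand(x, rate, hp_hz=80.0, threshold_db=-45.0,
                   ratio=1.5, release_ms=200.0):
    """Soft-slope high-pass followed by a gentle downward expander."""
    sos = butter(2, hp_hz, btype="highpass", fs=rate, output="sos")
    x = sosfilt(sos, x)                            # reduce low-frequency noise
    a = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), a * env)                 # fast-attack envelope
        level_db = 20 * np.log10(env + 1e-9)
        under_db = min(0.0, level_db - threshold_db)
        gain_db = under_db * (ratio - 1.0)         # extra cut below threshold
        out[i] = s * 10 ** (gain_db / 20)
    return out
```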


5. Blend foley and ADR into the mix

As mentioned in other articles, foley and ADR are known for being difficult to blend well into a scene. A better recording space, similar microphones, and a careful ADR and foley performance all help achieve much better quality.

Thankfully, there are further steps you, as an audio professional or enthusiast, can take to improve the transitions between looped lines and original lines, or between foley and existing dialogue tracks.

If the scene was shot in a reverberant space, it is very possible that you will need to apply a mono impulse response reverb to every ADR and foley track. Try to find impulses that match your locations as closely as possible.
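
Applying a mono impulse response is, at its core, just a convolution. A minimal Python sketch, assuming placeholder file names and an impulse response already at the session sample rate:

```python
import soundfile as sf
from scipy.signal import fftconvolve

adr, rate = sf.read("adr_line.wav")              # placeholder mono files
ir, ir_rate = sf.read("location_ir_mono.wav")
assert rate == ir_rate, "resample the IR to the session rate first"

wet = fftconvolve(adr, ir)[: len(adr)]           # the room applied to the line
mixed = 0.8 * adr + 0.2 * wet                    # keep the dry line up front
sf.write("adr_line_placed.wav", mixed, rate)
```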

If the ADR and foley still feel a bit detached in the overall mix, experiment with the Pro Tools Lo-Fi plugin to soften them a bit more. The idea is to achieve natural movement and flow across all tracks. Make sure you take care of any ADR or foley peaks before softening things further with the plugins.


Creating Signature Sounds


Creating and designing signature sounds is a skill that is pivotal for establishing yourself as an audio professional. But how do high-quality professional sound designers actually come up with these kinds of sounds? It all starts with the nature of the project they have to work on.

The Project

Many projects out there require a sheer array of signature sounds. From animated series to video games, signature sounds are as essential as any other sound element that makes it into the final cut. Audio professionals are normally brought into the process at a very early stage, especially on animated series, often during the first animatic. For those not familiar with the term, an animatic is a crucial element of the animation industry: basically a video of storyboard panels timed to work in sync with the dialogue.

Adding sound elements and sound design to an animatic can serve a number of purposes. It can give the animatic the life it needs for the animation studio to better understand how to animate crucial moments. It can provide executives with a much deeper and better understanding of the action when going through the animatic for final approval. And, last but not least, it can set signature elements early enough that the sound can shed light on the process the animators must follow.

The Process

First: Brainstorm About Aesthetics

Depending on the nature of the project, as an audio professional you first need to identify the key elements present in the series or audiovisual project. The idea, of course, is to come up with a way to incorporate sound elements so that they feel nuanced and special.

When it comes to designing sounds for a project, it’s also important to consider the audience, as it would also help to determine what kind of soundscape the project needs. This step is crucial, for the audience needs to be familiar with the sounds they will be hearing, otherwise, storytelling might be affected. If the project is geared towards a much younger audience, for example, then creating sounds that are familiar to kids or preschoolers would be more suitable.

When it comes to crafting signature sounds, the set locations normally look very different from traditional sets; they tend to look really high-tech, which keeps the process entertaining. Since signature sounds are often used in audiovisual projects that rely on such technology (animated series, video games, etc.), incorporating sound often depends on creating new sci-fi sounds for all the elements present in the project. This, of course, boosts creativity, as there are many challenges and hurdles to overcome.

Second: Separate Stand-Out Signature Sounds From The Rest

Although many would say that every sound appearing in projects like these is a signature sound, there are several nuances to that assertion. Some might also think it a waste of time to come up with a special sound for ordinary actions such as a door opening or a hand grab. However, creating a whole new soundscape from scratch for all reusable elements ensures not only stand-out sound design but also a solid signature aesthetic for the entire project.


Many audio professionals are fond of this idea, and they normally decide that the sounds for all the things in the main set should be signature: hand grabs, opening doors, furniture, mechanical elements, etc.

Third: Come Up With A Custom Recording List

When coming up with a custom recording list, it is always a good idea to brainstorm elements to record that might further support the overall aesthetic. It is advisable to map out what to record for each signature element in the project and, for that, to think about the general aesthetic that needs to be achieved. What items help achieve the soundscape the project needs?

Focus on all the elements the project demands and try to create a checklist of what will be needed. And by that we mean: go outside (literally). Some audio professionals visit stores looking for interesting items that will help them create the sounds they need. Test every item, paying special attention to how it sounds. In animation, for example, the best recordings are normally made with items that are different from what the audience sees on the screen.

Audio professionals, whether they work on crafting signature sounds or not, always focus on one thing: achieving a certain texture and a certain sound. They don't simply want to record the exact same element that will be shown on screen. Foley plays a vital role in this part of the process. When choosing items to record, it's highly important to shut off the visual part of the process, as the brain always tells you to go for the obvious.


How Warner Brothers ended up establishing the sound for the film industry


The sound industry was established after a most curious chain of events. Back in 1919, three German inventors, Josef Engl, Joseph Massole, and Hans Vogt, patented the Tri-Ergon process, a process capable of transforming sound waves into electricity. It was initially used to imprint those waves onto film strips; when played back, a light would shine through the audio strip, converting the light back into electricity and then into sound.

The real issue in all this, however, was the amplification of the sound, which would be tackled by an American inventor who played a pivotal role in the development of radio broadcasting: Dr. Lee de Forest. In 1906, de Forest invented and subsequently patented a device called the Audion tube, an electronic device capable of taking a small signal and amplifying it. The Audion tube was a key piece of technology for radio broadcasting and long-distance telephony.

In 1919, de Forest started to pay special attention to motion pictures. He realized his Audion tube could help films attain a much better degree of amplification. Three years later, in 1922, de Forest took a gamble and designed his own system. He then opened the De Forest Phonofilm Company to produce a series of short sound films in New York City. The impact of his technology was well received, and by the middle of 1924, 34 theaters on the American East Coast had been wired for his sound system.

Even so, the fact that a considerable number of East Coast theatres had acquired de Forest's system didn't pique Hollywood's interest. He had indeed offered the technology to industry leaders like Carl Laemmle of Universal Pictures and Adolph Zukor of Paramount Pictures; however, they initially saw no reason to complicate a solid and profitable film business with a feature as frivolous as sound. But one studio took a gamble: Warner Brothers.

Vitaphone

Vitaphone was a sound-on-disc technology created and patented by Western Electric and Bell Telephone Labs that used a series of 33⅓ rpm discs. When company officials attempted to get Hollywood's attention in 1925, they faced the same disinterest de Forest had, except from one relatively minor studio: Warner Brothers Pictures.


Courtesy of Richie Diesterheft at Flickr.com

In April of 1926, Warner Brothers established the Vitaphone Corporation with the financial aid of Goldman Sachs, leasing the disc technology from Western Electric for the sum of US $800,000. In the beginning, they intended to sub-lease it to other studios in hopes of expanding the business.

Warner Brothers never imagined this technology as a tool to produce and create talking pictures. Instead, they saw it as a way to synchronize musical scores for their own films. To showcase their new acquisition and the feature they had managed to add to their films, Warner Brothers staged a massive US $3,000,000 premiere at the Warners' Theatre in New York City on August 6, 1926.

The feature film of this premiere was 'Don Juan', accompanied by a musical score performed by the New York Philharmonic. The whole project was an outstanding success; some critics even went on to praise it as the eighth wonder of the world, which ultimately led the studio to screen the film in several major American cities.

Despite the tremendous success, however, industry moguls weren't too sure about spending money on developing sound for the film industry. The entire economic structure of the business would have to be altered for it to adopt sound: new sound studios would have to be built, expensive new recording equipment installed, theatres wired for sound, and a standard sound system process defined.

Additionally, foreign sales would suffer a drastic drop. At that time, silent films were easily sold overseas; dialogue, however, was a different story, and dubbing into foreign languages was still a thing of the near future. Adopting sound would also affect the musicians employed by movie theatres, who would have to be laid off. For all these reasons, Hollywood basically hoped that sound would be a simple passing novelty. But five major studios decided to take action.

MGM, Paramount, Universal, First National, and the Producers Distributing Corporation signed an agreement called the Big Five Agreement: they would jointly adopt and develop a single sound system if any of the several attempts taking place alongside the Vitaphone should come to fruition. Meanwhile, Warner Brothers didn't halt their Vitaphone investments.


Courtesy of Kathy Kimpel at Flickr.com

They announced that all of their 1927 pictures would be recorded and produced with a synchronized musical score, and in April 1927 they built the first sound studio in the world. In May, production began on a film that would cement sound's place in cinema: The Jazz Singer.

Originally, 'The Jazz Singer' was supposed to be a silent film with a synchronized Vitaphone musical score, but the star, Al Jolson, improvised some lines halfway into the movie, lines that were recorded and could be heard by the audience. Warner Brothers liked them and left them in. The impact of having spoken lines was enormous; it marked the birth of what we know today as the sound film industry.

An Introduction to Automated Dialogue Replacement (ADR)


When talking about audio and sound post-production, we cannot simply forget about automated dialogue replacement, or ADR: the process of re-recording dialogue in a studio to replace the lines that were recorded on set during the production of a film or audiovisual project.

This can be done for a number of reasons. First, there may have been a technical problem with the location audio; for example, an airplane flew overhead during the best take, or an actor wasn't quite on axis with the mic during another.

In other cases, ADR is used to replace an actor's vocal performance, especially in musicals where a professional singer replaces an actor's voice, as when Marni Nixon supplied the singing voice for Marilyn Monroe, Audrey Hepburn, Deborah Kerr, and Natalie Wood.

Additionally, you may have to ADR a scene to replace some of the words used in a project in order to produce a more television-friendly cut. An example of this is Snakes on a Plane (2006), starring Samuel L. Jackson, where some of his lines were changed altogether, removing all of his swearwords so the film could fit TV standards.

And sometimes ADR is used for creative purposes. Marlon Brando once said that he mumbled his way through his lines in The Godfather (1972) in order to force producers to ADR his scenes, although the process was known as looping at the time. During those sessions, he was able to truly craft his performance around the context of each scene and situation.

In the world of low-budget and independent filmmaking, ADR is traditionally seen as some sort of boogeyman —something to be avoided at all costs. But it shouldn’t necessarily be. In fact, post-production sound, if carried out with purpose, can actually be a crucial tool for the low-budget filmmaker.

ADR in the Historical Context

At the beginning of the sound era, around the 1930s, there was no technology for recording sound separately from the moving images. There was no way of dubbing the dialogue, the sound effects, or even the music. When the studios started the transition to sound, they brought in hundreds of radio broadcast and telephone engineers, many of whom had never shot a film in their lives.

As sound became the 'new thing', these engineers got more and more involved in the shooting process; however, their participation set back some of the stylistic advances silent films had achieved by the late 1920s.

Selling sound then became a whole new line of business. Given the impossibility of keeping soundtracks separate from the moving images, Paramount took over the Joinville Studios in France for the specific purpose of taking the same script and remaking it up to 12 or 13 times in different languages. They would keep the same sets, props, and costumes and rotate the actors for each language version; however, it didn't work out so well, and Paramount gave up on the idea of multi-language films.


By 1932, having given up on remaking scripts with different casts to produce versions of a film in different languages, Paramount kept the technology of dubbing just as post-synchronization was arriving. By 1935, the position of supervising dubbing engineer ranked about the same as the film editor, and by the late '30s most of the audio in a studio film was actually created in post-production.

This freed directors from the constraints of capturing audio on set, allowing them to focus on the intricate art form we enjoy today. When dialogue replacement was first introduced, each line had to be re-recorded using a loop of film that would play over and over again, hence the name 'looping'. Modern systems use computers to loop a specific section of the film so that actors can deliver their best performance.

ADR in Practice (For the Independent Filmmaker)

Talk to most independent filmmakers and they will agree that ADR is, essentially, something evil. And yes, if you're on a tight budget, one more unexpected expense, especially one caused by sloppy locations, is certainly a bad thing; however, there are several tricks that allow filmmakers to harness, to some extent, the benefits of ADR.

Using the same mics and mic placement, and recreating the environmental conditions of a scene in a digital audio workstation, enables audio professionals to use partial ADR. This ultimately lets them achieve a much better take of a particular scene, one that wasn't as good as it should have been because of factors the production couldn't control during filming.


Oscar for Best Sound Mixing and Editing Explained


In this article, we're going to look at perhaps the two most confusing Oscar categories: Sound Mixing and Sound Editing. If you're not familiar with the sound and audio post-production landscape, these categories might seem like exactly the same thing; however, there are differences, which is why we often see a movie nominated for both.

The big thing to understand about sound editing versus sound mixing is that sound editing refers to the recording and creation of all audio except for music. And what's audio without music? Dialogue between characters, the sound picked up in whatever location a scene was shot, and sound recorded in the studio: ADR, extra lines of dialogue, all those crazy sounds created to mimic animals, vehicles, and environmental noises, the foley, and so on.

Sound mixing, on the other hand, is balancing all the sound in the film. Imagine taking all of the music, all of the audio, all of the dialogue lines, all the sound effects, and so on, and combining them so that they are perceived as balanced and beautiful tracks.

Some people refer to this last category as an 'audio tiramisu', as there are layers upon layers of sound that, in the end, compose a beautifully orchestrated whole: layers of what's happening in a film's particular scene, in the real realm, and layers of what's happening around it, as if in a spiritual realm.

Recall The Revenant, the American semi-biographical epic western directed by Alejandro G. Iñárritu, which was nominated in several Academy Award categories, including both sound editing and sound mixing; there, the 'audio tiramisu' is especially noticeable. In The Revenant, the sound was so perfectly crafted that it was as if two different stories were taking place side by side, and you could only distinguish between them by listening.

When it comes to sound editing, take another movie as an example: Mad Max: Fury Road, the 2015 post-apocalyptic action film co-written, produced, and directed by George Miller. The movie contains amazing recordings of cars, fire, and explosions alongside really subtle dialogue, which creates enormous contrast between the action and what the characters are actually saying. Max, played by Tom Hardy, was really quiet, whereas Imperator Furiosa, played by Charlize Theron, was screaming at the top of her lungs, and all of that happened in the middle of the most frenetic action possible. All the audio was used and mixed at the same time.

Using and mixing all that audio at the same time was, in reality, a huge achievement. Rumor has it they used up to 2,000 different channels, meaning 2,000 different audio pieces at one time, which is perfectly recognizable in the opening car chase sequence; you can hear just how much sound is in play. In the end, the movie managed to mix the dialogue, the quiet dialogue, the effects, the action, the environmental sounds, and more, and to make it all work together.

The Process Deconstructed

The relationship between sound editing, sound mixing, and storytelling, however, is perhaps the cornerstone of the whole audio post-production process. How sound design and sound mixing can be used to help the storytelling, specifically in film, is the main question audio technicians strive to answer.


First, they approach both practices thinking about how they can make the tracks sound better, and then about how they can add to the story: making the audio tell the story even when you don't specifically see what's going on. In terms of sound design, the whole idea behind this creative process is identifying the purpose of the scene and whether there are specific things that don't appear in the moving images but are still 'there' and need to be told.

After analyzing the scenes in terms of what can be done to improve the general storytelling, audio technicians start to balance the dialogue track by track, which is, of course, a process that takes several hours. Is it necessary to add room tone? Is it necessary to remove it? Those types of questions normally arise during this part of the process. Afterward, the EQ work starts.

EQ is normally the part of the process where audio technicians do a little cleanup, adjusting the frequencies of the sounds the audience will hear so that they come across clearer and better. This matters for the storytelling because, with an equalizer, audio technicians can add texture to the voices and sounds people hear, which is, of course, what the whole storytelling effort is about.



Oscar For Best Sound Editing: ‘Bohemian Rhapsody’ - How Freddie Mercury’s Voice Was Achieved


'Bohemian Rhapsody' was crowned with the Oscar for Best Sound Editing a week ago at the latest installment of the Academy Awards. Regardless of whether that was real life or just fantasy, the truth is the film took home several statuettes, including Best Sound Mixing as well.

As for the film itself, actor Rami Malek delivered a truly compelling performance as Queen frontman Freddie Mercury, especially when the character was singing, and that's because the singing, for the most part, was actually 'performed' by the real Freddie Mercury. Additional singing, especially for the non-live-concert scenes, was performed by singer Marc Martel.

Those two voices together were responsible for Malek's convincing portrayal of Mercury. Putting them together and then making them come out of Malek's mouth required more than a well-carried-out editing process; according to editor Nina Hartstone, the singing parts alone demanded open-heart-surgery-style editing.

In order for the production to recreate the concerts in the film, supervising sound and music editor John Warhurst had to do the unimaginable to capture Queen crowds stomping and clapping in unison to the iconic 'We Will Rock You' and singing along with the band's famous Live Aid set, which is essentially the cornerstone of the film's storytelling.

Hartstone and Warhurst recently discussed in an interview how they were able to craft the incredible and massive Live Aid set, and the lengths they had to go to in order to recreate the recording sessions at Rockfield Farm. They also talked about capturing custom sounds, like the ones we hear during live presentations, concert crowds, and extra material.

When asked whether Malek participated in the singing parts, Warhurst stated clearly that the vast majority of the singing is, of course, Freddie Mercury, as it seemed right to use his own voice to keep his spirit in the film. He also said that, as production was first putting together the script, every change demanded a change in the vocals, which forced the sound editing crew to come up with ideas for how to edit them. Whenever a version of a Freddie Mercury performance existed in multi-track format (with the vocals on separate tracks), Warhurst and Hartstone would use it.

During filming, Hartstone explained, in order to make it look like Malek was actually performing the songs the way Freddie Mercury did, the sound editing crew had to tell the actor to sing the material on set with loads of energy. Malek had to give everything he had vocally, which would have been fine had the shoot taken the same amount of time the band's Live Aid set did (20 minutes); however, that particular sequence took production and the sound editing crew a staggering two weeks to get right. Malek had to sing at the top of his capabilities for almost fifteen days, take after take.

From the Live Aid takes, the sound editing crew kept a lot of Malek's breaths and movement sounds and combined them with Mercury's vocals, work that, in the end, earned Nina Hartstone, who was responsible for it, an Oscar nomination and the prize itself. Hartstone has said several times that she wanted to achieve the highest degree of realism: beyond making it look like Malek was actually singing, she used many of the actor's breaths from his on-set performances and from additional material the duo recorded with him between takes.


Breaths, efforts, lip sounds, and other tiny noises were combined with Martel's and Mercury's vocals and tied into the final picture, finally making it look like it was actually Malek singing. Such efforts and display of resources allowed the movie to stand out amongst the other films in the category.

Throughout the film, there are plenty of scenes, aside from Live Aid, that also required the best of Malek, Warhurst, and Hartstone. For those scenes where there simply isn't a recording of Freddie Mercury, like when he's singing Happy Birthday or Love Of My Life, the sound editing crew had to surgically mix three different voices: Mercury's, Martel's, and Malek's.

The duo described this mixing of voices as open-heart-style editing. Both Hartstone and Warhurst had to go deep into the waveforms to get the transitions to work, paying special attention to detail. As for the tools used, iZotope RX is said to have played a vital role in getting the EQs to match; however, even before bringing software and other tools into it, the sound editing crew had to get the edit itself to feel right.


The Sound of An Oscar Nominee: A Star Is Born


Have you ever wondered what it takes to craft a compelling sound? What techniques and technologies have sound professionals used to hit the spotlight and be recognized by the industry? Now that the Oscars are around the corner, a lot of conversations start to arise, especially about the nominees.

In this installment, we're going to go through the sound of A Star Is Born, as the movie has been nominated for Best Sound Mixing. Steve Morrow, who offered some behind-the-scenes insights into recording Lady Gaga and Bradley Cooper, was responsible for this part of the audio post-production process alongside Tom Ozanich, Dean Zupancic, and Jason Ruder.

In a recent interview, sound mixer Steve Morrow said that both Gaga and Cooper wanted the film to have a particular style of sound: they wanted it to sound as if it were a live concert, which makes sense given Morrow's experience shooting at live venues like the Glastonbury Festival. The request, however, ended up posing a real challenge: "In Glastonbury, we all went in there believing we had almost eight minutes to shoot, but we later found out the festival was actually running late, so they only gave us like three minutes," Morrow said.

The sound mixing crew later explained that the idea had been to film three songs, but given those circumstances, they decided to play 30 seconds of each. As for the sound mixing process, Morrow also mentioned that the idea from the very beginning was to capture all sounds live, all the performances, all the singing, and so on, which ended up turning into a Lady Gaga mini-show, as the music wasn't amplified in the recording room.

Such conditions led Morrow to note that his role on A Star Is Born differed a bit from a more typical production. On a normal set, production records the lines of dialogue while filming, along with whatever environmental sounds or effects happen at the same time. On A Star Is Born, Morrow and the rest of the sound mixing crew had to do all of that whilst also recording the band and the live singing, making sure they had captured all the tracks.

After that, the team would hand those tracks to the editorial and the post-production crew. Sound people would then take all that information, mix it down accordingly, and that’s practically what you hear in the film. Nothing else.

As for the trickiest part of the film, filming the live concerts, Morrow took a rather unusual approach to getting those tracks. The sound crew had to film twice at real concerts, at Stagecoach and Glastonbury, taking advantage of the time between acts: while Willie Nelson was awaiting his curtain call, Morrow and the crew made the most of the eight minutes they initially had to get the tracks.


Image from http://www.astarisbornmovie.net/#/Gallery/

What they would do, according to the mixing crew, and what was ultimately different from all the other recordings they carried out in controlled spaces, was approach the monitor engineer with some equipment and take a feed from the monitor desk through the mic Bradley Cooper was going to use.

Most of the time, they would play back the band through the wedge, the small speaker a performer stands in front of at live shows. Morrow and the rest of the mixing crew would put those playback tracks through so that Bradley Cooper could hear them but the crowd, standing far enough away from the speakers, couldn't. So, in a nutshell, the live concert scenes were recorded with Bradley Cooper singing live whilst hearing a playback of the instruments through the wedges.

An additional challenge was making sure not to amplify any of those tracks and performances, as Warner Bros. didn't want the music to be heard by the crowd, so as not to risk losing its impact. Such demands forced the mixing crew to mute practically everything as much as they could, which was also different from the way film producers shoot in controlled locations.

Having a big crowd in front makes the process far more challenging: the whole crew, film, picture, sound, and so on, has only a few minutes to shoot, which increases the chances of not getting a lean, clean sound. In controlled scenarios, a sound crew normally records up to ten different tracks, whereas in front of a live audience they need not only to keep the playback from being heard but also to record the audience itself for the desired effect.

Dialogue Editing and ADR With Gwen Whittle


If you recall the movies Tron: Legacy and Avatar, both, aside from having received Oscar nominations, have one name in common: Gwen Whittle. Gwen is perhaps one of the top supervising sound editors working today, which is why a lot can be learned from her work.

Gwen also did the sound supervision for both Tomorrowland (starring George Clooney and Hugh Laurie) and Jurassic World (starring Chris Pratt), and although she's known for overseeing the whole sound editing process, she has said in several interviews that she is especially fond of dialogue editing and ADR sessions, topics covered in previous Enhanced Media articles on this blog.

Dialogue editing, as George Lucas remarked back in 1999 just before Star Wars: Episode I hit theaters, is a crucial part of the whole sound editing landscape, and, apparently, even within the industry nobody pays enough attention to it. In fact, dialogue editing is arguably the most important part of the process.

So, what’s dialogue editing?

Dialogue editing, if it's done really well, is, according to Gwen Whittle, unnoticeable: it's completely invisible, it should not take you out of the movie, and you should pay no attention to it. Imagine going through all the sound from the set, take by take, to look much more closely at the dialogue captured for a specific scene.

Of course, not all dialogue recorded on set sounds the same; maybe the take was great, the acting was great, the light was great, but suddenly a truck pulled up or an airplane happened to fly over the crew. It's practically impossible to recreate that take, as many factors are involved: changes in the air, foreign sounds, and so on. And no matter how hard you try to remove all those background noises, sometimes you need to resort to the ADR stage. In an ADR session, it all comes down to recreating the conditions that apply to that particular scene.

Cutting dialogue often poses several challenges to sound editors, and it depends heavily on the picture department. A dialogue editor receives all the production sound from the picture department —everything that was originally shot on set— with each mic on its own track. It’s the responsibility of the picture department to isolate each mic on its own track so dialogue editors can do their magic.

On set, the production sound mixer usually records anywhere from one to eight microphones, sometimes more, but the idea is for each actor to have their own mic, plus at least one or two booms. All of this is passed on to the dialogue editing crew with each track isolated, in sync with the moving images just as the movie is supposed to play.

Once the dialogue editing crew has received the tracks, they listen to them and assess which parts can be used and which parts need to be recreated, organizing which tracks will make it to the next stage. Sometimes, since dialogue can be recorded using two different microphones, such as the boom and the talent’s personal mic, sound editors can play with both tracks to make the most of them whilst spotting which parts require an additional ADR session.

If there’s a noticeable sound, like a beep, behind someone’s voice, a dialogue editor can usually get rid of it if they need to; however, that’s not always the case. ADR sessions are a familiar part of the sound editing process. In films with a smaller budget, the dialogue process gets a bit trickier, since the tracks normally aren’t passed on isolated to the dialogue editing crew, so they need to tackle whatever hurdles arrive in their tracks. Low-budget films normally include more dialogue, as they don’t have the resources to afford fancy sets or fancy visual and sound effects.

Do directors hate ADR?

Well, according to Gwen Whittle, not many directors are fond of ADR. David Fincher, for example, is. ADR is a tool. A powerful tool. And if you’re not afraid to use it, you can really elevate your film because it takes away the things that are distracting you from what’s going on.


Actors and actresses like Meryl Streep love ADR sessions because it is another chance to perform what they just did on set. They see ADR as the opportunity to go in there and put a different color on it, and it’s another way to approach what the picture crew got in a couple of takes on set. Many things can be fixed, and several lines can even be altered. You can add a different twist to something. In fact, even by adding a breath, you can change the nature of a performance. It’s the opportunity for both the talent and directors to hear what they really want to hear.


4 Services That Make Audio Post-Production Collaboration Seamless

Collaboration is not foreign when it comes to audio post-production. In fact, it is what gives studios constructive feedback, ideas, solutions and different perspectives to work on altogether, helping all parties involved produce better pieces of work.

Audio, sound, and video collaboration happens all the time. When it comes to audio and sound, for instance, it has never been so feasible to write a song with another individual on the other side of the world, or to hire a full orchestra or session musicians to record music for a score or original soundtrack.

In this post, we address some services and other software that make the whole collaboration workflow much easier, but more importantly, productive.

The Audio Hunt

The Audio Hunt is best known for being an online collaboration platform where hundreds of studio owners and audio professionals make their gear available for other colleagues to run their tracks through. How does it work? Imagine you want to run your mix through a specific piece of equipment. You will be required to, first, open an account, find the piece of hardware you want to use, start a chat with the vendor, book the job depending on the fare (fares and fees vary depending on the hardware or software involved), and, finally, wait for the service to be completed so you can download the files.

Pro Tools Cloud Collaboration

Not long ago, Avid introduced Cloud Collaboration for Pro Tools in version 12.5. This allows Pro Tools users to share parts of projects, or the whole project if necessary, with other Pro Tools users around the globe without even having to close the application. It’s a rather fancy system that integrates seamlessly across different Pro Tools versions.


Pro Tools Cloud Collaboration gets rid of the traditional audio post-production collaboration process, which involved exporting files out of the application and then sharing them on different cloud services for other collaborators and editors to receive. Now, versions 12.5 and above allow editors to collaborate with other Pro Tools users in a much quicker and simpler way.

Source Elements Source-Connect

In case you’re wondering what Source-Connect is: Source-Connect is what replaced ISDN. Conceived as an industry-standard replacement, it comes with a solid set of features for remote audio recording and monitoring, allowing audio and sound professionals to undertake tasks common in the audio post-production industry —overdubs, ADR, and voice-over— from anywhere in the world over a decent internet connection, integrated into their digital audio workstations.

Source-Connect works as an application, and it does not require complex digital audio workstation setups. It allows audio and sound professionals to work directly in the DAW of their preference, which ultimately lets them harness the full set of features the application comes with.

Besides, Source-Connect comes with built-in Pro Tools support and is also compatible with digital audio workstations that support VST plug-ins, including, but not limited to, Cubase, Nuendo, and Pyramix.

Audiomovers LISTENTO

Listento allows users to stream low-latency audio from a digital audio workstation (DAW) to a browser through the use of plug-ins. Imagine having a client who cannot physically visit your studio to listen and give you their insights on the final mix you’ve developed. By using Listento to play the mix directly from your workstation’s master track to the client’s browser, you eliminate that complication.

Listento still seems to be under development. One of the things the team is working on is a built-in chat to communicate with your client, allowing you to move away from third-party messengers such as Skype or Google Hangouts when discussing the intricacies of your mix.

Listento includes several transmission formats, such as:

  • PCM 16Bit

  • PCM 32Bit

  • AAC 128Kb

  • AAC 192Kb

  • AAC 256Kb (MacOS only)

  • AAC 320Kb (MacOS only)

Additionally, Listento is a free plug-in; however, sound professionals and audio editors will be required to subscribe to Audiomovers in order to stream audio directly from their digital audio workstations. Luckily, the Audiomovers subscription tiers are quite affordable:

  • Weekly: $3.99

  • Monthly: $9.99

  • Yearly: $99.99

When sharing your files, sign in to your Audiomovers account to both send and receive the live stream. Then send your client a link, much as you would share a Google Sheets download link. And in case you’re still wondering whether to pay for one of the Audiomovers tiers, the software comes with a one-week free trial.

A final word on collaboration: the fourth industrial revolution has indeed brought many pieces of software and hardware that have made it possible for professionals and studios to collaborate. It is nonetheless just as important to nurture the collaborative spirit by being willing to work alongside other professionals in a given workflow. This, of course, demands a more proactive and receptive attitude towards collaboration; otherwise, by not considering other perspectives, the chances of developing and learning something new are lower.


Sound For Documentary

Since the emergence of a sheer array of affordable camera recorders, the rising prevalence of mobile phones with decent video cameras, and the ubiquity of social media channels such as YouTube as one of today’s major media diffusion channels, it has never been this easy to produce and subsequently share documentary videos. If we were to take a much closer look at the whole production process, though, it would be easy to assert that sound is the weakest part of many of these videos. Although it is relatively easy to shoot and record with a camera regardless of its quality, the art of placing a microphone, monitoring, and taking care of volume levels remains an ambiguous puzzle compared to the other components of shooting a video documentary.

In today’s post, we’re going to go through a general outline of practical techniques and an end-to-end guide to the primary tools for recording, editing, and mixing sound for documentary audiovisual projects. Whether you are using a mobile phone, a regular video camera, a D-SLR, a prosumer or a professional camcorder for shooting your project, sound will always be an important part of the storytelling.

There are many ways in which tremendously good results can be achieved with consumer gear in many different circumstances; nonetheless, professional gear comes with extra possibilities. Here are some fundamental concepts directors and documentary producers need to bear in mind every time they take on one of these projects.

Sound as a conveyor of emotions - Picture as a conveyor of information


Think of the scene in Psycho of a woman taking a shower in silence. Now add the famous dissonant violin notes, and you get a whole new experience. That leads us to consider the emotional impact of a project, in this case of one scene in particular. Sound conveys the emotional aspects of your documentary; it’s practically the soul of the picture. Paying special attention to sound, both during shooting and afterward in the studio, can make a real difference. Whether you’re planning a simple interview with plenty of dialogue or a richer sound design, the human voice is the differentiating factor between an amateur and a professional project.

Microphone placement and noise management are key

The main issue with the vast majority of amateur sound recordings is the excessive presence of ambient and environmental noise from all kinds of sources, and a low sound level relative to that noise. As a result, we’ve all seen how difficult it is to understand the dialogue, which is ultimately detrimental to the intended emotional impact. This common situation is one of the consequences of poor microphone placement. Directors and producers need to learn to listen to the recording and experiment with different microphones and different placement options. It all boils down to getting the microphone as close as practical to the intended sound, and as far away as possible from the extra noise that interacts in a negative way with the whole recording.
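To see why closing the distance matters so much, consider the inverse-square law: the level of a small sound source in open air drops by roughly 6 dB every time the mic-to-source distance doubles, while diffuse ambient noise barely changes. A minimal Python sketch (the distances are hypothetical) illustrates the arithmetic:

    import math

    def level_change_db(d_old, d_new):
        # Inverse-square law: level change when a mic moves from
        # d_old to d_new (same units) away from a small source.
        return 20 * math.log10(d_old / d_new)

    print(round(level_change_db(1.0, 0.5), 1))  # +6.0 dB: distance halved
    print(round(level_change_db(1.0, 2.0), 1))  # -6.0 dB: distance doubled

Every halving of the distance buys about 6 dB of signal over the unchanged background, which is exactly the signal-to-noise improvement the paragraph above is after.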

Additionally, if the documentary takes place outdoors, the chances of picking up unwanted wind noise are high, which is why the use of a windjammer to control wind noise is always a good idea. Regardless of whether you’re a professional or an amateur taking on a documentary project, with a little practice and research you can craft outstanding sound recordings, irrespective of whether you’re recording with professional gear or your mobile phone.

Monitor your recording

In order to craft a compelling and professional recording, you need to set recording levels properly first —not so soft that the sound gets lost in the noise floor; not so loud that you risk distortion. When recording, always monitor the sound you’re getting with professional headphones in order to avoid surprises in the edit. Digital recording devices cannot capture anything beyond full scale, so keep your levels below that limit —unless your camera or recording device has an automatic gain control that adjusts recording levels for you, the recording will sound hideous if you cross it.
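In the digital domain, “not too loud” means leaving headroom below 0 dBFS (digital full scale). Here’s a hedged sketch —the file name and thresholds are illustrative only— that reports the peak level of a WAV recording so you can check it before the edit:

    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("interview_take3.wav")  # hypothetical file
    samples = data.astype(np.float64)
    if data.dtype == np.int16:
        samples /= 32768.0  # scale 16-bit PCM into the -1.0..1.0 range

    peak = np.max(np.abs(samples))
    peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
    print(f"Peak: {peak_dbfs:.1f} dBFS")

    # Rule of thumb: keep peaks around -12 to -6 dBFS so unexpected
    # louder moments never reach 0 dBFS and clip.
    if peak_dbfs > -6.0:
        print("Running hot: consider lowering the recording level.")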

The shotgun myth

There seems to be a myth regarding microphones: some people firmly believe that a shotgun microphone reaches farther than other devices. This is not true. A shotgun microphone simply does not work like a telephoto lens. Sound, unlike light, travels in all directions. Of course, shotgun microphones work; they have their place, and they really come in handy in somewhat noisy environments, especially when you cannot be as close to the person talking as you’d like in an ideal scenario. That being said, shotgun microphones are far from performing magic. What they really do is respond to off-axis sound differently, with reduced level, null points, and coloration. Although they look impressive, plenty of sound professionals and directors choose other types of microphones for their documentary projects.


Mixing Audio For Beginners - Part 3

Here is the third installment of Mixing Audio For Beginners. If you’ve been following this compilation on the intricacies and basics of sound and audio post-production, we’re going to address further topics, picking up where we left off in the last post about mixing. Otherwise, we suggest you start right from the very beginning. So, without further ado, let’s continue.

Ambiance

We mentioned last time that when editing dialogues in a studio through ADR, it is no less than pivotal to create the right environment for recording new lines. Every time a sound professional is tasked with re-recording lines and additional dialogue in a studio, they always have to pay special attention to several aspects that, if overlooked, could ruin the pace of the scene. Each dialogue edit inevitably comes with several challenges, like the gaps in the background environmental sound.

There’s nothing more unpleasant than listening to audio or a soundtrack where the background ambiance doesn’t match the action going on from one scene to the next. This problem is highly common during ADR sessions, which is why, aside from helping the talent match the intensity each shot requires, sound professionals also need to edit the background sounds to fill any possible hole in order for the scene to feel homogeneous.

Ideally, the production sound crew captures room tone at each location so that, once production is finished, the audio post-production crew can replace dialogue and fill the resulting holes with that room tone. When no room tone was recorded, there are tools to recreate it from noise samples taken from the existing dialogue recordings; either way, this is one of the most common tasks under the umbrella of audio post-production.
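As a concrete illustration, here is a minimal numpy sketch of the fill itself —tiling a recorded room-tone sample across a gap with short equal-gain crossfades so the loop points don’t click. The function name and fade length are illustrative, not any particular tool’s method:

    import numpy as np

    def fill_with_room_tone(room_tone, gap_len, fade_len=480):
        # Tile a room-tone sample (1-D float array) across gap_len
        # samples, overlapping repeats by fade_len with a crossfade.
        ramp = np.linspace(0.0, 1.0, fade_len)
        chunk = room_tone.astype(np.float64).copy()
        chunk[:fade_len] *= ramp          # fade in the head
        chunk[-fade_len:] *= ramp[::-1]   # fade out the tail
        hop = len(chunk) - fade_len
        out = np.zeros(gap_len + len(chunk))  # scratch room past the end
        pos = 0
        while pos < gap_len:
            out[pos:pos + len(chunk)] += chunk
            pos += hop
        return out[:gap_len]

    # e.g. patch a 2-second hole at 48 kHz: fill_with_room_tone(tone, 96000)

In practice the patch would also be crossfaded into the surrounding production audio at both ends, which hides the faded edges of the first and last repeats.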

Sound Effects (SFX)


Whether coming across the perfect train collision sound in a library, creating dog footsteps on a Foley session, using synthesizers to craft a compelling spaceship pursuit, or just getting outside with the proper gear to record the sounds of nature, a sound effects session is the perfect opportunity for sound and audio professionals to get creative.

Sound effects libraries are a great source for small, even low-budget, audiovisual projects; however, you should definitely avoid relying on them in professional films. Some sounds are simply too recognizable, like the stock dolphin chirp you hear every single time a movie, ad, or TV show shows a dolphin. Major film and TV productions use dedicated teams to craft their own sound effects, which ultimately become as important as the music itself. Think about the lightsaber sounds in any Star Wars movie.

Beyond libraries, additional sounds can be created during a Foley session. Foley, as discussed in other articles, is the art of generating and crafting sounds in a special room full of, well, junk. This incredible assortment of materials allows foley artists to generate all kinds of sounds: slamming doors, footsteps on different types of surfaces, breaking glass, water splashes, etc. Moreover, foley artists recreate these sounds in real time, which is why it is normal to record several takes of the same sound in order to find the one that best fits the scene —they are shown the action on a large screen, and then use the materials they have at hand to provide the action with realistic sounds. Need the sound of an arm breaking? Twist some celery. Walking in the desert? Use your fists and a bowl of corn starch.

Music

Just as with sound effects libraries, when it comes to music, sound professionals have two choices based on their talks with production —they can either use a royalty-free music library, or they can, alongside music composers, create a score for the film entirely from scratch. Be that as it may, the director and producers are the ones who have the final say over what type of music to use in the project and, perhaps more importantly, where and when music is present throughout the moving images.

Sometimes video editors resort to creating music edits to make a scene more compelling. Other times, it’s up to sound professionals to make sure the music truly fits the beat and goes in accordance with what is happening. The trick is to make the accents coincide with the pace of the on-screen images as the director instructed, and to ensure the music starts and ends where and when it’s supposed to.

Mixing

Assembling all the elements mentioned in the first two parts of this mini guide and in this article into a DAW timeline, and balancing each track and each group of sounds into a homogeneous soundtrack, is perhaps where this fine art reaches its pinnacle. Depending on the size of the studio, it is possible to use more than one workstation, with different teams working simultaneously to balance the sheer array of sounds they’ve got to put in place.
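Underneath the craft, that balancing act is gain staging and summing. A simple sketch (stem names and fader values are hypothetical) of what a mix bus does with per-track levels expressed in dB:

    import numpy as np

    def db_to_gain(db):
        # A fader value in dB becomes a linear amplitude factor.
        return 10 ** (db / 20)

    def mix_bus(stems, faders_db):
        # stems: dict of equal-length float arrays; faders_db: dB per stem.
        out = np.zeros_like(next(iter(stems.values())))
        for name, audio in stems.items():
            out += audio * db_to_gain(faders_db[name])
        return out

    # Hypothetical balance: dialogue on top, music and effects tucked under.
    # mixdown = mix_bus({"dialogue": dx, "music": mx, "sfx": fx},
    #                   {"dialogue": 0.0, "music": -12.0, "sfx": -8.0})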


Mixing Audio For Beginners - Part 2

In the previous article, we mentioned the importance of establishing an intelligent workflow in your audio production process. As defined by the dictionary, the word workflow means “the sequence of processes through which a piece of work passes from its initial phase to total completion.” That definition maps neatly onto the audio post-production phases, so let’s see how they work in different types of productions.

Pre-Production

A pre-production meeting is the one that gets you together with the production principals —whether the production company, the director, or the advertising agency— before production starts. If you happen to be invited to this meeting, you can, of course, express your opinions to the production team, which might save them hours of effort. If they seem open to additional creative input, you could help develop the soundtrack at the concept phase. Your insights on the project can then also have an impact on the audio budget, which is always a positive thing. Remember: an hour of proper pre-production will spare you ten hours of possible setbacks.

Production

Makeup artists work their magic, services are consumed, lights are turned on, actors deliver their best performances, video is shot, audio is recorded, computers are used to animate action sequences, etc. —and pretty much the whole budget is spent during this phase.

Video Editing

Once the visuals have been recorded and created, the director works with the video editor in charge to pick the best footage and assemble the moving images in a way that tells a compelling story. Once the editing has been done, the audio editor or sound engineer will receive a finished version of the audiovisual project that, in theory, will not suffer further changes —that’s known as “picture lock.” This final version of the recorded footage can only be achieved once the deadlines have been met and the budget for those processes spent.

Creating The Audio Session - Importing Data

The video editor is responsible for passing onto audio professionals an AAF or an OMF export compiling all the audio edits and additional media so they can re-create, or create from scratch, their own audio edits. Once sound editors and audio professionals import the files, they will have a much clearer idea of what they’ve got to do.

At this point, audio editors also import the moving images and the edited video, making sure they are in sync with the audio from the aforementioned exports (AAF and OMF).

Spotting

During this phase, the director and/or the producer sit down with the audio professionals to tell them exactly what they want and, more importantly, where they want it. The entire film or video project is played so audio professionals can take notes regarding the dialogue, the sound effects, the score, the music, etc.

Dialogue

Dialogue is perhaps the most important part of the entire soundtrack. Experienced audio editors will always separate dialogue edits into different tracks, one for each actor. When audio is recorded on location, the person responsible for recording those tracks often captures two different tracks for each actor —a clip-on mic and the boom mic. Once in the studio, the audio professional assesses both tracks and chooses the one that sounds best and is most consistent throughout the entire length of the moving images.

When coming across noise on the dialogue tracks, a common technique sound editors employ is to use noise reduction tools or similar software to repair the audio without compromising the final mix.
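Dedicated tools (iZotope RX and the like) are far more sophisticated, but the core idea behind many of them —spectral gating— fits in a short sketch: estimate the noise floor per frequency from a dialogue-free stretch, then attenuate any time-frequency bin that doesn’t rise well above it. The thresholds below are made up for illustration:

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_gate(audio, noise_clip, rate, threshold_db=6.0):
        # Per-frequency noise floor, measured on a noise-only clip.
        _, _, noise_spec = stft(noise_clip, rate, nperseg=1024)
        floor = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

        _, _, spec = stft(audio, rate, nperseg=1024)
        # Keep bins well above the floor; duck the rest by ~20 dB.
        keep = np.abs(spec) > floor * 10 ** (threshold_db / 20)
        gated = np.where(keep, spec, spec * 0.1)
        _, cleaned = istft(gated, rate, nperseg=1024)
        return cleaned

Real dialogue tools add smoothing over time and frequency so the gating doesn’t produce watery artifacts, but a quick pass like this is often how an editor judges whether a track can be saved or has to go to ADR.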

ADR

Just in case you don’t know what ADR means, we’ve covered it before in previous posts.


If, after applying the techniques mentioned above, the audio cannot be repaired through the use of noise reduction software, audio professionals resort to performing ADR.

ADR means having the actors and the talent go to the studio to carry out several tasks, such as:

  • Replace missing audio lines

  • Replace dialogue that couldn’t be saved

  • Provide additional dialogue in case of further plot edits

Actors have their scenes projected so they can recreate their lines. Normally, a cue is used to make sure they record in sync with what’s going on in the film. They also do four or five takes in a row, since the scenes are projected in a loop over and over (hence the word looping). The sound editor or audio professional then picks the best line and the best performance and replaces the original noisy or damaged take with the newer version. In order to match the intended ambiance, sound editors may use the same mic as the original take, but they will likely have to apply further equalization, compression, and reverb to make the new performance match the timbre of the original.


Mixing Audio For Beginners - Part 1

Have you ever wondered why your favorite films or TV shows sound so good? Or why TV ads and commercials are sometimes so much louder than the films and TV series around them? Or why that internet video you like sounds so bad?

In this mini-guide, we want to go through the intricacies commonly associated with the creation of sound, audio, and soundtracks for both video and film. Crafting and mixing audio for film and video is a deep subject; covering all the basics would take hundreds of pages, given the constantly changing nature of this business and the technology involved.

This first part covers basic aspects, a bit of background, and some terminology, and will hopefully serve as a clear guide to understanding what mixing audio for video and moving images is about.

The World Of Audio For Video

Back in the last century, recording engineers often faced a daunting dichotomy: they had to make a career choice between producing music or producing sound and audio for visuals and moving images, such as TV series, ads, and films. Since those career paths were considered specialized assignments, they demanded specialized tools to get everything done.

The arrival of computerized digital audio systems in the late 80s made it possible, and definitely much easier, to use the exact same recording tools to produce and edit both music and soundtracks. If you’ve had any experience with audio post-production, tools and systems such as AVID, NED PostPro, and early Pro Tools might ring a bell. That era marked the beginning of a new dynamism where terms such as convergence —where the worlds of audio and video production intertwine— started to become popular. As a result, many engineers learned to do audio post-production sessions during the day and music sessions at night.

Be that as it may, the process has undoubtedly evolved throughout the years, and the modern and contemporary process of audio post-production has changed more than ever before.

Types Of Audio Post Production

In order to discuss the types of audio post-production, we need to start by making a necessary distinction between audio for picture and other types of soundtracks, like radio commercials, audiobooks, or the well-known podcast. Though a lot falls under the umbrella of audio post-production, by audio post-production we commonly mean audio especially crafted for a moving image or visual component. Here are the most traditional forms:

Television

TV shows can be practically any length, but the vast majority of US TV programs are intended to last between 30 and 60 minutes. Many are produced by highly qualified and experienced TV studios in Los Angeles. As for reality shows, although these can be shot and recorded anywhere, they also require a good and experienced audio post-production team to mix both audio and video in a professional fashion.

Film


Films vary in their nature. Short films can span just a few minutes, whereas longer films can last several hours. This category includes today’s productions for Netflix, HBO, and Amazon, as well as the traditional major studios. When talking about film, it is also important to mention the financial aspect: independent filmmakers, known for producing small- to no-budget projects, still require an important dose of audio post-production. In fact, many sound engineers are fond of taking on these projects, as they serve as the perfect opportunity to get some training prior to taking the big leap.

Commercials

Commercials include several types of visual projects. The term often refers to TV commercials, infomercials, ads, promos, political ads, etc. These are known for their rather short format —today, it is possible to come across commercials ranging from 5 to 60 seconds in length. There are, of course, much longer commercials; however, it is rather expensive to buy airtime for anything longer than sixty seconds.

Video games

Video games are extremely fun, and crafting audio for video games is even more fun. The vast majority of top-quality games, also known as AAA games, have a dedicated audio post-production team behind them, responsible for creating and capturing the sounds that will be included in the game. This work is unique to every single game and demands a daunting amount of effort, often requiring hundreds of audio files —the game will need soundtracks in different languages, which ultimately increases the number of files the audio team must manage.

Audio Workflow

The process through which a piece of audio work passes from initiation to completion is known as a workflow. And although we will get into more detail in a subsequent post, a traditional audio workflow comprises the following stages: pre-production, production, video editing, data import, spotting, dialogue, ADR, ambiance, sound effects, music, mixing, delivery, and summary.


ADR: Tips And Tricks

Automated Dialogue Replacement, or ADR —also known as Additional Dialogue Recording— is an essential part of every audiovisual project, and knowing its intricacies is key to becoming a proper filmmaker. ADR is basically a method of adding dialogue to an already filmed scene. By superimposing dialogue recorded in a studio, or at least in an acoustically treated room or space, filmmakers can get past the challenges commonly associated with location dialogue. The problem with location dialogue is that it often turns out a bit hectic: environmental noises are too loud and difficult to mute, the equipment doesn’t work the way it’s supposed to, or the crew cannot get the right background noise.

When it comes to films, almost every contemporary Hollywood film has 50% to 70% ADR dialogue. ADR is no less than pivotal for the success of any film, and if executed the right way it can definitely salvage an entire film.

The Basics Of ADR

Before we get into more detail, there are several elements associated with ADR that filmmakers must bear in mind so they can plan and set up their recordings properly. In looping, playback of a repeating section of the project is given to the talent while new voices and dialogue are recorded over it. There are two different types of looping: audio looping and visual looping. With visual looping, an actor listens to the location take several times to understand the nature of that particular scene and get a feel for the situation prior to recording the new dialogue. Once they’re ready to record, they no longer hear the location take but watch the scene to match lip synchronization. They always hear themselves over the monitors, so they can hear the lines they’re delivering in real time.

Audio looping, on the other hand, will traditionally produce the most desirable outcome; however, it is normally more time-consuming. The session is carried out the same way as visual looping, except the video monitor is cut and the actor hears the original dialogue track. The vast majority of ADR engineers are fond of using both techniques simultaneously. They break the looped lines into much smaller parts so they don’t lose consistency and synchronization. As for sync, to help actors start a line on time, ADR engineers record three beeps exactly one second apart, so the actor knows when the first word starts —an audio cue that works like a metronome, letting the actor come in at the right moment, in the proper rhythm of the line being recorded.
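To make the cue concrete, here’s a small sketch that renders that three-beep count-in —one short beep per second, with the line starting where a fourth beep would have sounded. The pitch and beep length are arbitrary choices, not a standard:

    import numpy as np
    from scipy.io import wavfile

    RATE = 48000
    BEEP_HZ = 1000   # cue-tone pitch; varies from stage to stage
    BEEP_SEC = 0.08  # an 80 ms blip

    def adr_count_in():
        cue = np.zeros(3 * RATE)  # beeps land at 0 s, 1 s and 2 s
        t = np.arange(int(BEEP_SEC * RATE)) / RATE
        beep = 0.5 * np.sin(2 * np.pi * BEEP_HZ * t)
        for sec in range(3):
            cue[sec * RATE:sec * RATE + len(beep)] = beep
        return cue  # the line starts at 3 s, on the silent "fourth" beep

    wavfile.write("adr_cue.wav", RATE, adr_count_in().astype(np.float32))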

An ADR Recording Space

In sound and audio post-production, filmmakers have far more control over audio than they do when recording on location. The basic goal of any audiovisual project is to provide the audience with an experience, and audio is no exception. When it comes to ADR, the main idea is to capture a really clear and clean recording in an acoustically treated space, so ADR engineers can then fit the dialogue into the scene with proper equalization.

ADR Equipment And Gear


When recording ADR in an acoustically treated space such as an audio post-production studio, sound engineers and ADR professionals often try to use the same microphone the filmmaking unit used on location to capture the original dialogue. The goal of ADR is to convincingly match the new lines to the location lines in both tonal characteristics and frequency response. Since microphones have different polar patterns and frequency responses that yield different tonal nuances, it’s important not only to use the exact same microphone —or at least a similar one— but also to place it properly so acoustic features don’t get lost.

There are several digital audio workstations, such as Pro Tools, Ableton Live, Logic, etc., that can help ADR engineers loop their recordings according to their needs. Aside from microphones and software, ADR demands other production gear. A basic ADR toolkit looks like this:

  • Microphones

  • Digital Audio Workstation

  • Headphones

  • Preamp or Interface

  • Video Monitor

Microphone Placement And Delivery

Mic placement depends heavily on what type of microphones are being used. It is key to maintain a certain distance between the mic and the actor or actress to provide the recording with realism. Also, some ADR engineers are fond of using filters when deemed necessary. How an actor or an actress delivers the line is also pivotal for the success of the recording, as it affects the delivery itself and the tone of the ADR recording. Some actors tend to replicate the same movements being projected in the moving images, as it aids them in creating the exact same mood the filmmaker wants for that specific scene.


6 Tricks For Foley Sound Effects

Foley artists are pivotal to any audiovisual project once it has been shot and edited, as they’re responsible for covering any missing sound. As described in a previous article, a crucial step in the audio post-production process is exactly what foley artists do: perform and create sound effects to match the moving images projected on the screen.

Common sound effects we hear in movies —footsteps, chewing, drinking, clothing movement, doors being opened, keys jingling, etc.— are created through a set of different recording techniques and materials. Foley is more than simply editing sounds manually; it is also more time-efficient, and it provides audiovisual projects with a much richer character and realism than cut-in library sounds. Whenever a foley artist can’t create a sound in the studio, sound designers and sound editors are always up for the task.

That being said, have you ever wondered about the best way to mimic or recreate the sound of a fight —fists going back and forth and hitting another body? Or how to recreate the sound of footsteps on a snowy road inside a recording studio? What’s the best way to mimic a sword fight? Here are some tips for coming up with foley sound effects:

HOUSEHOLD SOUNDS

Wooden Creaks And Floors

People stepping on creaking wood and squeaking floors appear in practically every film you’ve seen. Footsteps on old floors, or people walking across an old house porch, are perhaps among the most used scenes in films. Foley artists have at their disposal a sheer array of floors and objects to recreate these sounds. The advantage of using these accessories is that the sound —in this case, the creak or the squeak— can be controlled to some extent. Once foley artists have developed a proper technique, performing these creaks saves the picture a lot of time, as sound editors won’t need to edit every sound in Pro Tools.

Fire

Fire is another of those sounds that appears in the vast majority of films. Foley artists often resort to accessories such as cellophane, potato chip bags, and even steel wool. The most common technique for recreating fire is to scrunch up the accessory and then release it; the effect will be rather subtle, but when recorded with the mic up close, a convincing low-level fire sound can be achieved.

Cash


Money and stacks of cash have their own sounds as well. Traditionally, whenever a foley artist has to develop the sound of cash, they resort to an old deck of poker cards or book pages. The key to successfully achieving this sound is to use paper sources with flexible, softer textures. In fact, much of the time foley artists add actual bills in the middle of the paper roll, or on the top or bottom, so their fingers actually brush the bills’ surface, creating the sound of cash.

ANIMALS

Horses

Galloping horses are one of those sounds whose technique has remained practically untouched. Foley artists normally use coconut halves to recreate horse hooves —probably the most well-known foley accessory, thanks to Monty Python and The Holy Grail. Several foley artists suggest stuffing the half coconut with material such as fabric in order to get a more realistic sound. Then hit compact dirt, or whatever surface the horse is running on, with the stuffed coconut halves.

Bird Wings

Just as with horses, in order to achieve the sound of birds flapping their wings or taking off, foley artists normally resort to traditional accessories such as a vintage feather duster or gloves. It’s also important to experiment with different materials, and perhaps heavier textiles, to create a much thicker sound for larger species. An old feather duster can create a terrific effect if the foley artist finds a nice-sounding one and hits it against all kinds of surfaces and objects to create different sounds.

HUMANS

Inhaling A Cigarette


Ever wondered how films record the sound associated with a cigarette inhale? Foley artists often use saran wrap and other light materials to get this sound. By using saran wrap, you can get a sound similar to the fire sound mentioned above, only more subtle. It is produced the same way: compress and then release, but do it in a controlled manner so you don’t overdo it. Make sure to have the mic close enough to capture the desired level of subtlety; otherwise, you may end up with a totally different sound.


An Introduction To Decibels

What You Always Wanted To Know About Decibels

Many times in previous articles we’ve mentioned the word “decibel”. Of course, the world of sound and audio basically revolves around decibels. But what does the concept of a decibel really entail? Here is our view on decibels and how internalizing the concept can be useful if you work as a sound designer, a sound mixer, or anywhere within the audiovisual industry. So, first things first: when trying to define decibels, there’s no better way to put it than this: decibels are odd units, and there are at least three main reasons why:

Decibels Are A Logarithmic Unit

When it comes to unveiling the intricacies of the decibel, we first need to mention one of its aspects: a decibel is a logarithmic unit. Our minds are not traditionally fond of logarithmic units, mostly because we’ve become accustomed to dealing with other types of units, such as distances or weights, which are present in our everyday lives. Nonetheless, the concept of a logarithmic unit is highly useful, especially when we want to represent a sheer array of different figures or values.

If we were to multiply a value by ten over and over, we would see the resulting figure get incredibly huge on a traditional linear scale, while on a logarithmic scale each multiplication advances the value by just one equal step. The reason behind this difference is that, while a linear scale is based on addition, a logarithmic scale is based on multiplication: starting from 1, five multiplications by 10 take us to 100,000, yet on a log10 scale that whole journey spans only five units. That is really convenient whenever we want to get the full picture of a set of data ranging from dozens to millions.

Some units simply work fine on the regular linear scale, as we normally move within a rather small range of figures. That’s why it’s easy for us to measure the distance between cities; but what if we wanted to measure distances across the galaxy? (Assuming, of course, that we were such an advanced civilization that we’d found life on other planets.) If we were to use a linear scale to represent the distance between Los Angeles and Orion, we’d be looking at a figure like 1,200,000,000,000,000 km, which is undeniably a tough number to read; on a logarithmic scale, it would be just about 15.1 log10 km.

The logarithmic scale offers a solution to this issue, since it provides an easy-to-understand figure while covering several orders of magnitude. Like the distances in the example above, many natural phenomena span several orders of magnitude and can be expressed on a logarithmic scale: think of earthquakes, pH and, of course, sound and loudness. By using a logarithmic scale to measure and express such events, we get more workable models of nature.

Decibels Are A Comparative Unit

Having established that decibels are a logarithmic unit, we now have a way to scale and measure very different events, ranging from a simple whisper to a rocket take-off. Nevertheless, it’s not that simple. Every time we say something is 70dB, we are not, in reality, making a direct measurement —in fact, we are comparing two different values.

Decibels are the ratio between a specific measured value and a reference value. Simply put: decibels are a comparative unit. Stating that something is 30dB is as incomplete as saying that something is 30%. We need to specify the reference value we’re using —in other words, 30dB with respect to what? What kind of reference value can we use, then? That’s what brings us to the third and last dimension.
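Sound pressure level (dB SPL) is the classic example: the reference is 20 micropascals, roughly the threshold of human hearing, and because pressure is an amplitude-like quantity the logarithm is scaled by 20 (a power-like quantity would use 10 instead). A quick sketch, with an illustrative pressure value:

    import math

    P_REF = 20e-6  # reference pressure: 20 µPa, defined as 0 dB SPL

    def db_spl(pressure_pa):
        # Amplitude quantities use 20 * log10 of the ratio.
        return 20 * math.log10(pressure_pa / P_REF)

    print(round(db_spl(0.063), 1))  # ~70.0 dB SPL for a 0.063 Pa pressure

So “70dB” only means something once you know the reference: dB SPL, dBu, and dBFS each imply a different one, which is why the suffix matters.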

Decibels Are A Versatile Unit

Given that the vast majority of people associate decibels with sound, it’s easy to forget that the ratio being measured can involve any physical property. These properties may be associated with audio, like pressure or voltage, or may have little or nothing to do with audio, like reflectivity. Decibels are found across all industries, not only audio —take, for example, video, optics, or electronics. So, after laying out all this information, what’s a decibel? A decibel is a logarithmically expressed ratio between a pair of physical values.


Screaming In Outer Space

No matter how much Star Wars tries to convince us that sound energy can be conveyed in outer space, reality dictates otherwise. Sound energy requires a physical medium to travel through. When sound waves disturb such a medium, there are actual, measurable pressure alterations as the atoms move back and forth —the louder the sound, the more intense the alteration.

In Summary

A decibel is based on the logarithmic scale, which works very well for displaying a large range of values. It is also a comparative unit: it always expresses the ratio between a measured value and a reference value. Additionally, decibels can be used with physical properties other than sound pressure. Using well-chosen reference values keeps the numbers being managed meaningful.
