‘First Man’: The Moon, The Lion, and The Snake

During this year’s Academy Award ceremony, the nominees for best sound editing were ‘First Man’, ‘Bohemian Rhapsody’, ‘Black Panther’, ‘A Quiet Place’ and ‘Roma’. The award, as we already know, went to ‘Bohemian Rhapsody’; however, amongst the other nominees there are plenty of sound editing stories worth telling. Such is the case of Damien Chazelle’s ‘First Man’, the story of NASA’s mission to land on the moon, focusing on Neil Armstrong and the years 1961-1969.

For their work on Chazelle’s La La Land, sound editors Mildred Morgan and Ai-Ling Lee became the first female sound editing duo nominated for an Academy Award, and for their work on this year’s ‘First Man’ both were nominated again.

Sound played a major role in giving the film its intimate feel. It delivers a window into Neil Armstrong’s family life, and a no less intimate sense of the perils of space travel. For both sound editors, the film presented unique hurdles and challenges.

Telling a Story Through Sound

One of the most interesting things about this film, according to both sound editors, was that the director provided them at the very beginning with several animated sequences showing specific parts of the film so they could have an idea of what he had in mind in regards to sound. 

Those sequences included several key scenes from the film such as the opening scene with the X-15 and the Gemini 8 launch, which, if you watched the movie, were from the perspective of the astronauts, ignoring what was outside. Thus, the sound had to show the audience not only what they were watching, but also what they were supposed to feel. 

To achieve that, both Lee and Morgan focused on building an arc in which the sound moves from real and authentic sources, such as rocket launches and turbulence (recorded directly from simulator rides), to unrelated, even abstract, sounds: animal recordings such as lion roars and snake hisses that were subsequently pitched and manipulated to help develop the sense of danger and anxiety felt by the crew.
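To give a rough idea of why pitching a familiar sound makes it stop reading as its source, here is a minimal sketch in Python with NumPy. This is not the editors’ actual toolchain, and the names are made up for the example; it just shows the simplest form of pitch manipulation, resampling, which lowers the pitch and stretches the sound at the same time:

```python
import numpy as np

def pitch_shift_resample(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift by resampling: lowering the pitch also
    stretches the sound, which itself adds to the unnatural feel."""
    factor = 2 ** (semitones / 12)               # frequency ratio
    n_out = int(round(len(samples) / factor))    # new length
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(new_idx, old_idx, samples)

# Stand-in for a 'roar': a 440 Hz tone (slight phase offset added)
sr = 44100
t = np.arange(sr) / sr
roar = np.sin(2 * np.pi * 440 * t + 0.1)
low = pitch_shift_resample(roar, -12)   # an octave down, twice as long
```

Dedicated pitch shifters (phase vocoders, granular tools) can change pitch without changing duration, but the brute-force version above is enough to see why a pitched-down roar no longer sounds like a lion.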

Space vs. Earth

When it came to dialogue, Ai-Ling Lee and Mildred Morgan used a different approach for all earth-bound scenes. The film can be described as quiet, intimate, even personal; it feels as if you were watching a film enthusiast’s first audiovisual project. It was definitely hard to match that texture with what the moving images are showing, giving it a rather unpolished feel.

Normally, when sound editors edit dialogue on an audiovisual project, their initial mission is to clean it up as much as they can so people just hear dialogue lines (and not artifacts or any other sound of some sort). In ‘First Man’, the director clearly wanted the earth scenes and all family scenes to sound like a documentary, which is why you can easily spot the difference.

The Apollo 11 Launch Scene

The sound editing team mentioned in several interviews that during pre-production, one of the main matters of discussion was the Apollo 11 launch. In fact, Neil Armstrong’s sons, who actually witnessed the launch back in the 60s, met several times with director Damien Chazelle and told him that they had never heard the sound of the rocket properly recreated in previous films. That became a huge task for the sound editing team, as they were responsible for coming up with something as close as possible to the original launch.

Lee and Morgan went through the NASA archives in hopes of finding a recording of the launch, but because of how long ago the launch took place, the audio they came across wasn’t as good as they had expected. The team then reached out to SpaceX and was lucky enough to attend the launch of the Falcon Heavy. They were allowed to set several microphones directly on the launchpad, a quarter mile away, and three miles away, to capture the different layers and characteristics of the sound at different distances.

During the sound editing process, the duo ended up mixing some of the real crackles of the Apollo 11 launch from the NASA archives alongside their own recordings.

The Biggest Challenge: Neil Armstrong and Ryan Gosling

In one interview, both Morgan and Lee asserted that their biggest challenge on ‘First Man’ was dealing with the astronaut’s famous words upon landing on the moon: “That’s one small step for man, one giant leap for mankind.” The team was presented with Ryan Gosling’s performance of that line, and it was actually really similar to Armstrong’s; however, the film crew wanted it polished to the point where it sounded like it could be the real Neil Armstrong in 1969.

The team spent hours working on it: making sure Gosling’s rhythm was exactly the same as Armstrong’s, blending in some of the original static, and pitching some of Gosling’s words and syllables so they could sound as close as the director wanted.

*The images used on this post are taken from Pexels.com

Sound Behind the Scenes: Interstellar

Interstellar is definitely one of Christopher Nolan’s most adventurous and creative pieces of work, and when it comes to sound, the approach for such an experimental film was not an exception.

During an interview with The Hollywood Reporter back in 2014, the director described how he prefers to approach sound for his films. Nolan decided to approach this area in a highly impressionistic way, quite an unorthodox choice for a mainstream blockbuster such as Interstellar; however, five years after its debut, we can assert that it was the perfect approach for such an experimental film.

Nolan himself described the approach as creative and audacious. And if we take a further look into how the film’s sound was developed, we can assert that, compared to other filmmakers who have approached sound in a bold way, Nolan did a great job.

In previous articles we’ve mentioned the importance of sound when it comes to storytelling: many people, especially sound professionals and directors with a vast knowledge of sound, distance themselves from the idea that you can only achieve clarity through dialogue, especially the clarity of emotions and stories. And that is a really important takeaway.

If directors really want to make the most out of sound, they need to approach storytelling in a more holistic, layered way, using all the different elements at their disposal: moving images and sound.

You will probably remember some viewers complaining about the movie’s sound after its premiere on Nov 5, 2014, claiming they were unable to properly hear some key dialogue lines. That led to a myriad of conversations about whether the fault lay with the sound mix or with the sound systems in some theaters across the world. Nolan addressed these questions directly: he said the movie’s sound was exactly as he had initially envisioned it, and even praised theaters for presenting it properly.

Aside from his tremendous body of work, Nolan is renowned as a passionate believer that sound is as important as the moving images, which is why he is fond of hearing how his projects sound in actual theaters. During the same interview, Nolan said he traditionally visits up to seven different theaters across the world just to hear how a movie’s sound is performing.

As mentioned earlier, Interstellar caused people to question whether the film’s sound was right. Essentially, with films like these, it is possible to mix sound in an unconventional way, as they did. Of course, that can catch some individuals off guard, but people in general will appreciate the experience, which is what happened with Interstellar in the weeks after it premiered.

The Team Behind the Sounds

The movie’s sound is attributed to very tight teamwork amongst German composer Hans Zimmer, mixers Gary Rizzo and Gregg Landaker, and sound designer Richard King. According to the director himself, they made carefully considered creative decisions, and the movie is full of surprises sound-wise. In fact, there are several moments throughout the film where Nolan decided to use dialogue as a sound effect, which is why from time to time it is mixed slightly beneath the sound effect tracks and elements to emphasize how loud the encompassing sound actually is.

As an example, if you recall the film, there’s the scene in which Matthew McConaughey is driving through the cornfield, which is extremely loud and, to some extent, frightening (Nolan himself was riding in the back of the car while filming point-of-view shots). Nolan wanted the audience to experience first hand how chaotic the situation was by making them feel, through sound, all the turbulence that was going on.

Another example is when they are in the cockpit and you hear the creaking of the spacecraft. That’s actually a very scary sound, and it was loud enough for people to get immersed in the story and actually feel what space travel might be like. It was definitely all about emphasizing intimate elements.

The movie is definitely a case study on its own. Nolan also explained that sound designer Richard King managed to record high-quality sounds inside the truck during the scene mentioned above; he then decided to echo them later in the film, in one of the key spacecraft scenes, in hopes of making it more truthful to what astronauts experience and hear in real life.

Nolan also resorted to other elements to describe the different planets the protagonists visit throughout the film, not just with moving images but with sound as well. He stayed away from the traditional layering of sound elements and chose to delineate the planets based on recognizable sounds: the water planet is a lot of splashing, in contrast to the ice planet with its crunchy sound of glaciers.

How to Get High-Quality Sound on Every Budget

Perhaps one of the most important ingredients of a good audiovisual project is the quality of the sound; however, that quality always depends on another equally important factor: the budget.

There’s a rather old saying in the audiovisual and film industry that goes like this: great sound quality can compensate for poor footage, but great footage will never save poor sound. There’s just something that immediately gets your attention and ruins the whole I-love-to-watch-films experience when you come across poorly produced audio. 

Think of The Blair Witch Project, for example. Daniel Myrick and Eduardo Sánchez used cheap handheld cameras to record the scenes; however, you still felt engaged with the film and became immersed in it largely because of the sound: the high-quality sound. Regardless of whether you’re a savvy audio professional or an audio enthusiast, you’ve certainly come across projects that are visually excellent but where low-quality sound prevented you from fully engaging with the film.

If you’re trying to make your way up in the film industry, you have surely given some thought to the state of your finances. You may think you can’t afford top-quality sound-recording devices to record high-quality tracks, but in reality, you can. A good audio toolkit traditionally includes a shotgun mic, a portable audio recorder, and a lavalier mic; here are three options for all kinds of budgets.

The Beginner Budget (Or I have less than $100)

Let’s say you’re just starting out in the film industry, and you want to shoot a short film or a short documentary —but you don’t have any money (which is ok. Don’t worry). If that happens to be the case, then this is the right option for you. You can start by acquiring the following pieces of equipment:

Audio-Technica ATR-6550 Condenser Shotgun Microphone ($60)

This shotgun mic is perhaps the cheapest you will find, but you will still get much better sound out of it than from a handheld camera’s built-in audio. If you don’t have a boom attachment, you can buy a painter’s pole for less than US $20 and attach the mic to it.

Audio-Technica ATR3350iS Omnidirectional Condenser Lavalier Mic ($30)

If you’re shooting a short documentary, it is vital to have a lavalier mic to get consistent audio of the people you’re going to be interviewing. This mic is normally wired and connected to a smartphone for recording (yes, you can record audio with your smartphone.) In fact, you can use your own smartphone as your personal audio recorder. The lavalier mic can be plugged directly into the phone, and you can use your device’s native sound recording program to get all the tracks.

If your device doesn’t come with a built-in or a native sound-recording program, you can get Audio Recorder from the Play Store or the App Store. Nonetheless, remember this is the low-budget version of a basic sound recording toolkit so you will not be able to monitor audio whilst you’re recording it.

The Intermediate Budget (Or $350 is all I have)

For those with perhaps a bit more experience in this industry (and budget, of course) willing to get a little more serious about their filmmaking careers, the following options comprise a decent sound recording toolkit.

Zoom H4nSP 4-Channel Handy Recorder ($159)

This recorder is, in fact, quite handy. You can plug in both conventional audio jacks and XLR cables. Having a dedicated recorder is pivotal when running audio, especially since it comes with dials you can use to adjust levels on the fly, along with a headphone output so you can monitor your tracks.

Rode VideoMic Shotgun Condenser Microphone With Boom Pole ($150)

If you happen to have been working in this industry for quite some time, then you know that RØDE is a key player within this line of work, and pretty much every filmmaker usually gets one of these as their first pro or semi-pro mic. You can find several sets online that include the microphone and also the boom, the cable and the adapter for something around $150.

Fifine 20-Channel UHF Wireless Lavalier Lapel Microphone ($35)

This wireless lavalier kit will come in handy and will complement your audio recording kit just fine. It will also give your subject the freedom to walk around as you record their lines.

With the aforementioned pieces of equipment, you will be able to record your tracks with high-quality sound, which will allow the audio post-production team to make the most out of them, making your project look and sound more professional.

As mentioned in previous articles, taking care of audio tracks is perhaps the most crucial part during the sound post-production process, as audio and sound engineers will spend a lot of time refining and even saving those dialogue lines in order to make your project enticing for the audience.

How Sound Design Works: From Early Editing Stages to The Final Mix

Sound is actually half the picture, so it’s just as pivotal to pay attention to the sound editing phase whilst working on an audiovisual project.

One of the most important aspects a sound professional needs to focus on whilst editing a scene is trying to clean up the dialogue as much as possible by applying an EQ filter. As mentioned in other blog posts, there are many techniques that can help audio and sound professionals make the most out of the tracks they’re given during the post-production stage.

For example, if the tracks you’re working on happen to have a lot of hum in the background, you can tone down the low shelf and subsequently adjust the mid-range to turn the dialogue into a crisper version of what you received. Additionally, use volume keyframing to achieve more granular, detailed work and to reduce the number of sound distractions between one shot and the next.
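As a rough illustration of the hum problem, here is a hedged sketch in Python using SciPy. A plain high-pass filter stands in for a proper low-shelf EQ, the function name is made up for the example, and real dialogue cleanup happens in a DAW rather than in code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_hum(track, sr, cutoff_hz=120.0):
    """Attenuate low-frequency hum (mains hum sits at 50/60 Hz)
    while leaving the dialogue band largely untouched."""
    # 4th-order Butterworth high-pass, run forward and backward
    # (filtfilt) so the dialogue keeps its timing (zero phase shift).
    b, a = butter(4, cutoff_hz / (sr / 2), btype="highpass")
    return filtfilt(b, a, track)

sr = 48000
t = np.arange(sr) / sr
dialogue = np.sin(2 * np.pi * 1000 * t)   # stand-in for speech energy
hum = 0.5 * np.sin(2 * np.pi * 60 * t)    # 60 Hz mains hum
cleaned = remove_hum(dialogue + hum, sr)
```

The 60 Hz component comes out heavily attenuated while the 1 kHz “dialogue” passes essentially untouched, which is the same trade-off a low-shelf cut makes, just with a gentler slope.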

In short: the whole idea behind the sound post-production process is to create the perfect environment for the story that is being told through the moving images.

Once at least one sequence of moving images has been taken care of, begin to develop the storytelling by adding some atmospheric sounds and sound effects. Of course, this step highly depends on the nature of the story you’re working on, but normally the aforementioned advice can always be applied to some extent.

Atmospheric sound and sound effects will help you smooth out every cut, and they can definitely improve timing by compensating for what seems to be lacking in the motion picture. At this stage, it is normally a good idea to alternate between working on the moving images with the picture crew and working on the sound, until they meet somewhere in the middle of what the director is expecting.

Pro tip: always pay special attention to the overall pace of the moving images. Remember that sound and audio are a means to an end (creating, enhancing and improving the storytelling), and matching them to that pace will achieve a seamless transition between scenes.

But how do you create the perfect atmosphere and environment for a film? Many audio and sound professionals have an extensive sound effects library, and they constantly experiment with those effects, looking for the perfect atmosphere and environment with one single purpose in mind: to create a world off the back of the moving images.

Depending on the nature of every film, you can either create your own signature sounds or resort to your library; however, everything ultimately depends on what the director wants. If he or she wants a story set in an arid environment with no animals, trees, insects or other living species besides human beings, then as a sound professional you will need to create that atmosphere by including, for example, the sounds of different winds and sea waves to develop a compelling environment.

Think of the sea slowly getting louder and the wind gradually gaining speed. That, for example, could be used to add more intensity to a specific scene, and the audience will perceive it as credible and will be guided by their own emotions —which is ultimately the main goal.

Also, if the project includes a soundtrack, you need to be extra careful. Audio professionals traditionally wait until the cut is entirely finished before incorporating music. Let’s take a closer look: if a scene’s pace is adequate, then the music will fit in just right; however, you cannot compensate for inadequate pacing, or create a new pace from scratch, simply by adding music tracks. Either way, the result is far from what the film requires.

Another key area of sound design is sound effects. Normally, before the final mix, sound designers use all the sound effects placed during the editing stage as a reference guide.

A Practical Guide To Understanding The Audio Post-Production Process - Part 2

It’s not rare for audio and sound professionals to get requests to see if they can improve or even repair less-than-ok production sound. Sometimes a track that is full of environmental noise and distortion can be improved to a certain extent so that the audio can be used in the post-production process.

That being said, homemade footage that was shot next to the washing machine might be salvageable by applying noise reduction software.

Key Milestones For The Audio Post Production Process

1. Sending Video and Audio to the Sound House

The key during this stage, as an audio professional, is understanding how the information needs to travel from one point to another. You should provide your clients with details on how you want to receive the project. Those details traditionally include specs such as which video codec you prefer or where you would like the timecode window burn to be placed.

Also, it is important to always request footsteps, lips, and subtitles. Audio normally gets to the sound house in the form of either an OMF or an AAF export. The organization of the tracks and the audio handle length are also pivotal parts of the export, so make sure to include these in your specs as well. Depending on what kind of export a client provides, there are hurdles you’ll need to get past, so it is always a good idea to discuss these issues with your customers before they export their tracks.

Also, it is never a bad idea to suggest a test export before you receive the real thing. This will spare you endless nights working out drawbacks and bugs before the spotting session.

2. Spotting Session

The spotting session is when the whole team comes together to discuss the project. During this phase, editors normally focus on taking a myriad of notes whilst audio and sound professionals focus on the intricacies of sound design and client expectations, which circles back to the budget issue we discussed in the first part of this guide.

The spotting session is all about making suggestions. In fact, it is normal to see the dialog editor and the picture editor discussing mic alternatives and whether or not certain tracks can be used. Audio and sound post-production is all about the details, which is why this phase is also a good opportunity to discuss the specifics of the project and decide whether that car honking needs to stay or say goodbye.

Film producers and sound professionals often disagree about the vast majority of sounds, lines, and tracks, which is why the spotting session is so pivotal.

3. Editorial Work

Once the spotting session is over, audio and sound professionals take care of the project basically on their own; there is practically no room for anybody else in the making and designing of the audio. Editorial work normally requires help from the film crew (traditionally the producer or director) only when the dialog edit contains words in a foreign language or when specific sounds need to be designed.

However, it is your responsibility as an audio professional to get past these “hurdles”: most audio pros will bring in an individual fluent in that specific language to make sure nothing is missing or altered in the final dialog edit. Additionally, whenever the project requires a more complex sound design, they’ll also bring in the crew and the director to discuss that area in order to prevent it from sounding unrealistic in the final mix.

4. Premixing

Once the editorial work is concluded, the vast majority of sound and audio professionals are fond of performing a premix. The premixing process allows you as an audio pro to perform extra work that will end up enhancing the project’s final mix, making the audio sound spotless.

Matching the sound across all edits, which may have been captured with a whole array of different mics, needs to be done in addition to dialing in the loudness of all audio tracks. It is also normal to use the premixing stage to perform audio restoration and noise reduction.
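As a toy illustration of what “dialing in the loudness” means, here is a sketch in Python using only NumPy. The function name and levels are invented for the example, and real premixes use perceptual loudness standards such as ITU-R BS.1770 (LUFS) rather than plain RMS, so treat this as a simplification of the idea:

```python
import numpy as np

def match_loudness(clips, target_rms=0.1):
    """Scale each clip so its RMS level hits a common target,
    evening out jumps between takes recorded on different mics."""
    matched = []
    for clip in clips:
        rms = np.sqrt(np.mean(clip ** 2))
        gain = target_rms / rms if rms > 0 else 1.0
        matched.append(clip * gain)
    return matched

# Two takes of the same scene at very different levels
rng = np.random.default_rng(0)
quiet_take = 0.02 * rng.standard_normal(48000)  # lav mic, low level
hot_take = 0.4 * rng.standard_normal(48000)     # boom mic, running hot
a, b = match_loudness([quiet_take, hot_take])
```

After the pass, both takes sit at the same level, so the mixer can concentrate on balance and tone instead of constantly riding faders between cuts.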

5. The Final Mix

The final mix is when directors and film producers start to hear the film as it should sound. It is important to mention, however, that there may be different approaches to the whole sound production process, just like no two people cook a steak the same way. While there may be similarities in what this job is about, the inventiveness and approaches to creating the best sound for an audiovisual project vary from one audio professional to another.

8 Tips To Record High-Quality Video Voiceovers

Any video project needs to look professional. The vast majority of videos and films have the potential to impress audiences; what producers and directors need to do is match the moving images with amazing audio to add the finishing touch. Any video needs both elements, audio and good-quality moving images, to work properly. If either is lacking, the project is going to come across as amateurish.

Having said that, here are 8 tips for capturing and recording incredible video voice overs:

It’s All About The Location

And by location we mean: always strive to find the right spot. If you’re just getting started and can’t afford a professional studio, you will need to find the best spot in your home to record. Read your lines aloud in each room and listen carefully to find out which one is the most suitable. Listen out for any issues that can be easily fixed, such as reverb or dead sound.

The Popcorn And Seashell Test

Try to stay away from plosives and sibilance, fancy words for the popping and hissing noises that you frequently hear in recordings of words with ‘S’ or ‘P’ sounds in them. It’s therefore a good idea to invest in a pop filter: a shield that sits between the speaker and your mic. You can find a broad range of them priced between $10 and $25.

However, if you’re still considering whether you really need a pop filter, then try the popcorn and seashell test. Try listening to a recording of yourself saying words that begin with the aforementioned letters to see if you can hear any hissing or popping.

Get A Stand

Regardless of whether you’re close to or far away from your script (physically, of course), it’s likely that you will need to turn the page whilst recording your voiceover, damaging your recording with the noise of paper rustling.

Invest in a book stand to rest your script on and hold it still. Also, consider the number of breaks in your script and print it off so you can display each section without having to touch the paper or turn the page.

Listen Carefully

Recording your lines ‘deaf’ and hoping to fix any issues during the editing stage is a recipe for disaster. A good pair of headphones or monitors is just as pivotal as a high-quality microphone: you’ll pick up every detail you might otherwise have missed. You will be able to monitor the quality much more closely, and it’s likely going to be much easier to retake the recording than to edit out the imperfections at a later stage.

Don’t Force It

Although your script may read beautifully on the page, it could be a totally different thing once you read it aloud. Strive to keep it concise and easy to pronounce. Then make sure you practice reading it aloud a few times before giving it to the performers. That way you can spare yourself a more troublesome audio post-production phase.

The Power Of Apples

It’s been said that in order to get the best vocals from either you or your speaker, you will need to have your mouth slightly wet. However, if you are constantly taking short sips of water, then you will probably spend more time in the bathroom than in front of the microphone. A great way to counteract this is to keep a tart apple at hand. A bite will definitely clear the decks and the vocals will sound much clearer and neater.

Don’t Quit Just Like That

Have you ever heard the song Don’t Stop Believin’ by Journey? Simply because your voiceover performer is not delivering the performance you expected and has dissolved into a fit of giggles, or because the mic is not capturing the audio you envisioned, that doesn’t mean the whole project is bound to fail. Take a break, analyze what has gone well, try to establish what you did differently to get those good results, and apply it back to the tricky parts. Nobody said it would be easy.

And Don’t Forget To Save

The worst thing that could happen in this industry, once you’ve worked so hard on your project (creating the nuances, recording different lines, etc.), would be having to start everything again from scratch simply because you didn’t save your progress. Always make sure to hit the save button as you go. In fact, consider whether the size of your project requires an external hard drive so you don’t run out of space in the middle of your work.

A Practical Guide To Understanding The Audio Post-Production Process - Part 1

When talking about audio post-production, one cannot avoid one simple question: what’s the typical process most audio professionals go through when working either with a sound house or by themselves? In fact: is there a predetermined process? Everyone seems to work a little differently; however, there are certainly common steps and some common ground that you, as an audio enthusiast or professional, will encounter when working on an audiovisual project.

Finding Your Location

Today, any film’s soundtrack can be edited and mixed in all sorts of places and facilities. In fact, would it be too much to say that you could have someone do the whole sound job from a simple sound facility with a fistful of rooms and editors? The short answer: it depends.

There are many creative people working in this industry who started off using their own bedroom as their main editing and mix room. If the project you’re currently working on doesn’t require a surround sound mix and is simply going to be broadcast and heard online, then almost anyone with a basic setup and some experience would be able to take care of your needs.

Conversely, if your documentary is about the Second World War and you’re going to hear bullets passing over your head and missiles blowing up battlefields, then you’re going to need a much larger, professional room.

The process of finding the right sound facility isn’t different from finding people to work with. In this industry, aside from final pieces of work, which you can use to judge whether someone has skills, word of mouth seems to be the best source of candidates. If you’re seeking to put a team together, ask around. If you remember an audiovisual project where the sound was special, find out who was in charge.

When looking for sound professionals or an audio post-production studio, it is key to find a studio or group of people well known for their creative capabilities. Some might even say that there isn’t too much thought behind the sounds that are required for a film: “If you see a car, make sure I hear the engine once in a while.”

Working with a professional studio and a creative team will take your project to the next level, perhaps questioning the need for the engine sound, as described above, knowing that not hearing it will create another layer within the storytelling. It’s the job of the audio professional and mixer to present you with ideas and different approaches when appropriate. It should be part of a seamless chain of events and discussions and never an argument or a struggle.

In the long run, as a producer or executive, if you want an engine sound you should get your engine sound, but your audio professional might ask what kind of emotion you’re looking for. Your first contact with a studio should lead to a conversation about the storytelling, the story itself, the style, and the sound needs of your project. The studio should also look at a fine cut of your film; that way they will be able to anticipate how much work will be required to edit all the sounds, create the sound effects, record foley, etc.

This part of the process is mostly known for raising artistic and practical questions that will certainly help the studio, the team, and you envision a much clearer version of the final cut. For instance, do you want to hear the sound effects under silent footage? Your main audio professional should provide you with a sense of the quality of your production sound in order to determine where the budget should go.

Making The Most Out Of Your Budget

Budget, that famous and no less crucial word. As audio professionals, we always try to find out how much the filmmaking team has set aside for both sound prep and mix. This is merely done in hopes of developing a plan that fits your needs as a filmmaker. If the budget isn’t close to what a traditional job would cost, taking into account the project’s needs and running time, then it’s best to state that from the very beginning.

Budget for sound design

Money is, of course, an important factor to ponder and consider, which is why knowing beforehand how much the filmmaking team is able to invest is crucial for saving time. If, as a filmmaker, you’d rather not say what your budget is, normally a studio can quote a wide price range and explain the scope of each version of such a quote. Once the budget issues have been taken care of, the next step is to establish and get the project into the audio post pipeline.

*The images used on this post are taken from Pexels.com

The Importance Of Mastering

The whole idea behind the mastering stage in the audio and sound post-production process is to make audio sound the best it can across all platforms. Music, to cite just one example, has never been consumed across more platforms, formats, and devices than in the last decade.

Whether you’re recording and mixing at a million-dollar studio or working on soundtracks in your very own home studio, you will always need the final seal of approval that the mastering stage provides. That way, the resulting sound will be heard the way you, or your client, first envisioned it. A well-executed mastering job makes sound and audio consistent and balanced. Without this pivotal part of the audio post-production process, individual tracks might feel disjointed in relation to each other.

The Difference Between Mastering And Mixing

Although mixing and mastering share certain similarities, techniques, and tools, the two processes are often confused, when they are, in reality, different. Mixing traditionally refers to working with what the audio and sound post-production industry calls a “multi-track recording”, whereas mastering is the final touch: the final polish audio professionals apply to the whole mix. Let’s take a closer look:

Mixing

Mixing, as mentioned above, is all about getting all tracks and audio elements to work with each other. If we were talking about mixing a record, the mixing part would be getting individual instruments and voices to work as a song. It’s, essentially, making sure everything is in place.

Once the audio professional deems they have a good mix, it should then easily flow into the mastering process.

Mastering

Now that we have given the mixing process its proper context, think of mastering as the final touch. In fact, there’s no better analogy than to think about both mixing and mastering as a car: mixing would mean getting all the parts working together, and mastering would mean getting the best car wash ever. You certainly want your new car to look as shiny, slick and cool as possible.

Mastering takes a closer look at everything in the mix and makes it sound as it is supposed to sound.

To provide a bit of history on the mastering process, it is worth mentioning that in 1948 the first mastering engineers emerged with the birth of the magnetic tape recorder. Before this, there was practically no master copy, as records were recorded directly to 10-inch vinyl.

In 1957, stereo vinyl came out, and mastering engineers started to think of ways to make records sound a bit louder. At that time, loudness was an essential factor for better radio playback and, of course, much higher record sales. This marked the birth of the well-known loudness wars that continue to this day.

Fast forward to 1982: the compact disc brought a total revolution to the mastering process. Vinyl masters were swept aside by the digital era, although many of the analog tools remained the same. That finally changed in 1989, when the first digital audio workstation (DAW) and the first mastering software appeared, offering a high-end, mind-blowing alternative for the mastering process.

How Is Mastering Carried Out?

Mastering as a sub-phase of the audio and sound post-production process has its own complexity. Here are some of the most traditional techniques involved:

Audio Restoration

First, a mastering engineer or audio professional fixes any flaws in the original mix, like unwanted noises, clicks, pops, or hiccups. This step also corrects small mistakes or alterations that become noticeable when the un-mastered mix is amplified.
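As a toy illustration of the idea (a hedged sketch, not how dedicated restoration plug-ins actually work), a single-sample click can be detected as a value that jumps far away from both of its neighbours and repaired by interpolating across it:

```python
def declick(samples, jump=0.5):
    """Naive single-sample click repair: a sample that jumps more than
    'jump' away from BOTH neighbours is treated as a click and replaced
    by the average of those neighbours (linear interpolation)."""
    out = list(samples)
    for i in range(1, len(out) - 1):
        if abs(out[i] - out[i - 1]) > jump and abs(out[i] - out[i + 1]) > jump:
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return out
```

Real restoration tools model the signal spectrally and resynthesize the damaged span, but the detect-and-interpolate idea is the same.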

Stereo Enhancement

This technique deals with the spatial balance (left to right) of the audio being mastered. When done right, stereo enhancement allows audio professionals to widen the mix, which ultimately allows it to sound bigger and better. Stereo enhancement also helps to tighten the center image by focusing the low end.
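Under the hood, most wideners work on a mid/side decomposition of the stereo signal. A minimal pure-Python sketch (the function name and `width` parameter are illustrative, not any particular plug-in’s API):

```python
def ms_widen(left, right, width=1.5):
    """Mid/side stereo widening: width > 1.0 widens the image,
    width < 1.0 narrows it, and width = 0.0 collapses to mono."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0    # centre content, shared by both channels
        side = (l - r) / 2.0   # stereo difference content
        side *= width          # scale only the difference signal
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Conversely, rolling the side signal off below roughly 100–120 Hz is the usual way the low end gets focused into the center image.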

Equalization (EQ)

Equalization, or EQing, takes care of spectral imbalances and brings out the elements that are meant to stand out once the mix is amplified. An ideal master is, of course, well balanced and proportional, meaning that no specific frequency range sticks out. A well-balanced piece of audio should sound good on any platform or system.

Compression

Compression allows audio professionals and mastering engineers to correct and improve the dynamic range of the mix, keeping louder signals in check while making quieter parts stand out a little bit more. This allows the mix to reach the required level of uniformity.
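Conceptually (leaving out the attack and release smoothing that real compressors need), gain reduction above a threshold looks like this sketch:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Static downward compressor sketch: any level above the threshold
    is reduced so that 'ratio' dB of input above the threshold becomes
    1 dB of output above it. No attack/release envelope is modelled."""
    out = []
    for s in samples:
        level_db = 20.0 * math.log10(max(abs(s), 1e-9))
        if level_db > threshold_db:
            over = level_db - threshold_db
            gain_db = -(over - over / ratio)  # dB of gain reduction to apply
            s *= 10.0 ** (gain_db / 20.0)
        out.append(s)
    return out
```

With the quieter material untouched and the peaks pulled down, overall make-up gain can then lift the whole mix, which is what makes the quiet parts “stand out a little bit more”.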

Loudness

The last stage in the whole mastering process is normally using a special type of compressor called a limiter. This allows audio professionals to set appropriate overall loudness and create a peak ceiling, avoiding any possible clipping that otherwise would lead to distortion.
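The simplest possible illustration of a peak ceiling (real limiters use look-ahead and per-sample gain envelopes rather than one global gain, so this is only a sketch) is:

```python
def limit(samples, ceiling=0.89):
    """Toy peak limiter: if any sample exceeds the ceiling (~ -1 dBFS
    here), scale the whole buffer down so the loudest peak sits exactly
    at the ceiling, preventing digital clipping and its distortion."""
    peak = max(abs(s) for s in samples)
    if peak <= ceiling:
        return list(samples)
    gain = ceiling / peak
    return [s * gain for s in samples]
```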

5 Pro Tools Mixing Tips Ideal For Audio Post Production

Pro Tools is definitely regarded as one of the essential tools for audio post-production. Regardless of whether you work for a studio or you’re an audio enthusiast, you’ve certainly come across this software. But do you know how to use it to its fullest extent? Here are 5 go-to tips for a more professional mix.

Mixing audio for film, television, and ads is a widely misunderstood process, even for audio professionals with a high level of experience and understanding. There are many ways of approaching a mix to optimize the workflow and save time, get better results, or even leave your personal mark on an audiovisual project.

1. Use track groups to attain a reverb mix for all dialogues and effects tracks

Mixing reverbs one track at a time is often highly time-consuming, especially when all a scene needs is a minor tweak or a bit of reverb across every dialogue and effects track. By grouping those tracks, you can ride a shared reverb level for the whole group in a single move; more specific alterations can then be carried out track by track, depending on what the scene and the project need.

2. Use pink noise to fill out the low end in all background sound effects

Many audio professionals seem to agree that lots of background sound effects don’t possess a cinematic low end, which is essential to give a specific scene the life it needs. By adding pink noise you will be able to use background effects that possess the desired high- and mid-frequency content but might not be as good in terms of low frequencies.

Open an AudioSuite signal generator in Pro Tools and select the Pink Noise waveform. Afterward, make a selection of the same length as the scene you’re working on, on a stereo audio track. Make sure you click ‘render’ to create the desired pink noise audio file, then apply clip effects with a low-pass filter and a heavy boost of equalization.
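If you’d rather prototype outside Pro Tools, the classic Voss-McCartney algorithm approximates a 1/f (pink) spectrum in a few lines of plain Python (a generic sketch, not Pro Tools’ own signal generator):

```python
import random

def pink_noise(n, rows=16, seed=42):
    """Voss-McCartney pink noise: hold 'rows' random values and, on each
    sample, refresh only the row selected by the trailing zeros of the
    sample counter; low rows change often (high frequencies), high rows
    rarely (low frequencies). Their average approximates a 1/f spectrum."""
    rng = random.Random(seed)
    values = [rng.uniform(-1.0, 1.0) for _ in range(rows)]
    out = []
    for i in range(1, n + 1):
        row = ((i & -i).bit_length() - 1) % rows  # trailing zeros of i
        values[row] = rng.uniform(-1.0, 1.0)
        out.append(sum(values) / rows)
    return out
```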

3. Create distance between elements in the mix

The vast majority of dialogue and sound effects sometimes feel too close to each other in the mix. Adding reverb can help a bit; however, the elements often still feel close while the apparent size of the space grows. This method can change things for the better: add a 1-band EQ to an insert on the track you’re working with, or use clip effects. Then set a high shelf and drop it down with automation whenever a particular track requires more distance. Finally, use reverb freely.
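The effect of that automated high-shelf drop can be approximated with a one-pole low-pass blend (the coefficients here are illustrative assumptions, not matched to any Pro Tools EQ):

```python
def add_distance(samples, mix=0.5, alpha=0.2):
    """Crude 'distance' effect: blend the dry signal with a one-pole
    low-pass of itself, pulling down high frequencies the way a
    high-shelf cut does. mix=0.0 leaves the signal untouched,
    mix=1.0 is fully low-passed."""
    out, lp = [], 0.0
    for s in samples:
        lp += alpha * (s - lp)              # one-pole low-pass state
        out.append((1.0 - mix) * s + mix * lp)
    return out
```

Automating `mix` upward per line is roughly what riding the shelf gain down does on the console: the duller the top end, the farther away the source reads.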

4. Adopt a multi-step process to lower noise in dialogue tracks without affecting the original recording

Noise reduction will always be a pivotal part of every audio post-production process. In fact, it remains a contentious topic across the industry. Many audio and sound professionals don’t really know the extent to which it affects the dialogue portion of the signal, often because it has been heavily used and, perhaps, misunderstood.

Thankfully, by adopting a multi-step process you can end up with well-restored dialogue tracks and a much higher signal-to-noise ratio, without the unpleasant artifacts that a single heavy-handed pass tends to produce.

Start by inserting a high-pass filter with a soft slope in order to reduce low-frequency noise. Then, place a gate/expander on the desired dialogue track with a rather low ratio and a threshold slightly below the dialogue level. This should drop the noise down a little between dialogue lines. Apply very subtle live noise reduction, and layer a mono room tone into the scene.
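The gate/expander step in that chain can be sketched as a per-sample downward expander (real units smooth the gain over time; the threshold and floor values here are arbitrary assumptions):

```python
def soft_gate(samples, threshold=0.05, floor_gain=0.5):
    """Downward expander / soft gate: samples quieter than the threshold
    (the noise between dialogue lines) are attenuated by 'floor_gain'
    instead of being hard-muted, which would sound unnaturally abrupt."""
    return [s if abs(s) >= threshold else s * floor_gain
            for s in samples]
```

Keeping `floor_gain` well above zero is what makes the gating subtle: the noise drops, but the room never “disappears” between lines.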

5. Blend foley and ADR into the mix

As mentioned in other articles, foley and ADR are known for being difficult to blend well into a scene. A better-matched recording space, similar microphones, and a careful ADR/foley performance all go a long way toward a cleaner blend.

Thankfully, there are further steps you, as an audio professional or enthusiast, can take to smooth the transitions between looped lines and original lines, or between foley and existing dialogue tracks.

If the scene was shot in a reverberant space, you will very likely need to apply a mono impulse-response reverb on every ADR and foley track. Try to find impulses that match your locations as closely as possible.

If the ADR and foley still feel a bit detached from the general mix, experiment with the Pro Tools Lo-Fi plug-in to soften them a little more. The idea is to achieve a natural movement and flow across all tracks. Make sure you take care of any ADR or foley peaks before softening them with the plug-ins.

Creating Signature Sounds

Creating and designing signature sounds is a skill that is pivotal for establishing yourself as an audio professional. But how do high-quality professional sound designers actually come up with these kinds of sounds? It all starts with the nature of the project they work on.

The Project

There are many projects out there that require a wide array of signature sounds. Ranging from animated series to video games, signature sounds are as essential as any other sound element that makes it into the final cut. Audio professionals are normally brought into the process at a very early stage, especially on animated series, more specifically during the first animatic. For those not familiar with the term, an animatic is a crucial element of the animation industry: it is basically a video of storyboard panels timed to work in sync with the dialogue.

Adding sound elements and sound design to an animatic can serve a number of purposes: it can provide the animatic with the life it needs for the animation studio to better understand how to animate crucial moments. It also can provide executives with a much deeper and better understanding of the action when going through the whole animatic for final approval. And last but not least it can definitely set signature elements early enough so that sound elements can shed some light on the process the animators must follow.

The Process

First: Brainstorm About Aesthetics

Depending on the nature of the project, as an audio professional you first need to identify the key elements present in the series or audiovisual project. The idea, of course, is to come up with a way to incorporate sound elements so that they are nuanced and special.

When it comes to designing sounds for a project, it’s also important to consider the audience, as it would also help to determine what kind of soundscape the project needs. This step is crucial, for the audience needs to be familiar with the sounds they will be hearing, otherwise, storytelling might be affected. If the project is geared towards a much younger audience, for example, then creating sounds that are familiar to kids or preschoolers would be more suitable.

When it comes to crafting signature sounds, the set locations normally look very different from traditional sets: they look really high tech, which makes the process always entertaining. Since signature sounds are often used in audiovisual projects that rely on that same technology (animated series, video games, etc.), incorporating sound always depends on creating new sci-fi sounds for all the elements present in the project. This, of course, boosts creativity, as there are many challenges and hurdles to overcome.

Second: Separate Stand-Out Signature Sounds From The Rest

Although many would say that every sound that appears in projects like these is definitely a signature sound, there are several nuances to that assertion. Some also might think that it is definitely a waste of time to come up with a special sound for traditional sounds such as a door being opened or hand grabs. However, creating a whole new soundscape from scratch for all reusable elements can ensure not only stand-out sound design elements but also a solid signature aesthetic for the entire project.

Many audio professionals are fond of this idea, and they normally decide that the sounds for all the things in the main set should be signature: hand grabs, opening doors, furniture, mechanical elements, etc.

Third: Come Up With A Custom Recording List

When coming up with a custom recording list, it is always a good idea to brainstorm elements to record which might provide the overall aesthetic with more support. It is advisable to map out what to record for each signature element that will be present in the project, and for that, it’s advisable to think about the general aesthetic that needs to be achieved. What items help achieve the soundscape the project needs?

Focus on all the elements the project demands and try to create a checklist of what will be needed. And by that we mean: go outside (literally). Some audio professionals visit stores looking for interesting items that will help them create the sounds they need. Test every item out, paying special attention to how it sounds. In animation, for example, the best recordings are normally made from items that are different from what the audience sees on screen.

Audio professionals, whether they work on crafting signature sounds or not, always focus on one thing: achieving a certain texture and a certain sound. They don’t want to simply record the exact same element that will be shown on screen. Foley plays a vital role in this part of the process. When choosing items to record, it’s highly important to shut off the visual part of the process, as the brain always tells you to go for the obvious.

How Warner Brothers Ended Up Establishing Sound For The Film Industry

The sound industry was established after a rather curious chain of events. Back in 1919, three German inventors, Josef Engl, Joseph Massole, and Hans Vogt, patented the Tri-Ergon process, a process capable of transforming audio waves into electricity. It was initially used to imprint those waves onto film strips; when the film was played back, a light would shine through the audio strip, converting the light back into electricity and then into sound.

The real issue in all this, however, was the amplification of the sound, which would be tackled by an American inventor who played a pivotal role in the development of radio broadcasting: Dr. Lee de Forest. In 1906, de Forest invented and subsequently patented a device called the audion tube, an electronic device capable of taking a small signal and amplifying it. The audion tube was a key piece of technology for radio broadcasting and long-distance telephony.

In 1919, de Forest started to pay special attention to motion pictures. He realized his audion tube could help films attain a much better degree of amplification. Three years later, in 1922, de Forest took a gamble and designed his own system. He then opened the De Forest Phonofilm Company to produce a series of short sound films in New York City. The technology was well received, and by the middle of 1924, 34 theaters on the American East Coast had been wired for his sound system.

Yet the fact that a considerable number of theaters on the East Coast had acquired the De Forest system didn’t pique the interest of Hollywood. He had indeed offered the technology to industry leaders like Carl Laemmle of Universal Pictures and Adolph Zukor of Paramount Pictures; however, they initially saw no reason to complicate a solid and profitable film business by adding a feature as frivolous as sound. But one studio took a gamble: Warner Brothers.

Vitaphone

Vitaphone was a sound-on-disk technology created and patented by Western Electric and Bell Telephone Labs that used a series of 33⅓ rpm disks. When company officials attempted to get Hollywood’s attention in 1925, they faced the same disinterest that de Forest had, with the exception of one minor studio: Warner Brothers Pictures.

Courtesy of Richie Diesterheft at Flickr.com

In April of 1926, Warner Brothers decided to establish the Vitaphone Corporation with the financial aid of Goldman Sachs, leasing the disk technology from Western Electric for the sum of US $800,000. In the beginning, they intended to sub-lease it to other studios in hopes of expanding the business.

Warner Brothers never imagined this technology as a tool for producing talking pictures. Instead, they saw it as a way to synchronize musical scores to their own films. In order to showcase their new acquisition and the feature they had managed to add to their films, Warner Brothers staged a massive US $3,000,000 premiere at the Warner’s Theatre in New York City on August 6, 1926.

The feature film of this premiere was ‘Don Juan’, accompanied by a musical score performed by the New York Philharmonic. The whole project was an outstanding success; some critics even went on to praise it as the eighth wonder of the world, which ultimately led the studio to screen the film in several major American cities.

However, and despite the tremendous success, industry moguls weren’t too sure about spending money on developing the sound for the film industry. The entire economic structure of the film industry would necessarily have to be altered in order for it to adopt sound —new sound studios would have to be built, new expensive recording equipment would have to be installed, theatres would have to be wired for sound, and a standard sound system process would have to be defined.

Additionally, foreign sales would suffer a drastic drop. At that time, silent films were easily sold overseas; dialogue, however, was a different story. Dubbing into foreign languages was still a thing of the near future. Adopting sound would also affect the musicians employed in movie theatres, who would have to be laid off. For all these reasons, Hollywood basically hoped that sound would be a passing novelty. But five major studios decided to take action.

MGM, Paramount, Universal, First National, and Producers Distributing Corporation signed an agreement called the Big Five Agreement. They all agreed to adopt and develop a single sound system if one of the several attempts taking place alongside the Vitaphone should come to fruition. Meanwhile, Warner Brothers didn’t halt their Vitaphone investments.

Courtesy of Kathy Kimpel at Flickr.com

They announced that all of their 1927 pictures would be recorded and produced with a synchronized musical score. Finally, in April 1927, they built the first sound studio in the world. In May, production would begin on a film that would cement sound’s place in cinema: The Jazz Singer.

Originally, ‘The Jazz Singer’ was supposed to be a silent film with a synchronized Vitaphone musical score, but the protagonist, Al Jolson, improvised some lines halfway into the movie, lines that were recorded and could be heard by the audience. Warner Brothers liked them and left them in. The impact of having spoken lines was enormous: it marked the birth of what we know today as sound for the film industry.

An Introduction to Automated Dialogue Replacement (ADR)

When talking about audio and sound post-production, we cannot simply forget about automated dialogue replacement, or ADR: the process of re-recording dialogue in a studio to replace the lines that were recorded on set during the production of a film or an audiovisual project.

This can be done for a number of reasons. First, there may have been a technical problem with the location audio; for example, an airplane flew overhead during the best take, or maybe an actor wasn’t really on axis with the mic during another take.

In other cases, ADR is used to replace an actor’s vocal performance, which is especially done in musicals where a professional singer replaces an actor’s voice. Like when Marni Nixon would supply her singing voice to double over Marilyn Monroe, Audrey Hepburn, Deborah Kerr, and Natalie Wood.

Additionally, you may also have to ADR a scene to replace some of the words used in an audiovisual project to make a more television-friendly cut out of it. An example of this is Snakes on a Plane (2006), starring Samuel L. Jackson, where some of his lines were changed altogether, removing all of his swearwords, so that it could fit TV standards.

And sometimes ADR is used for creative purposes. Marlon Brando said once that he mumbled his way through his lines in The Godfather (1972) in order to force producers to ADR his scenes, although that process was known as looping at that time. During the ADR sessions, he was able to truly craft his performance based on the context of every scene and every situation.

In the world of low-budget and independent filmmaking, ADR is traditionally seen as some sort of boogeyman —something to be avoided at all costs. But it shouldn’t necessarily be. In fact, post-production sound, if carried out with purpose, can actually be a crucial tool for the low-budget filmmaker.

ADR in the Historical Context

At the beginning of the sound era, around the 1930s, there was no technology for recording sound separately from the moving images of a film. There was no way of dubbing audio, sound effects, or even music. When the studios set out to master sound, they brought in hundreds of radio broadcast and telephone engineers, many of whom had never shot a film in their lives.

As sound became the ‘new thing’, these engineers got more involved in the shooting process; however, their participation undid some of the stylistic advances silent films had gained in the late 1920s.

Selling sound then became a whole new line of business. Given the impossibility for film producers to have soundtracks separately from the moving images, Paramount took over Joinville Studios in France for the specific purpose of taking the same script of a film and then remaking it up to 12 or 13 times in different languages. They would keep the same sets, props, and costumes and then rotate the actors for each different language version of the script; however, it didn’t work out so well, and Paramount gave up on the idea of multi-language films.

By 1932, after giving up on remaking scripts with different actors to produce versions of a film in different languages, Paramount kept the technology of dubbing, just as post-synchronization was around the corner. By 1935, the position of supervising dubbing engineer held about the same rank as the film editor. By the late ’30s, most of the audio in a studio film was actually done in post-production.

This freed up directors from the confines of producing audio, allowing them to focus on the intricate art form that we enjoy today. When dialogue replacement was first introduced, each line had to be re-recorded using a loop of film that would play over and over again, often called looping. Modern technologies use computers to loop a specific section of a film so that actors can deliver the best of their performance.

ADR in Practice (For the Independent Filmmaker)

If you talk to most independent filmmakers they would all agree that ADR is, essentially, something evil. And yes, perhaps, if you’re on a tight budget, having one more unexpected expense, especially because of sloppy locations, is, by all means, a bad thing; however, there are several tricks that allow filmmakers to harness, to some extent, the benefits of ADR.

By using the same mics, the same mic placement, and recreating the environmental conditions of a scene in their digital audio workstations, audio professionals can use partial ADR. This ultimately allows them to achieve a much better take of a scene that wasn’t as good as it should have been because of factors the production couldn’t control during filming.

Oscar for Best Sound Mixing and Editing Explained

In this article, we’re going to look at perhaps the two most confusing Oscar categories: Sound Mixing and Sound Editing. If you’re not familiar with the sound and audio post-production landscape, these categories might seem like exactly the same thing; however, there are real differences, and that’s why we often see a movie nominated for both.

The big thing to remember about sound editing versus sound mixing is that sound editing refers to the gathering of all audio except for music. And what’s audio without music? Dialogue between characters, the sound picked up in whatever location a scene was shot, and sound recorded in the studio: ADR, extra lines of dialogue, all those crazy sounds created to mimic animals, vehicles, or environmental noises, the foley, etc.

Sound mixing, on the other hand, is balancing all the sound in the film or the movie. Imagine taking all of the music, all of the audio, all of the dialogue lines, all the sound effects, the sounds going around, etc., and combining them together so they are perceived as balanced and beautiful tracks.

Some people refer to this last category as an ‘audio tiramisu’, as there are layers upon layers of sound that, in the end, compose a beautifully orchestrated whole: layers of what’s happening in a film’s particular scene, in the real realm, and layers of what’s happening around it, as in the spiritual realm.

If you recall The Revenant, the American semi-biographical epic western directed by Alejandro G. Iñárritu that was nominated in several Academy Award categories, including both sound editing and sound mixing, the idea of a film’s sound being a total ‘audio tiramisu’ becomes more noticeable. In The Revenant, the sound was so perfectly crafted that it was as if two different stories were taking place side by side at the same time, and you could only distinguish between them by listening.

When it comes to sound editing, take for example another movie, Mad Max: Fury Road, the 2015 post-apocalyptic action film co-written, produced, and directed by George Miller. The movie contains all of these amazing and great recordings of cars, fire, explosions, the really subtle dialogue, which ultimately creates so much contrast between the action and what the characters were really saying. Max, played by Tom Hardy, was actually really quiet, whereas Imperator Furiosa, played by Charlize Theron, was screaming at the top of her lungs, and all of that happened in the middle of the most frenetic action possible. All the audio was used and mixed at the same time.

Using and mixing all that audio at the same time was, in reality, a huge achievement. Rumor has it they used up to 2,000 different channels, meaning 2,000 different audio pieces at one time, which is perfectly recognizable in the opening car chase sequence, where you can perceive how much sound is being used. The movie, in the end, managed to mix the dialogue, the quiet dialogue, the effects, the action, the environmental sounds, etc., and to make them all work together.

The Process Deconstructed

The relationship between sound editing, sound mixing, and storytelling, however, is perhaps the cornerstone of the whole audio post-production process. How sound design and mixing can be used to help storytelling, specifically in film, is the main question audio technicians strive to answer.

First, they approach both practices thinking how they can make the tracks sound better, and then how they can add to the story —make the audio tell the story, even if you don’t specifically see what’s going on. In terms of sound design, the whole idea behind this creative process is coming up with key takeaways regarding what is the purpose of the scene, or whether or not there are specific things that don’t appear in the moving images but still are ‘there’ and need to be told.

After analyzing the scenes in terms of what can be done to improve the general storytelling, audio technicians start to balance the dialogue track by track, which is, of course, a process that takes several hours. Is it necessary to add room tone? Is it necessary to remove it? Those types of questions normally arise during this part of the process. Afterward, the EQ work starts.

EQ is normally the part of the process where audio technicians do a bit of cleanup by adjusting the frequencies of the sounds the audience will hear so that they come through more clearly. This matters for storytelling because, with an equalizer, audio technicians can add texture to the voices and sounds people will hear, which is, of course, what storytelling is all about.

*The images used on this post are taken from Pexels.com


Oscar For Best Sound Editing: ‘Bohemian Rhapsody’ - How Freddie Mercury’s Voice Was Achieved

‘Bohemian Rhapsody’ was crowned with the Best Sound Editing award one week ago during the latest installment of the Academy Awards. Regardless of whether that was real life or just fantasy, the truth is the film took home several Oscar statuettes, including one for Best Sound Mixing as well.

When it comes to the film itself, actor Rami Malek delivered a really compelling performance as Queen frontman Freddie Mercury, especially when the character was singing —and that’s because the singing, for the most part, was actually “performed” by the real Freddie Mercury. Additional singing, especially for the non-live concert scenes, was performed by singer Marc Martel.

Those two voices together were responsible for Malek’s convincing portrayal of Mercury. Putting them together and then making them come out of Malek’s mouth required more than a simple, well-carried-out editing process. According to editor Nina Hartstone, editing the singing parts alone took open-heart-surgery-style precision.

In order for the production to recreate the concerts in the film, supervising sound and music editor John Warhurst had to do the unimaginable to capture Queen crowds stomping and clapping in unison for the iconic ‘We Will Rock You’ whilst singing along with the band’s famous Live Aid set, which is essentially the cornerstone of the storytelling in the film.

The duo, Hartstone and Warhurst, recently discussed in an interview how they were able to craft the incredible and massive Live Aid set, and the lengths they went to in order to recreate the recording sessions at Rockfield Farm. They also talked about capturing the vast majority of custom sounds, like the ones we hear during live performances, concert crowds, and extra material.

When asked whether Malek participated in the singing parts, Warhurst stated clearly that the vast majority of the singing is, of course, Freddie Mercury, as it seemed right to use his own voice to keep his spirit in the film. He also said that, when production was first putting together the script, every change also demanded a change in the vocals, which ultimately forced the sound editing crew to come up with ideas on how to edit them. Whenever they had a version of Freddie Mercury in a multi-track format (with the vocals on separate tracks), Warhurst and Hartstone would use it.

During the filming process, Hartstone asserted, in order to make it look like Malek was actually performing the songs the way Freddie Mercury did, the sound editing crew had to tell the actor that he would need to sing the material on set with loads of energy. Malek had to give everything he was capable of when it came to singing, which would have been manageable had it taken only as long as the band’s actual Live Aid set (20 minutes); however, that particular scene demanded a staggering two weeks from production and the sound editing crew to get right. Malek had to sing at the top of his capabilities for almost fifteen days, take after take.

From the Live Aid takes, the sound editing crew got a lot of Malek’s breaths and movement sounds. Those were combined with Mercury’s vocals, which, in the end, yielded Nina Hartstone an Oscar nomination and the prize, as she was solely responsible for that work. Hartstone said several times that she wanted to achieve the highest degree of realism. Aside from wanting it to look as if Malek were actually singing, Hartstone went on to use a lot of the actor’s breaths from his performances on set and also from subsequent material the duo recorded with him in between takes.

Breaths, efforts, lip sounds, and other tiny sounds were combined with both Martel and Mercury’s vocals, tying them into the final picture, finally making it look like it was actually Malek singing. Such efforts and display of resources allowed the movie to stand out amongst the other films in this category.

Throughout the film, there are plenty of scenes, aside from Live Aid, that also required the best of Malek, Warhurst, and Hartstone. For those scenes where there simply isn’t a recording of Freddie Mercury, like when he’s singing ‘Happy Birthday’ or ‘Love Of My Life’, the sound editing crew had to surgically mix three different voices: Mercury, Martel, and Malek.

The duo described mixing those voices as open-heart-style editing. Both Hartstone and Warhurst had to go deep into the waveforms to get the transitions to work, paying special attention to detail. As for the tools used, iZotope RX reportedly played a vital role in helping them get the EQs to match; however, even before using software and bringing other tools into it, the sound editing crew had to get the whole edit to feel right.

*The images used on this post are taken from Fox Movies Gallery

The Sound of An Oscar Nominee: A Star Is Born

Have you ever wondered what it takes to craft a compelling sound? What techniques and technologies have sound professionals used to hit the spotlight and be recognized by the industry? Now that the Oscars are around the corner, a lot of conversations start to arise, especially about the nominees.

In this installment, we’re gonna go through the sound of A Star Is Born, as the movie has been nominated for Best Sound Mixing. Steve Morrow, who later offered some behind-the-scenes insights into recording Lady Gaga and Bradley Cooper, was responsible for this part of the audio post-production process alongside Tom Ozanich, Dean Zupancic, and Jason Ruder.

In a recent interview, sound mixer Steve Morrow said that both Gaga and Cooper wanted the film to have a particular style of sound: they wanted it to sound as if it were a live concert, which makes sense given Morrow’s experience shooting at live concert venues like the Glastonbury festival. The request, however, ended up posing a real challenge: “At Glastonbury, we all went in there believing we had almost eight minutes to shoot, but we later found out the festival was actually running late, so they only gave us like three minutes,” Morrow said.

The sound mixing crew asserted later on that the idea was to film three songs but, given those circumstances, they decided to play 30 seconds of each. As for the sound mixing process, Morrow also mentioned that the idea at the very beginning was to capture all sounds live: all the performances, all the singing, etc., which ultimately turned into a Lady Gaga mini-show, as the music wasn’t amplified in the recording room.

Such conditions led Morrow to assert that his role on A Star Is Born differed a bit from a more typical production. On a normal set, it is the production’s responsibility to record lines of dialogue while filming all environmental or sound effects that would be happening at the same time during the filming process. During A Star Is Born, Morrow and the rest of the sound mixing crew had to do all that process whilst also recording the band and the live singing, making sure they had captured all the tracks.

After that, the team would hand those tracks to the editorial and the post-production crew. Sound people would then take all that information, mix it down accordingly, and that’s practically what you hear in the film. Nothing else.

As for the trickiest part of the film, filming the live concerts, Morrow took a rather unconventional approach to get those tracks. For the movie, the sound crew had to film twice at real concerts: Stagecoach and Glastonbury. The crew had to take advantage of the time between acts, and while Willie Nelson was awaiting his curtain call to come on stage, Morrow and the crew made the most of the eight minutes they initially had to get the tracks.

Image from http://www.astarisbornmovie.net/#/Gallery/

What they would do, according to the mixing crew (and this was ultimately different from all the other recordings they carried out in controlled spaces), is approach the monitor engineer with some equipment and take a feed from the monitor desk through the mic Bradley Cooper was supposed to use.

Most of the time, they would do a playback of the band through the wedge, the small speaker a performer stands in front of during live shows. Morrow and the rest of the mixing crew would then put those playback tracks through so that Bradley Cooper could hear them, but the crowd couldn’t, as they were standing far enough away from those speakers. So, in a nutshell, what they did to record the live concert scenes was have Bradley Cooper singing live whilst hearing a playback of the instruments through the wedges.

An additional challenge was making sure not to amplify any of those tracks and performances, as Warner Bros. didn’t want the music to be heard by the crowd, so as not to risk losing impact. Such demands forced the mixing crew to mute practically everything as much as they could, which was also different from the way film producers shoot in controlled locations.

Having a big crowd in front makes the process far more challenging: the whole crew (film, picture, sound, etc.) only has a few minutes to shoot, which increases the chances of not getting a lean and clean sound. In controlled scenarios, sound crews normally record up to ten different tracks, whereas in front of a live audience, they would need not only to prevent tracks from being heard but also to record the live audience for the desired effect.

Dialogue Editing and ADR With Gwen Whittle

If you recall the movies Tron Legacy and Avatar, they both, aside from having received Oscar nominations, have one name in common: Gwen Whittle. Gwen is perhaps one of the top supervising sound editors working today, which is why a lot can be learned from her work.

Gwen also did the sound supervision for both Tomorrowland (starring George Clooney and Hugh Laurie) and Jurassic World (starring Chris Pratt), and although she’s known for overseeing the whole sound editing process, she’s mentioned in several interviews that she’s highly fond of paying special attention to both dialogue editing and ADR sessions, as mentioned in previous articles by Enhanced Media in our blog.

Dialogue editing, as George Lucas mentioned back in 1999 just before Star Wars: Episode I hit theaters, is a crucial part of the whole sound editing landscape, and, apparently, even within this industry, nobody pays enough attention to it. In fact, dialogue editing is the most important part of the process.

So, what’s dialogue editing?

Dialogue editing, if it’s done really well, is, according to Gwen Whittle, unnoticeable: it’s completely invisible, it should not take you out of the movie, and you should pay no attention to it. Imagine going through all the sound from the set, take by take, just to get a much closer look at the dialogue captured for a specific scene.

Of course, not all dialogues recorded on the set sound the same —maybe the take was great, the acting was great, the light was great, but suddenly a truck was pulling over and an airplane happened to fly over the crew. It’s practically impossible to recreate that take as there are many aspects involved: air changes, foreign sounds, etc., and no matter how much you try to remove all those background noises, sometimes you need to resort to the ADR stage. In an ADR session, it all comes down to trying to recreate the same conditions that should apply to that particular scene.

Cutting dialogue often poses several challenges to sound editors, and it depends a great deal on the picture department. A dialogue editor receives all the production sound from the picture department, everything that was originally shot on set, with each mic on its own track. It’s the responsibility of the picture department to isolate each mic on its own track so dialogue editors can do their magic.

On set, the production sound mixer records anywhere from one microphone up to eight, sometimes more, but the idea is for each actor to have their own mic plus at least one or two booms. All of this is passed on to the dialogue editing crew with each track isolated, matched to the moving images just as the movie is supposed to play.

Once the dialogue editing crew has received the tracks, they listen to them and assess which parts can be used and which parts need to be recreated, organizing which tracks will make it to the next stage. Sometimes, since dialogues can be recorded using two different microphones such as the boom and the talent’s personal mic, sound editors can play with both tracks trying to make the most out of it whilst spotting which parts require an additional ADR session.

If there’s a noticeable sound, like a beep, behind someone’s voice, a dialogue editor can usually get rid of it if they need to; however, that’s not always the case, and ADR sessions are a familiar part of the sound editing process. In films with a smaller budget, the dialogue process gets a bit trickier, since the tracks normally aren’t passed on isolated to the dialogue editing crew, so they need to tackle any hurdle in the tracks they get. Low-budget films normally include more dialogue, as they don’t have the resources to afford fancy sets or fancy visual and sound effects.

Do directors hate ADR?

Well, according to Gwen Whittle, not many directors are fond of ADR. David Fincher, for example, is. ADR is a tool. A powerful tool. And if you’re not afraid to use it, you can really elevate your film because it takes away the things that are distracting you from what’s going on.

Actors and actresses like Meryl Streep love ADR sessions because it’s another chance to perform what they just did on set. They see ADR as the opportunity to go in there and try to put a different color on it, and it’s another way to approach what the picture crew got in a couple of takes on set. Many things can be fixed, and several lines can even be altered. You can add a different twist to something. In fact, even by adding a breath, you can change the nature of a performance. It’s the opportunity for both the talent and directors to hear what they really want to hear.

*The images used on this post are taken from Pexels.com

4 Services That Make Audio Post-Production Collaboration Seamless

Collaboration is not foreign when it comes to audio post-production. In fact, it is what gives studios constructive feedback, ideas, solutions and different perspectives to work on altogether, helping all parties involved produce better pieces of work.

Audio, sound, and video collaboration happens all the time. When it comes to audio and sound, for instance, it has never been so feasible to write a song with another individual on the other side of the world, or to hire a full orchestra or session musicians to record music for a score or original soundtrack.

In this post, we address some services and other software that make the whole collaboration workflow much easier, but more importantly, productive.

The Audio Hunt

The Audio Hunt is best known for being an online collaboration platform where hundreds of studio owners and audio professionals make their gear available for other colleagues to run their tracks through. How does it work? Imagine you want to run your mix through a specific piece of equipment. You will be required to, first, open an account, find the piece of hardware you want to use, start a chat with the vendor, book the job depending on the rate (rates and fees vary depending on what type of hardware/software you want to use), and, finally, wait for the service to be completed so you can download the files.

Pro Tools Cloud Collaboration

Not long ago, Avid introduced Cloud Collaboration for Pro Tools in version 12.5. This allows Pro Tools users to share parts of projects, or whole projects if necessary, with other Pro Tools users around the globe without even having to close the application. It’s a rather fancy system that integrates seamlessly across different Pro Tools versions.

Pro Tools Cloud Collaboration gets rid of the traditional audio post-production collaboration process, which involved exporting files out of the application and then sharing them on different cloud services for other collaborators and editors to receive. Now, versions 12.5 and above allow editors to collaborate with other Pro Tools users in a much quicker and simpler way.

Source Elements Source-Connect

In case you’re wondering what Source-Connect is: Source-Connect is what replaced ISDN. Conceived as an industry-standard replacement, it comes with a solid set of features for remote audio recording and monitoring, allowing audio and sound professionals to undertake tasks common in the audio post-production industry, such as overdubs, ADR, and voice-over, regardless of where in the world the session originates, over a decent internet connection integrated with their digital audio workstations.

Source-Connect works as an application, and it does not require complex digital audio workstation setups. It allows audio and sound professionals to work directly in the DAW of their preference, which ultimately lets them harness the full set of features the application comes with.

Besides, Source-Connect comes with built-in Pro Tools support and is also compatible with digital audio workstations that support VST plug-ins, including, but not limited to, Cubase, Nuendo, and Pyramix.

Audiomovers LISTENTO

Listento allows users to stream low-latency audio from a digital audio workstation (DAW) to a browser through the use of plug-ins. Imagine having a client who cannot physically visit your studio to listen and give you their insights on the final mix you’ve developed. By using Listento to play the mix directly from your workstation’s master track to the client’s browser, you eliminate that complication.

Listento still seems to be under development. One of the features reportedly in the works is a built-in chat for communicating with your client, allowing you to move away from third-party messengers such as Skype or Google Hangouts when discussing the intricacies of your mix.

Listento includes several transmission formats, such as:

  • PCM 16Bit

  • PCM 32Bit

  • AAC 128Kb

  • AAC 192Kb

  • AAC 256Kb (MacOS only)

  • AAC 320Kb (MacOS only)
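
Those formats trade quality for bandwidth, and the difference is easy to quantify. Here is a quick back-of-the-envelope sketch (our own arithmetic, not figures from Audiomovers):

```python
def stream_mb_per_minute(kilobits_per_second):
    """Megabytes of audio data streamed per minute at a given bitrate."""
    bytes_per_second = kilobits_per_second * 1000 / 8
    return bytes_per_second * 60 / 1_000_000

# Compressed AAC at 320 kb/s:
aac_320 = stream_mb_per_minute(320)      # 2.4 MB per minute of audio

# Uncompressed PCM, 16-bit stereo at 48 kHz:
pcm_kbps = 48_000 * 2 * 16 / 1000        # 1536 kb/s
pcm_16 = stream_mb_per_minute(pcm_kbps)  # 11.52 MB per minute
```

So a lossless 16-bit PCM stream needs nearly five times the bandwidth of the best AAC tier, which is why the PCM options matter most on fast, stable connections.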

Additionally, Listento is a free plug-in; however, in order to use it, sound professionals and audio editors are required to subscribe to Audiomovers so they can stream audio files directly from their digital audio workstations. Luckily, Audiomovers’ subscription tiers are quite affordable:

  • Weekly: $3.99

  • Monthly: $9.99

  • Yearly: $99.99

When sharing your files, sign in to your Audiomovers account to both send and receive the live stream. Send your client a link as if you were sharing a Google Sheets download link with them. And in case you’re still wondering whether you should pay for one of Audiomovers’ tiers of service, the software comes with a one-week free trial.

A final word on collaboration: the fourth industrial revolution has indeed brought many pieces of software and hardware that have made it possible for professionals and studios to collaborate. It is nonetheless just as important to nurture the collaborative spirit by being willing to work alongside other professionals in a given workflow. This, of course, demands a more proactive and receptive attitude toward collaboration; otherwise, by not considering other perspectives, the chances of developing and learning something new are lower.

*The images used on this post are taken from Pexels.com

Sound For Documentary

Since the emergence of a sheer array of affordable camera recorders, the rising prevalence of mobile phones with decent video cameras, and the ubiquity of social media channels such as YouTube as one of today’s major media diffusion channels, it has never been this easy to produce and subsequently share documentary videos. If we were to take a much closer look at the whole production process, it would be easy to assert that sound is the weakest part of many of these videos. Although it is relatively easy to shoot and record with a camera regardless of its quality, the art of placing a microphone, monitoring, and taking care of volume levels still remains an ambiguous puzzle compared to the other components involved in shooting a video documentary.

In today’s post, we’re going to go through a general outline of practical techniques and an end-to-end guide to the primary tools for recording, editing, and mixing sound for documentary audiovisual projects. Whether you are using a mobile phone, a regular video camera, a DSLR, or a prosumer or professional camcorder for shooting your project, sound will always be an important part of the storytelling.

There are many ways in which tremendously good results can be achieved with consumer gear in many different circumstances; nonetheless, professional gear comes with extra possibilities. Here are some fundamental concepts directors and documentary producers need to bear in mind every time they take on one of these projects.

Sound, as a conveyor of emotions - Picture, as a conveyor of information

Think of the scene in Psycho of a woman taking a shower in silence. Now add the famous dissonant violin notes, and you get a whole new experience. That leads us to consider the emotional impact of a project, in this case of one scene in particular. Sound conveys the emotional aspects of your documentary; it’s practically the soul of the picture. Paying special attention to sound, both during shooting and afterward in the studio, can make a real difference. Whether you’re planning a simple interview or a dialogue-heavy piece, a rich-sounding human voice is the differentiating factor between an amateur and a professional project.

Microphone placement and noise management are key

The main issue with the vast majority of amateur sound recordings is the excessive presence of ambient and environmental noises from all kinds of sources, and a low sound level relative to the ambient noise. As a result, we’ve all seen how difficult it is to understand the dialogues, which is ultimately detrimental to the intended emotional impact. This common situation is one of the consequences of poor microphone placement. Directors and producers need to learn to listen to the recording and experiment with different microphones and different placement options. It all boils down to getting the microphone as close as practical to the intended sound, and as far away as possible from the extra noise that interacts in a negative way with the whole recording.

Additionally, if the documentary takes place outdoors, the chances of getting unwanted wind noise are high, which is why the use of a windjammer to control wind noise is always a good idea. Regardless of whether you’re a professional or an amateur taking on a documentary audiovisual project, with a little bit of practice and research, you can craft outstanding sound recordings, irrespective of whether you’re recording with professional gear or your mobile phone.

Monitor your recording

In order to craft a compelling and professional recording, you need to properly set recording levels first: not too soft, so sound doesn’t get lost in the overall noise; not too loud, so you avoid possible distortion. When recording, always monitor the sound you’re getting with professional headphones in order to avoid surprises in the edit. With digital recording devices, it’s impossible to record anything beyond full scale, so refrain from crossing that limit; otherwise, the recording will sound hideous, unless your camera or recording device has an automatic gain control to adjust recording levels.
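
The “full scale” ceiling can be expressed in code. In digital audio, full scale corresponds to a sample value of 1.0 (0 dBFS), and a toy peak meter, sketched here purely for illustration rather than taken from any real recorder, looks like this:

```python
import math

def peak_dbfs(samples):
    """Peak level in dB relative to full scale (0 dBFS = sample value 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak)

def clips(samples):
    """True if any sample hits or exceeds full scale and will distort."""
    return any(abs(s) >= 1.0 for s in samples)
```

A sine wave with amplitude 0.5 peaks around -6 dBFS, a comfortable level with headroom to spare; anything that reaches 1.0 has clipped and will distort.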

The shotgun myth

There seems to be a myth regarding microphones. Apparently, some people firmly believe that a shotgun microphone reaches farther than other devices. This is not true. A shotgun microphone simply does not work like a telephoto lens. Sound, unlike light, travels in all directions. Of course, shotgun microphones work; they have their place, and they really come in handy in somewhat noisy environments, especially when you cannot be as close to the person doing the talking as you’d like in an ideal scenario. That being said, shotgun microphones are far from performing magic. What they really do is respond to sound differently in terms of reduced level, null points, and coloration. Although they look impressive, plenty of sound professionals and directors choose different types of microphones for their documentary projects.

*The images used on this post are taken from Pexels.com

Mixing Audio For Beginners - Part 3

Here is the third installment of Mixing Audio For Beginners. If you’ve been following this illuminating compilation of the basics and intricacies of sound and audio post-production, we’re gonna address further topics, taking it from where we left off in the last post about mixing. Otherwise, we suggest you start right from the very beginning. So, without further ado, let’s continue.

Ambiance

We mentioned last time that when editing dialogues in a studio through ADR, it is no less than pivotal to create the right environment for recording new lines. Every time a sound professional is tasked with re-recording lines and additional dialogue in a studio, they always have to pay special attention to several aspects that, if overlooked, could ruin the pace of the scene. Each dialogue edit inevitably comes with several challenges, like the gaps in the background environmental sound.

There’s nothing more unpleasant than listening to audio or a soundtrack where the background ambiance doesn’t match the action going on from one scene to the other. This phenomenon is highly common during ADR sessions, which is why, aside from helping the talent match the intensity each shot requires, sound professionals also need to edit the background sounds to fill any possible hole in order for the scene to feel homogenous.

The tricky part comes when the production sound crew captures room tone at a specific location and then, once production is finished, the audio post-production crew needs to replace dialogue and fill the holes with matching room tone. Fortunately, there are tools to recreate room tone based on noise samples taken from existing dialogue recordings; either way, it is one of the most common tasks under the umbrella of audio post-production.
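
As a toy illustration of the idea (not how any commercial tool actually works), a captured room-tone clip can be looped to fill a gap, crossfading each seam so the loop point doesn’t click:

```python
def fill_with_room_tone(tone, gap_length, fade=64):
    """Fill a gap of gap_length samples by looping a room-tone clip,
    linearly crossfading `fade` samples at every seam to hide the loop."""
    assert len(tone) > fade, "room-tone clip must be longer than the fade"
    out = list(tone)
    while len(out) < gap_length:
        # Blend the tail of what we have into the head of the next repeat.
        for i in range(fade):
            w = i / fade  # crossfade weight, 0 -> 1 across the seam
            out[-fade + i] = out[-fade + i] * (1 - w) + tone[i] * w
        out.extend(tone[fade:])
    return out[:gap_length]
```

Real tools work spectrally and can handle backgrounds that change over time, but this loop-and-crossfade trick is the gist of filling a hole with matching ambiance.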

Sound Effects (SFX)

Whether coming across the perfect train collision sound in a library, creating dog footsteps on a Foley session, using synthesizers to craft a compelling spaceship pursuit, or just getting outside with the proper gear to record the sounds of nature, a sound effects session is the perfect opportunity for sound and audio professionals to get creative.

Sound effects libraries are a great source for small, and even low-budget, audiovisual projects; however, you definitely should not rely on them in professional films. Some sounds are simply too recognizable, like the same dolphin chirp every single time a movie, ad, or TV show shows a dolphin. Major film and TV productions use dedicated teams to craft their own sound effects, which ultimately become as important as the music itself. Think about the lightsaber sounds in any Star Wars movie.

Additional sounds can then be created during a Foley session. Foley, as discussed in other articles, is the art of generating and crafting sounds in a special room full of, well, junk. This incredible assortment of materials allows Foley artists to generate all kinds of sounds, such as slamming doors, footsteps on different types of surfaces, breaking glass, water splashes, etc. Moreover, Foley artists recreate these sounds in real time, which is why it is normal to record several takes of the same sound in order to find the one that best fits the scene: they are shown the action on a large screen and then start using the materials they have at hand to provide the action with realistic sounds. Need the sound of an arm breaking? Twist some celery. Walking in the desert? Use your fists and a bowl of corn starch.

Music

Just like with sound effects libraries, when it comes to music, sound professionals have two choices based on their talks with production: they can either use a royalty-free music library, or they can, alongside music composers, create a score for the film entirely from scratch. Be that as it may, the director and production are the ones who have the final say over what type of music they want in the project and, perhaps more importantly, where and when music is present throughout the moving images.

Sometimes video editors resort to creating music edits to make a scene more compelling. Other times, it’s up to sound professionals to make sure the music truly fits into the beat and goes in accordance with what is happening. The trick is to make the accents coincide with the pace of the on-screen moving images as the director instructed, and that music starts and ends where and when it’s supposed to.
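Making accents land on the picture’s pace is, at bottom, simple arithmetic: at a given tempo each beat falls at a known time, and that time maps to a frame number at the project’s frame rate. A hypothetical helper (the name and numbers are ours, purely for illustration):

```python
def beat_frames(bpm, frame_rate, num_beats):
    """Frame numbers where each beat lands, for cutting or accent-matching.

    At 120 BPM a beat falls every 0.5 s; at 24 fps that's every 12 frames.
    """
    seconds_per_beat = 60.0 / bpm
    return [round(b * seconds_per_beat * frame_rate) for b in range(num_beats)]
```

At 120 BPM and 24 fps, beats land every 12 frames, so an accent meant for the fifth beat can be nudged to land exactly on frame 48 of the cut.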

Mixing

Assembling all the elements mentioned in the first two parts of this mini guide and this article into a DAW timeline and balancing each track and different group of sounds into a homogeneous soundtrack is perhaps where this fine art reaches its pinnacle. Depending on the size of the studio, it is possible to use more than one workstation and different teams working together simultaneously to balance the sheer array of sounds they’ve got to put in place.

*The images used on this post are taken from Pexels.com

Mixing Audio For Beginners - Part 2

In the previous article, we mentioned the importance of establishing an intelligent workflow in your audio production process. As defined by the dictionary, the word workflow means “the sequence of processes through which a piece of work passes from its initial phase to total completion.” That definition, of course, can be mapped onto the audio post-production workflow phases in order to see how they work in different types of productions.

Pre-Production

A pre-production meeting is the one that gets you together with the production officials, whether that’s the production company, the director, or the advertising agency, before production starts. If you happen to be invited to this meeting, you can, of course, express your opinions to the production team, which might save them hours of effort. If they seem open to receiving additional creative input, you could help develop the soundtrack at the concept phase. It also means your insights on the project can have an impact on the audio budget, which is always a positive thing. Remember: an hour of proper pre-production will spare you 10 hours of possible setbacks.

Production

Makeup artists work their magic, services are consumed, lights are turned on, actors deliver their best performances, video is shot, audio is recorded, computers are used to animate action sequences, etc., and pretty much the whole budget is spent during this phase.

Video Editing

Once the visuals have been recorded and created, the director works with the video editor to pick the best footage and assemble the moving images in a way that tells a compelling story. Once the edit is done, the audio editor or sound engineer receives a finished version of the audiovisual project that, in theory, will not undergo further changes; this is known as “picture lock.” That locked cut can only be delivered once the editing deadlines have been met and the budget for those processes has been spent.

Creating The Audio Session - Importing Data

The video editor is responsible for handing audio professionals an AAF (Advanced Authoring Format) or OMF (Open Media Framework) export that compiles all the audio edits and additional media, so they can re-create, or create from scratch, their own audio edits. Once sound editors and audio professionals import these files, they will have a much clearer idea of the work ahead of them.

At this point, audio editors also import the edited video, making sure the picture is in sync with the audio from the aforementioned AAF or OMF exports.
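The frame arithmetic behind such a sync check can be sketched in a few lines of Python. This is a hypothetical illustration, not part of any AAF or OMF library: it simply converts HH:MM:SS:FF timecodes to absolute frame counts so the offset between picture and audio can be measured.

```python
def timecode_to_frames(tc: str, fps: int = 24) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def sync_offset(video_tc: str, audio_tc: str, fps: int = 24) -> int:
    """Positive result means the audio starts later than the picture."""
    return timecode_to_frames(audio_tc, fps) - timecode_to_frames(video_tc, fps)

# A clip whose audio arrives one frame late at 24 fps:
offset = sync_offset("01:00:10:00", "01:00:10:01")
```

Real projects also have to handle drop-frame timecode and differing frame rates, which this sketch deliberately ignores.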

Spotting

During this phase, the director or the producer sits down with the audio professionals to tell them exactly what they want and, more importantly, where they want it. The entire film or video project is played through so the audio professionals can take notes on the dialogue, the sound effects, the score, the music, and so on.
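The output of a spotting session is essentially a cue list. As a rough sketch (the field names here are invented for illustration, not an industry format), those notes could be captured as simple records keyed by timecode:

```python
# Hypothetical spotting sheet: each cue records where a sound event belongs.
spotting_sheet = [
    {"timecode": "00:01:12:05", "type": "sfx", "note": "door slam, off-screen"},
    {"timecode": "00:02:30:00", "type": "music", "note": "score swell under dialogue"},
    {"timecode": "00:04:01:10", "type": "adr", "note": "line unclear, flag for ADR"},
]

# Pull every cue flagged for ADR so the re-recording session can be scheduled:
adr_cues = [cue for cue in spotting_sheet if cue["type"] == "adr"]
```

Keeping the notes structured like this makes it trivial to filter cues by type when each department starts its own pass.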

Dialogue

Dialogue is perhaps the most important part of the entire soundtrack. Experienced audio editors will always separate dialogue edits into different tracks, one per actor. When audio is recorded on location, the person responsible for recording it often captures two different tracks for each actor: a clip-on (lavalier) mic and a boom mic. Once in the studio, the audio professional assesses both tracks and chooses the one that sounds best and stays most consistent throughout the entire length of the picture.
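Choosing the steadier of the two mics often comes down to level consistency. Here is a minimal sketch, assuming the tracks are already available as plain lists of sample values; real tools compare audio files using perceptual loudness, not raw variance as shown here.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_consistency(samples, chunk=4):
    """Variance of block-by-block RMS levels; lower means a steadier take."""
    blocks = [samples[i:i + chunk] for i in range(0, len(samples), chunk)]
    levels = [rms(b) for b in blocks if b]
    mean = sum(levels) / len(levels)
    return sum((l - mean) ** 2 for l in levels) / len(levels)

# A steady boom take versus a lavalier take with clothing-rustle level jumps:
boom = [0.5] * 8
lav = [0.1] * 4 + [0.9] * 4
steadier = "boom" if level_consistency(boom) <= level_consistency(lav) else "lav"
```

In practice the editor's ears make the final call; a metric like this only flags which track deserves the first listen.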

When noise turns up on the dialogue tracks, a common technique sound editors employ is using noise-reduction tools or similar restoration software to repair that audio without compromising the final mix.
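Real noise-reduction plug-ins work spectrally, frequency band by frequency band, but the underlying idea can be caricatured with a simple amplitude gate. This is a deliberately crude sketch on plain sample values, not how any shipping restoration tool is implemented:

```python
def noise_gate(samples, threshold=0.05):
    """Zero out any sample whose magnitude falls below the threshold.
    Real restoration tools operate per frequency band, not per sample."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Low-level hiss between the louder dialogue samples gets muted:
cleaned = noise_gate([0.2, 0.01, -0.3, 0.004])
```

The trade-off is audible here too: set the threshold too high and quiet consonants disappear along with the noise, which is exactly why spectral tools exist.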

ADR

In case you don’t know what ADR (Automated Dialogue Replacement) means, we’ve covered it before in previous posts.


If, after applying the techniques mentioned above, the audio cannot be repaired with noise-reduction software, audio professionals resort to performing ADR.

ADR means bringing the actors and the talent back into the studio to carry out several tasks, such as:

  • Replacing missing audio lines

  • Replacing dialogue that couldn’t be saved

  • Providing additional dialogue in case of further plot edits.

The actors’ scenes are projected so they can re-record their lines. Normally, a cue is used to make sure they record in sync with what’s going on in the film. They also do four or five takes in a row, since the scenes are projected in a loop over and over (hence the older term “looping”). The sound editor or audio professional then picks the best line and the best performance and replaces the original noisy or damaged take with the newer version. To match the intended ambience, sound editors may use the same mic as the original take, but they will likely have to apply further equalization, compression, and reverb to make the new performance match the timbre of the location recording.
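One small, automatable piece of that matching process is level: the clean ADR take can be scaled so its overall RMS matches the on-set take before any EQ or reverb work begins. A hedged sketch on raw sample lists (real matching happens on audio files inside the DAW):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_level(adr_take, location_take):
    """Scale the ADR take so its RMS level equals the location take's.
    Timbre and ambience matching (EQ, reverb) still happen afterwards."""
    gain = rms(location_take) / rms(adr_take)
    return [s * gain for s in adr_take]

# An ADR take recorded at half the amplitude of the original line:
matched = match_level([0.25, -0.25], [0.5, -0.5])
```

Level matching first makes the later EQ and reverb decisions easier to judge, since the editor is no longer comparing takes at different loudness.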
