Viewing entries tagged: Sound Design

How Warner Brothers ended up establishing the sound for the film industry

The sound industry was established after no less than a curious chain of events. Back in 1919, three German inventors, Josef Engl, Joseph Massolle, and Hans Vogt, patented the Tri-Ergon process, a process capable of transforming sound waves into electricity. It was initially used to imprint those waves onto film strips: when played back, a light would shine through the audio strip, converting the light back into electricity and then into sound.

The real issue in all this, however, was the amplification of the sound, which would be tackled by an American inventor who played a pivotal role in the development of radio broadcasting, Dr. Lee de Forest. In 1906, de Forest invented and subsequently patented a device called the Audion tube, an electronic device capable of taking a small signal and amplifying it. The Audion tube was a key piece of technology for radio broadcasting and long-distance telephony.

In 1919, de Forest started to pay special attention to motion pictures. He realized his Audion tube could help films attain a much better degree of amplification. Three years later, in 1922, de Forest took a gamble and designed his own system. He then opened the ‘De Forest Phonofilm Company’ to produce a series of short sound films in New York City. The impact of his technology was well received, and by the middle of 1924, 34 theaters on the American East Coast had been wired for his sound system.

The fact that a considerable number of theaters on the East Coast had acquired de Forest's system didn't pique the interest of Hollywood. He had indeed offered the technology to industry leaders like Carl Laemmle of Universal Pictures and Adolph Zukor of Paramount Pictures; however, they initially saw no reason to complicate their solid and profitable film business by adding features as frivolous as sound. But one studio took a gamble: Warner Brothers.

Vitaphone

Vitaphone was a sound-on-disc technology created and patented by Western Electric and Bell Telephone Laboratories that used a series of 33⅓ rpm discs. When company officials attempted to get Hollywood's attention in 1925, they faced the same attitude of disinterest that de Forest had, except from one relatively minor studio: Warner Brothers Pictures.

Courtesy of Richie Diesterheft at Flickr.com

In April of 1926, Warner Brothers decided to establish the Vitaphone Corporation with the financial aid of Goldman Sachs, leasing the disc technology from Western Electric for the sum of US $800,000. In the beginning, they wanted to sub-lease it to other studios in hopes of expanding the business.

Warner Brothers never imagined this technology as a tool to produce and create talking pictures. Instead, they saw it as a tool to synchronize musical scores for their own films. In order to showcase their new acquisition and the feature they had managed to add to their films, Warner Brothers launched a massive US $3,000,000 premiere at the Warner's Theatre in New York City on August 6, 1926.

The feature film of this premiere was ‘Don Juan’. An amazing musical score performed by the New York Philharmonic accompanied the film, and the whole project was an outstanding success; some critics even went on to praise it as the eighth wonder of the world, which ultimately led the studio to screen the film in several major American cities.

However, despite the tremendous success, industry moguls weren't too sure about spending money on developing sound for the film industry. The entire economic structure of the film industry would have to be altered in order for it to adopt sound: new sound studios would have to be built, new expensive recording equipment would have to be installed, theaters would have to be wired for sound, and a standard sound system process would have to be defined.

Additionally, foreign sales would suffer a drastic drop. At that time, silent films were easily sold overseas. Dialogue, however, was a different story, and dubbing into foreign languages was still seen as something for the future. If studios were to adopt sound, it would also affect the musicians who found employment in movie theaters, as they would have to be laid off. For all these reasons, Hollywood basically hoped that sound would be a simple passing novelty. But five major studios decided to take action.

MGM, Paramount, Universal, First National, and Producers Distributing Corporation signed an agreement called the Big Five Agreement. They all agreed to adopt and develop a single sound system if one of the several attempts taking place alongside the Vitaphone should come to fruition. Meanwhile, Warner Brothers didn't halt their Vitaphone investments.

Courtesy of Kathy Kimpel at Flickr.com

They announced that all of their 1927 pictures would be recorded and produced with a synchronized musical score. Finally, in April 1927, they built the first sound studio in the world. In May, production would begin on a film that would cement sound’s place in cinema: The Jazz Singer.

Originally, ‘The Jazz Singer’ was supposed to be a silent film with a synchronized Vitaphone musical score, but the protagonist, Al Jolson, improvised some lines halfway into the movie, lines that were recorded and could be heard by the audience. Warner Brothers liked them and left them in. The impact of having spoken lines was enormous: it marked the birth of what we know today as the sound for the film industry.

Oscar for Best Sound Mixing and Editing Explained

In this article, we're going to be looking at perhaps the two most confusing Oscar categories: Sound Mixing and Sound Editing. If you're not familiar with the sound and audio post-production landscape, these categories might seem like exactly the same thing; however, there are certain differences, and that's why we often see a movie nominated for both.

The big thing to remember about sound editing and sound mixing is that sound editing refers to the recording and assembly of all audio except for music. And what's audio without music? Dialogue between characters, the sound picked up in whatever location a scene was shot, and also sound recorded in the studio: ADR, extra lines of dialogue, all those crazy sounds created to mimic animals, vehicles, and environmental noises, the Foley, etc.

Sound mixing, on the other hand, is balancing all the sound in the film. Imagine taking all of the music, all of the dialogue lines, all the sound effects, the sounds going on around the action, etc., and combining them so they are perceived as balanced, beautiful tracks.
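
To make that idea concrete, here is a minimal sketch of the balancing act in code. It assumes three mono stems already loaded as NumPy arrays of equal length and sample rate; the stem names and gain values are purely illustrative, not taken from any real session.

```python
import numpy as np

def mix_stems(stems: dict, gains_db: dict) -> np.ndarray:
    """Sum stems after applying a per-stem gain in dB, then guard against clipping."""
    mix = np.zeros_like(next(iter(stems.values())), dtype=np.float64)
    for name, audio in stems.items():
        mix += audio * 10 ** (gains_db[name] / 20.0)  # dB -> linear amplitude
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix /= peak  # keep the summed mix inside digital full scale
    return mix

# A hypothetical balance: dialogue on top, music and effects tucked underneath.
rng = np.random.default_rng(0)
stems = {name: rng.uniform(-0.3, 0.3, 48000) for name in ("dialogue", "music", "sfx")}
mix = mix_stems(stems, {"dialogue": 0.0, "music": -12.0, "sfx": -8.0})
```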

Some people refer to this last category as an ‘audio tiramisu’, as there are layers upon layers of sound that, in the end, compose a beautifully orchestrated whole: layers of what's happening in a film's particular scene, in the real realm, and layers of what's happening around it, as if in a spiritual realm.

If you recall The Revenant, the American semi-biographical epic western directed by Alejandro G. Iñárritu that was nominated in several Academy Awards categories, including both sound editing and sound mixing, the idea of a film's sound being a total ‘audio tiramisu’ becomes more noticeable. In The Revenant, the sound was so perfectly crafted that it was as if two different stories were taking place side by side at the same time, and you could only distinguish between them by listening.

When it comes to sound editing, take another movie, Mad Max: Fury Road, the 2015 post-apocalyptic action film co-written, produced, and directed by George Miller. The movie contains all of these amazing recordings of cars, fire, explosions, and really subtle dialogue, which ultimately creates enormous contrast between the action and what the characters are really saying. Max, played by Tom Hardy, was actually really quiet, whereas Imperator Furiosa, played by Charlize Theron, was screaming at the top of her lungs, and all of that happened in the middle of the most frenetic action possible. All the audio was used and mixed at the same time.

Having used and mixed all that audio at the same time was, in reality, a huge achievement. Rumor has it they used up to 2,000 different channels, meaning 2,000 different audio pieces playing at one time, which is perfectly recognizable in the opening car chase sequence, where you can perceive just how much sound was being used. The movie, in the end, managed to mix all the dialogue, the quiet dialogue, the effects, the action, the environmental sounds, etc., and to use it all together.

The Process Deconstructed

The relationship between sound design, sound mixing, and storytelling, however, is perhaps the cornerstone of the whole audio post-production process. How sound design and sound mixing can be used to help storytelling, specifically in film, is the main question that audio technicians strive to answer.


First, they approach both practices thinking about how they can make the tracks sound better, and then about how they can add to the story: make the audio tell the story, even if you don't specifically see what's going on. In terms of sound design, the whole idea behind this creative process is coming up with key takeaways regarding the purpose of the scene, and whether there are specific things that don't appear in the moving images but still are ‘there’ and need to be told.

After having analyzed the scenes in terms of what can be done to improve the general storytelling, audio technicians start to balance the dialogue track by track, which is, of course, a process that takes several hours. Is it necessary to add room tone? Is it necessary to remove it? Those types of questions normally arise during this part of the process. Afterward, the EQ part starts.

EQ is normally the part of the process where audio technicians do a little bit of cleanup by adjusting the frequencies of the sounds the audience will hear so they come through clearer and better. This is important in terms of storytelling because, by using an equalizer, audio technicians can add texture to the voices and sounds people will hear, which is of course what the whole storytelling is about.
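
As a rough illustration of that cleanup step, here is a small Python sketch using SciPy: a high-pass filter that rolls off low-frequency rumble below the dialogue band. The 80 Hz corner frequency is just a common starting point, not a rule.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_dialogue(audio: np.ndarray, sample_rate: int, corner_hz: float = 80.0) -> np.ndarray:
    """Attenuate content below corner_hz (HVAC rumble, mic handling noise, etc.)."""
    sos = butter(4, corner_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Usage on a synthetic one-second clip at 48 kHz: a voice-like tone plus rumble.
sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
clip = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
cleaned = highpass_dialogue(clip, sr)  # the 30 Hz component is strongly attenuated
```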

*The images used on this post are taken from Pexels.com


The Sound of An Oscar Nominee: A Star Is Born

Have you ever wondered what it takes to craft a compelling sound? What techniques and technologies sound professionals have used to hit the spotlight and be recognized by the industry? Now that the Oscars are around the corner, a lot of conversations start to arise, especially about the nominees.

In this installment, we're gonna go through the sound of A Star Is Born, as the movie has been nominated for best sound mixing. Steve Morrow, who offered some behind-the-scenes insights into recording Lady Gaga and Bradley Cooper, was responsible for this part of the audio post-production process alongside Tom Ozanich, Dean Zupancic, and Jason Ruder.

In a recent interview, sound mixer Steve Morrow said that both Gaga and Cooper wanted the film to have a particular style of sound: they wanted it to sound as if it were a live concert, which makes sense given Morrow's experience shooting at live concert venues like the Glastonbury Festival. The request, however, ended up posing a real challenge: “In Glastonbury, we all went in there believing we had almost eight minutes to shoot, but we later found out the festival was actually running late, so they only gave us like three minutes,” Morrow said.

The sound mixing crew asserted later on that the idea was to film three songs, but given those circumstances, they decided to play 30 seconds of each of those songs. As for the sound mixing process, Morrow also mentioned that the idea at the very beginning was to capture all sounds live, all the performances, all the singing, etc., which ultimately turned into a Lady Gaga mini show, as the music wasn't amplified in the recording room.

Such conditions led Morrow to assert that his role on A Star Is Born differed a bit from a more typical production. On a normal set, it is the production’s responsibility to record lines of dialogue while filming all environmental or sound effects that would be happening at the same time during the filming process. During A Star Is Born, Morrow and the rest of the sound mixing crew had to do all that process whilst also recording the band and the live singing, making sure they had captured all the tracks.

After that, the team would hand those tracks to the editorial and the post-production crew. Sound people would then take all that information, mix it down accordingly, and that’s practically what you hear in the film. Nothing else.

As for the tricky part of the film, filming the live concerts, Morrow took a rather uncanny approach to get those tracks. For the movie, the sound crew had to film twice at real festivals: Stagecoach and Glastonbury. The crew had to take advantage of the time between acts, and while Willie Nelson was awaiting his curtain call to come on stage, Morrow and the crew made the most of the eight minutes they initially had to get the tracks.

Image from http://www.astarisbornmovie.net/#/Gallery/

What they would do, according to the mixing crew, which was ultimately different from all the other recordings they carried out in controlled spaces, is approach the monitor guy with some equipment and take a feed from the monitor desk through the mic Bradley Cooper was supposed to use.

Most of the time, they would do a playback of the band through the wedge, the small speaker a performer stands in front of in live presentations. Morrow and the rest of the mixing crew would then put those playback tracks through so that Bradley Cooper could hear them, but the crowd couldn't, as they were standing far enough away from those speakers. So, in a nutshell, what they did to record the live concert scenes was to have Bradley Cooper singing live whilst hearing a playback of the instruments through the wedges.

An additional challenge was making sure not to amplify any of those tracks and performances, as Warner Bros. didn't want the music to be heard by the crowd, in order not to risk losing impact. Such demands forced the mixing crew to mute practically everything as much as they could, which was also different from the way film producers work in controlled locations.

Having a big crowd in front makes the process way more challenging: the whole crew, film, picture, sound, etc., only has a few minutes to shoot, which increases the chances of not getting a lean and clean sound. In controlled scenarios, sound crews normally record up to ten different tracks, whereas in front of a live audience, they would need not only to prevent tracks from being heard but also to record the live audience for the desired effect.

Dialogue Editing and ADR With Gwen Whittle

If you recall the movies Tron Legacy and Avatar, they both, aside from having received Oscar nominations, have one name in common: Gwen Whittle. Gwen is perhaps one of the top supervising sound editors working today, which is why a lot can be learned from her work.

Gwen also did the sound supervision for both Tomorrowland (starring George Clooney and Hugh Laurie) and Jurassic World (starring Chris Pratt), and although she's known for overseeing the whole sound editing process, she has mentioned in several interviews that she's highly fond of paying special attention to both dialogue editing and ADR sessions, topics we've covered in previous articles on the Enhanced Media blog.

Dialogue editing, as George Lucas noted back in 1999 just before Star Wars: Episode I hit theaters, is a crucial part of the whole sound editing landscape, and, apparently, even within this industry, nobody pays enough attention to it. In fact, dialogue editing is the most important part of the process.

So, what’s dialogue editing?

Dialogue editing, if it's done really well, is, according to Gwen Whittle, unnoticeable: it's completely invisible, it should not take you out of the movie, and you should pay no attention to it. Imagine going through all the sound from the set, take by take, to get a much closer look at the dialogue captured for a specific scene.

Of course, not all dialogue recorded on set sounds the same. Maybe the take was great, the acting was great, the light was great, but suddenly a truck pulled up or an airplane happened to fly over the crew. It's practically impossible to recreate that take, as there are many aspects involved: air changes, foreign sounds, etc., and no matter how much you try to remove all those background noises, sometimes you need to resort to the ADR stage. In an ADR session, it all comes down to trying to recreate the same conditions that apply to that particular scene.

Cutting dialogue often poses several challenges to sound editors, and it depends a lot on the picture department. A dialogue editor receives all the production audio from the picture department, everything that was originally shot on set, with each mic on its own track. It's the responsibility of the picture department to isolate each mic on its own track so dialogue editors can do their magic.

On set, the production sound mixer is usually recording anywhere from one microphone up to eight, sometimes more, but the idea is for each actor to have their own mic, plus at least one or two booms. All of this is passed on to the dialogue editing crew with each track isolated, matching the moving images just as the movie is supposed to be.

Once the dialogue editing crew has received the tracks, they listen to them and assess which parts can be used and which parts need to be recreated, organizing which tracks will make it to the next stage. Sometimes, since dialogue can be recorded using two different microphones, such as the boom and the talent's personal mic, sound editors can play with both tracks to make the most of them whilst spotting which parts require an additional ADR session.

If there's a noticeable sound, like a beep, behind someone's voice, a dialogue editor can really get rid of that if they need to; however, that's not always possible, and ADR sessions are quite common in the sound editing process. In films with a smaller budget, the dialogue process gets a bit trickier, since the tracks often aren't passed on isolated to the dialogue editing crew, so they need to tackle whatever hurdles are in their tracks. Low-budget films normally include more dialogue, as they don't have the resources to afford fancy sets or fancy visual and sound effects.

Do directors hate ADR?

Well, according to Gwen Whittle, not many directors are fond of ADR. David Fincher, for example, is. ADR is a tool. A powerful tool. And if you’re not afraid to use it, you can really elevate your film because it takes away the things that are distracting you from what’s going on.


Actors and actresses like Meryl Streep love ADR sessions because it is another chance to perform what they just did on set. They see ADR as the opportunity to go in there and try to put a different color on it, and it's another way to approach what the picture crew got in a couple of takes on set. Many things can be fixed, and several lines can even be altered. You can add a different twist to something. In fact, even by adding a breath, you can change the nature of a performance. It's the opportunity for both the talent and directors to hear what they really want to hear.

*The images used on this post are taken from Pexels.com

4 Services That Make Audio Post-Production Collaboration Seamless

Collaboration is not foreign when it comes to audio post-production. In fact, it is what gives studios constructive feedback, ideas, solutions and different perspectives to work on altogether, helping all parties involved produce better pieces of work.

Audio, sound, and video collaboration happens all the time. When it comes to audio and sound, for instance, it has never been more feasible to write a song with another individual on the other side of the world, or to hire a full orchestra or session musicians to record music for score and original soundtrack purposes.

In this post, we address some services and other software that make the whole collaboration workflow much easier, but more importantly, productive.

The Audio Hunt

The Audio Hunt is best known for being an online collaboration platform where hundreds of studio owners and audio professionals make their gear available for colleagues to run their tracks through. How does it work? Imagine you want to run your mix through a specific piece of equipment. You will be required to, first, open an account, find the piece of hardware you want to use, start a chat with the vendor, book the job depending on the fare (fares and fees vary depending on what type of hardware you want to use), and, finally, wait for the service to be completed so you can download the files.

Pro Tools Cloud Collaboration

Not long ago, Avid introduced Cloud Collaboration for Pro Tools in version 12.5. This allows Pro Tools users to share parts of projects, or the whole project if necessary, with other Pro Tools users around the globe without even having to close the application. It's a rather fancy system that integrates seamlessly across different Pro Tools versions.


Pro Tools Cloud Collaboration gets rid of the traditional audio post-production collaboration process, which involved exporting files out of the application and then sharing them on different cloud services for other collaborators and editors to receive. Now, versions 12.5 and above allow editors to collaborate with other Pro Tools users in a much quicker and simpler way.

Source Elements Source-Connect

In case you're wondering what Source-Connect is: it's what replaced ISDN. Conceived as an industry-standard replacement, Source-Connect comes with a solid set of features for remote audio and sound recording and monitoring, allowing audio and sound professionals to undertake tasks common in the audio post-production industry, such as overdubs, ADR, and voice-over, with participants located anywhere in the world, over a decent internet connection, integrated into their digital audio workstations.

Source-Connect works as an application and does not require a complex digital audio workstation setup. It allows audio and sound professionals to work directly in the DAW of their preference, which ultimately allows them to harness the full set of features the application comes with.

Besides, Source-Connect comes with built-in Pro Tools support and is also compatible with digital audio workstations that support VST plug-ins, including, but not limited to, Cubase, Nuendo, and Pyramix.

Audiomovers LISTENTO

Listento allows users to stream low-latency audio from a digital audio workstation (DAW) to a browser through the use of plug-ins. Imagine having a client who cannot physically visit your studio to listen and give you their insights on the final mix you've developed. By using Listento to play the mix directly from your workstation's master track to the client's browser, you eliminate that complication.

Listento still seems to be under development. One of the things the developers are working on is a built-in chat for communicating with your client, allowing you to move away from third-party messengers such as Skype or Google Hangouts to discuss the intricacies of your mix.

Listento includes several transmission formats, such as:

  • PCM 16Bit

  • PCM 32Bit

  • AAC 128Kb

  • AAC 192Kb

  • AAC 256Kb (MacOS only)

  • AAC 320Kb (MacOS only)

Additionally, Listento is a free plug-in; however, in order to use it, sound professionals and audio editors are required to subscribe to Audiomovers so they can stream audio directly from their digital audio workstations. Luckily, Audiomovers subscription tiers are quite affordable:

  • Weekly: $3.99

  • Monthly: $9.99

  • Yearly: $99.99

When sharing your files, sign in to your Audiomovers account to both send and receive the live stream. Send your client a link as if you were sharing a Google Sheets download link. And in case you're still wondering whether you should pay for one of Audiomovers' tiers of service, the software comes with a one-week free trial.

A final word on collaboration: the fourth industrial revolution has indeed come with many pieces of software and hardware that have made it possible for professionals and studios to collaborate. It is nonetheless just as important to nurture the collaborative spirit by being willing to work alongside other professionals in a specific workflow. This, of course, demands a more proactive and receptive attitude towards collaboration; otherwise, by not considering other perspectives, the chances of developing and learning something new are lower.

*The images used on this post are taken from Pexels.com

Sound For Documentary

Since the emergence of a sheer array of affordable camera recorders, the rising prevalence of mobile phones with decent video cameras, and the ubiquity of social media channels such as YouTube as one of today's major media diffusion channels, it has never been this easy to produce and subsequently share documentary videos. If we were to take a much closer look at the whole production process, though, it would be easy to assert that sound is the weakest part of many of these videos. Although it is relatively easy to shoot and record with a camera regardless of its quality, the art of placing a microphone, monitoring, and taking care of volume levels still remains an ambiguous puzzle compared to the other components of shooting a video documentary.

In today's post, we're going to go through a general outline of practical techniques and an end-to-end guide to the primary tools for recording, editing, and mixing sound for documentary audiovisual projects. Whether you are using a mobile phone, a regular video camera, a D-SLR, or a prosumer or professional camcorder for shooting your project, sound will always be an important part of the storytelling.

There are many ways in which tremendously good results can be achieved with consumer gear in many different circumstances; nonetheless, professional gear comes with extra possibilities. Here are some fundamental concepts directors and documentary producers need to bear in mind every time they take on one of these projects.

Sound as a conveyor of emotions; picture as a conveyor of information


Think of the scene in Psycho of a woman taking a shower in silence. Now add the famous dissonant violin notes, and you get a whole new experience. That leads us to consider the emotional impact of a project, in this case of one scene in particular. Sound conveys the emotional aspects of your documentary. It's practically the soul of the picture. Paying special attention to sound, both during shooting and afterward in the studio, can make a real difference. Whether you're planning a simple interview with plenty of dialogue or a richer-sounding production, the human voice is the differentiating factor between an amateur and a professional project.

Microphone placement and noise management are key

The main issue with the vast majority of amateur sound recordings is the excessive presence of ambient and environmental noise from all kinds of sources, and a low sound level relative to that ambient noise. As a result, we've all seen how difficult it is to understand the dialogue, which is ultimately detrimental to the intended emotional impact. This common situation is one of the consequences of poor microphone placement. Directors and producers need to learn to listen to the recording and experiment with different microphones and different placement options. It all boils down to getting the microphone as close as practical to the intended sound, and as far away as possible from the extra noise that interacts in a negative way with the whole recording.

Additionally, if the documentary takes place outdoors, the chances of picking up unwanted wind noise are high, which is why the use of a windjammer to control wind noise is always a good idea. Regardless of whether you're a professional or an amateur taking on a documentary audiovisual project, with a little bit of practice and research you can craft outstanding sound recordings, irrespective of whether you're recording with professional gear or your mobile phone.

Monitor your recording

In order to craft a compelling and professional recording, you need to properly set recording levels first: not too soft, so sound doesn't get lost in the overall noise; not too loud, so you avoid possible distortion. When recording, always monitor the sound you're getting with professional headphones in order to avoid surprises in the edit. With digital recording devices, it's impossible to record anything beyond full scale, so abstain from crossing this limit; otherwise the recording will sound hideous, unless your camera or recording device has an automatic gain control to adjust recording levels.
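
If your recorder exports files, you can also sanity-check levels after the fact. Below is a small sketch, assuming floating-point samples normalized to [-1, 1]; the -6 dB headroom target is a common rule of thumb, not a standard.

```python
import numpy as np

def peak_dbfs(audio: np.ndarray) -> float:
    """Peak level in dB relative to digital full scale (0 dBFS)."""
    peak = np.max(np.abs(audio))
    return float("-inf") if peak == 0 else 20 * np.log10(peak)

def check_levels(audio: np.ndarray, headroom_db: float = -6.0) -> str:
    level = peak_dbfs(audio)
    if level >= 0.0:
        return f"peak {level:.1f} dBFS: clipped, lower the gain and re-record"
    if level > headroom_db:
        return f"peak {level:.1f} dBFS: hot, consider leaving more headroom"
    return f"peak {level:.1f} dBFS: fine"

print(check_levels(np.array([0.05, -0.4, 0.7])))  # -> "peak -3.1 dBFS: hot, ..."
```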

The shotgun myth

There seems to be a myth regarding microphones. Apparently, some people firmly believe that the shotgun microphone reaches farther than other devices. This is not true. A shotgun microphone simply does not work like a telephoto lens. Sound, unlike light, travels in all directions. Of course, shotgun microphones work; they have their place, and they really come in handy in somewhat noisy environments, especially when you cannot be as close to the individual doing the talking as you'd like in an ideal scenario. That being said, shotgun microphones are far from performing magic. What they really do is respond to sound differently in terms of reduced level, null points, and coloration. Although they look impressive, plenty of sound professionals and directors choose different types of microphones for their documentary projects.

*The images used on this post are taken from Pexels.com

Mixing Audio For Beginners - Part 3

Here is the third installment of Mixing Audio For Beginners. If you've been following this illuminating compilation of the intricacies and basics of sound and audio post-production, we're gonna be addressing further topics, picking up where we left off in the last post. Otherwise, we suggest you start off right from the very beginning. So, without further ado, let's continue.

Ambiance

We mentioned last time that when editing dialogue in a studio through ADR, it is no less than pivotal to create the right environment for recording new lines. Every time a sound professional is tasked with re-recording lines and additional dialogue in a studio, they have to pay special attention to several aspects that, if overlooked, could ruin the pace of the scene. Each dialogue edit inevitably comes with several challenges, like gaps in the background environmental sound.

There's nothing more unpleasant than listening to audio or a soundtrack where the background ambiance doesn't match the action going on from one scene to the other. This phenomenon is highly common during ADR sessions, which is why, aside from helping the talent match the intensity each shot requires, sound professionals also need to edit the background sounds to fill any possible hole in order for the scene to feel homogeneous.

The work comes when the production sound crew captures room tone at a specific location and then, once production is finished, the audio post-production crew needs to replace dialogue and fill the holes with that room tone. Of course, there are also tools to recreate room tone based on noise samples taken from existing dialogue recordings; either way, it is one of the most common tasks under the umbrella of audio post-production.
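
For the curious, here is one way the "fill the holes with room tone" step can be sketched in code: loop a captured room-tone bed under the gap and fade the edges so the splice is inaudible. The 10 ms fade length is an assumption; real sessions tune it by ear.

```python
import numpy as np

def fill_gap_with_room_tone(room_tone: np.ndarray, gap_len: int, sample_rate: int) -> np.ndarray:
    """Tile room tone to cover gap_len samples, with short linear fades at the edges."""
    fade = int(0.010 * sample_rate)  # 10 ms, assumed shorter than the gap
    out = np.tile(room_tone, gap_len // len(room_tone) + 1)[:gap_len].copy()
    ramp = np.linspace(0.0, 1.0, fade)
    out[:fade] *= ramp          # fade in at the splice
    out[-fade:] *= ramp[::-1]   # fade out at the splice
    return out

# Usage: fill a two-second hole at 48 kHz from a short room-tone capture.
sr = 48000
tone = np.random.default_rng(1).normal(0, 0.01, sr // 2)  # stand-in for recorded room tone
patch = fill_gap_with_room_tone(tone, 2 * sr, sr)
```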

Sound Effects (SFX)


Whether coming across the perfect train collision sound in a library, creating dog footsteps on a Foley session, using synthesizers to craft a compelling spaceship pursuit, or just getting outside with the proper gear to record the sounds of nature, a sound effects session is the perfect opportunity for sound and audio professionals to get creative.

Sound effects libraries are a great source for small, and even low-budget, audiovisual projects; however, you should not rely on them in professional films. Some sounds are simply too recognizable, like the same dolphin sound you hear every single time a movie, ad, or TV show shows a dolphin. Major film and TV productions use dedicated teams to craft their own sound effects, which ultimately become as important as the music itself. Think about the lightsaber sounds in any Star Wars movie.

Additional sounds can then be created during a Foley session. Foley, as discussed in other articles, is the art of generating and crafting sounds in a special room full of, well, junk. This incredible assortment of materials allows foley artists to generate all kinds of sounds, such as slamming doors, footsteps on different types of surfaces, breaking glass, water splashes, etc. Moreover, foley artists recreate these sounds in real time, which is why it is normal to record several takes of the same sound in order to find the one that best fits the scene. They are shown the action on a large screen and then start using the materials they have at hand to provide the action with realistic sounds. Need the sound of an arm breaking? Twist some celery. Walking in the desert? Use your fists and a bowl of corn starch.

Music

Just like with sound effects libraries, when it comes to music, sound professionals have two choices based on their talks with production: they can either use a royalty-free music library, or they can, alongside music composers, create a score for the film entirely from scratch. Be that as it may, the director and producers are the ones who have the final say over what type of music they want in the project and, perhaps more importantly, where and when music is present throughout the moving images.

Sometimes video editors resort to creating music edits to make a scene more compelling. Other times, it's up to sound professionals to make sure the music truly fits the beat and goes in accordance with what is happening. The trick is to make the accents coincide with the pace of the on-screen moving images as the director instructed, and to make sure the music starts and ends where and when it's supposed to.

Mixing

Assembling all the elements mentioned in the first two parts of this mini guide and in this article into a DAW timeline, and balancing each track and each group of sounds into a homogeneous soundtrack, is perhaps where this fine art reaches its pinnacle. Depending on the size of the studio, it is possible to have more than one workstation and different teams working together simultaneously to balance the sheer array of sounds they've got to put in place.

*The images used on this post are taken from Pexels.com

Mixing Audio For Beginners - Part 2

In the previous article, we mentioned the importance of establishing an intelligent workflow in your audio production process. As defined by the dictionary, the word workflow means “the sequence of processes through which a piece of work passes from its initial phase to total completion.” That definition maps neatly onto the phases of the audio post-production workflow, so let's see how they play out in different types of productions.

Pre-Production

A pre-production meeting is the meeting that gets you together with the production officials, whether that's the production company, the director, or the advertising agency, before production starts. If you happen to be invited to this meeting, you can, of course, express your opinions to the production team, which might save them hours of effort. If they seem open to receiving additional creative input, you could help develop the soundtrack at the concept phase. It also means your insights on the project can have a certain impact on the audio budget, which is always a positive thing. Remember: an hour of proper pre-production will spare you ten hours of possible setbacks.

Production

Makeup artists work their magic, services are consumed, lights are turned on, actors deliver their best performances, video is shot, audio is recorded, computers are used to animate action sequences, etc., and pretty much the whole budget is spent during this phase.

Video Editing

Once the visuals have been recorded and created, the director works with the video editor in charge to pick the best footage and assemble the moving images in a way that tells a compelling story. Once the editing has been done, the audio editor or sound engineer will receive a finished version of the audiovisual project that, in theory, will not suffer further changes —that’s known as “picture lock.” This final version of the recorded footage can only be achieved once the deadlines have been met and the budget for those processes spent.

Creating The Audio Session - Importing Data

The video editor is responsible for passing on to audio professionals an AAF or OMF export compiling all the audio edits and additional media so they can re-create, or create from scratch, their own audio edits. Once sound editors and audio professionals import the files, they will have a much clearer idea of what they've got to do.

At this point, audio editors also import the moving images and the edited video, making sure they are in sync with the audio from the aforementioned exports (AAF and OMF).

Spotting

During this phase, the director or the producer sits down with the audio professionals to tell them exactly what they want and, more importantly, where they want it. The entire film or video project is played so the audio professionals can take notes regarding the dialogue, the sound effects, the score, the music, etc.

Dialogue

Dialogue is perhaps the most important part of the entire soundtrack. Experienced audio editors will always separate dialogue edits into different tracks, one per actor. When audio is recorded on location, the person responsible for recording often captures two different tracks for each actor: a clip-on mic and the boom mic. Once in the studio, the audio professional assesses both tracks and chooses the one that sounds best and is more consistent throughout the entire length of the moving images.

When they come across noise on the dialogue tracks, a common technique sound editors employ is using noise reduction tools or similar software to repair that audio without compromising the final mix.
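
As a bare-bones illustration of what such tools do under the hood, here is a simple spectral gate: estimate the noise floor from a dialogue-free stretch, then attenuate the time-frequency bins that fall below a multiple of that floor. This is a teaching sketch, not a substitute for a production denoiser, and the threshold value is arbitrary.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, noise_sample, sample_rate, threshold=2.0):
    """Zero out STFT bins whose magnitude sits below `threshold` x the noise floor."""
    _, _, spec = stft(audio, fs=sample_rate, nperseg=1024)
    _, _, noise_spec = stft(noise_sample, fs=sample_rate, nperseg=1024)
    noise_floor = np.mean(np.abs(noise_spec), axis=1, keepdims=True)  # per-frequency floor
    mask = np.abs(spec) > threshold * noise_floor
    _, cleaned = istft(spec * mask, fs=sample_rate)
    return cleaned
```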

ADR

In case you don't know what ADR means, we've covered it before in previous posts.


If, after having used the techniques mentioned in the last paragraph, the audio cannot be repaired through the use of noise reduction software, audio professionals always resort to performing ADR.

ADR means having the actors and the talent go to the studio to carry out several tasks, such as:

  • Replace missing audio lines

  • Replace dialogue that couldn’t be saved

  • Provide additional dialogue in case of further plot edits.

Actors have their scenes projected so they can recreate their lines. Normally, a cue is used to make sure they record in sync with what's going on in the film. They also do four or five takes in a row, since the scenes are projected in a loop over and over (hence the word looping). The sound editor or audio professional then picks the best line and the best performance and replaces the original noisy or damaged take with the newer version. In order to match the intended ambiance, sound editors may use the same mic as the original take, but they will likely have to apply further equalization, compression, and reverb to make the new performance sit in with the original timbre.
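
Of those three tools, compression is the easiest to show in miniature. Here is a deliberately simplified hard-knee compressor operating sample by sample; a real compressor adds attack and release smoothing, and the threshold and ratio below are illustrative only.

```python
import numpy as np

def compress(audio: np.ndarray, threshold_db: float = -18.0, ratio: float = 3.0) -> np.ndarray:
    """Reduce, by `ratio`, how far each sample's level rises above the threshold."""
    eps = 1e-10                                     # avoid log10(0)
    level_db = 20 * np.log10(np.abs(audio) + eps)   # instantaneous level in dBFS
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)        # pull the overshoot back down
    return audio * 10 ** (gain_db / 20.0)

# Usage: loud peaks are tamed, quiet samples pass untouched.
adr_take = np.array([0.02, 0.5, -0.9, 0.1])
print(compress(adr_take))
```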

*The images used on this post are taken from Pexels.com

Mixing Audio For Beginners - Part 1

Have you ever wondered why your favorite films or TV shows sound so good? Or why TV ads and commercials are sometimes so much louder than films and TV series? Or why that internet video you like sounds so bad?

In this mini-guide, we want to go through the intricacies commonly associated with the creation of sound, audio, and soundtracks for both video and film. Crafting and mixing audio for film and video is a rather deep subject; covering all the basics would take hundreds of pages, due to the constantly changing nature of this business and the technology involved.

This first part covers basic aspects, a bit of background, some terms and terminology, and hopefully, will serve as a clear guide to understanding what mixing audio for video and moving images is about.

The World Of Audio For Video

Way back in the past century, recording engineers would often face a daunting dichotomy: they had to make a career choice between producing music or producing sound and audio for visuals and moving images, such as TV series, ads, films, etc. Since the aforementioned career paths were considered specialized assignments, they demanded specialized tools to get everything done.

The inclusion of computerized digital audio systems in the late 80s made it possible, and definitely much easier, to use the exact same recording tools to produce and edit both music and soundtracks. If you've had any experience with audio post-production, tools and systems such as AVID, NED PostPro, and early Pro Tools might ring a bell. That era marked the beginning of a new dynamism where terms such as convergence (where the worlds of audio and video production intertwine) started to become popular. As a result, the vast majority of engineers had to learn to do audio post-production sessions during the day and music sessions at night.

Be that as it may, the process has undoubtedly evolved throughout the years, and the modern and contemporary process of audio post-production has changed more than ever before.

Types Of Audio Post Production

In order for us to discuss the types of audio post-production, we need to start by making a necessary distinction between audio crafted for moving images and other types of soundtracks, like radio commercials, audiobooks, or the well-known podcast. Though a lot falls under the umbrella of audio post-production, by audio post-production we commonly mean audio specially crafted for a moving image or a visual component. Here are the most traditional forms:

Television

TV shows can be practically any length, but the vast majority of US TV programs are intended to last between 30 and 60 minutes. Many are produced by highly qualified and experienced TV studios in Los Angeles. As for reality shows, although these can be shot and recorded anywhere, they also require a good and experienced audio post-production team to mix both audio and video in a professional fashion.

Film


Films vary in their nature. Short films can span just a few minutes, whereas longer films can last several hours. This category includes today's productions for Netflix, HBO, and Amazon, as well as the famous traditional major studios. When talking about film, it is also important to mention the financial aspect: independent filmmakers, known for producing small to no-budget projects, still require an important dose of audio post-production. In fact, many sound engineers are fond of taking on these projects, as they serve as the perfect opportunity to get some training prior to taking the big leap.

Commercials

Commercials include several types of visual projects. The term “commercials” often refers to TV commercials, infomercials, ads, promos, political ads, etc. These are known for their rather short format: today, it is possible to come across commercials ranging from 5 to 60 seconds in length. There are, of course, much longer commercials; however, buying airtime for something longer than sixty seconds is rather expensive.

Video games

Video games are extremely fun, and crafting audio for video games is even more fun. The vast majority of top-quality games, also known as AAA games, have a dedicated audio post-production team behind them, responsible for creating and capturing the sounds that will be included in the game. This, of course, is absolutely unique to every single game and certainly demands a daunting amount of work, requiring hundreds of audio files, as the game will demand soundtracks in different languages, which ultimately increases the number of files the audio team will need to manage.
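
That file-management burden is usually tamed with naming conventions and manifests. As a sketch, assuming a purely hypothetical `event_locale.wav` naming scheme (the convention, locale set, and paths are invented for illustration), a script can report which localized variants are missing:

```python
from collections import defaultdict
from pathlib import Path

LOCALES = {"en", "fr", "de", "ja"}  # whatever the game actually ships

def audit_localized_audio(root: str) -> dict:
    """Report which locales are missing for each audio event under root."""
    found = defaultdict(set)
    for wav in Path(root).glob("*.wav"):
        event, _, locale = wav.stem.rpartition("_")  # "explosion_big_fr" -> ("explosion_big", "fr")
        if event and locale in LOCALES:
            found[event].add(locale)
    return {event: LOCALES - seen for event, seen in found.items() if LOCALES - seen}

# Usage: audit_localized_audio("assets/audio") might report {"explosion_big": {"ja"}}.
```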

Audio Workflow

The process through which a piece of audio work passes from initiation to completion is known as a workflow. And although we will get into more detail in a subsequent post, a traditional audio workflow is composed of the following stages: pre-production, production, video editing, data import, spotting, dialogue, ADR, ambiance, sound effects, music, mixing, delivery, and summary.

*The images used on this post are taken from Pexels.com

ADR: Tips And Tricks

Automated Dialogue Replacement, or ADR, is an essential part of every audiovisual project, and knowing its intricacies is key to becoming a proper filmmaker. ADR, as many people like to call it, is basically a method of adding dialogue to an already filmed scene. By superimposing dialogue that has been recorded in a studio, or at least in an acoustically treated room or space, filmmakers can get past the challenges commonly associated with location dialogue. The problem with location dialogue is that it often turns out a bit hectic: environmental noise can be too loud and difficult to mute, the equipment may not work the way it is supposed to, or the crew may not be able to get the right background noise.

When it comes to films, almost every contemporary Hollywood film has 50% to 70% ADR dialogue. ADR is no less than pivotal for the success of any film, and if executed the right way, it can definitely salvage an entire film.

The Basics Of ADR

Before we get into more detail, there are several elements associated with ADR that filmmakers must bear in mind so they can plan and set up their recordings properly. In looping, playback of a repeating section of the project is given to the recording crew while new voices and dialogue are recorded. There are two different types of looping: audio looping and visual looping. With the latter, an actor listens to the location take several times to understand the nature of that particular scene and get a feel for the situation prior to recording the new dialogue. Once they're ready to record, they will not hear the location take but will watch the scene to match lip synchronization. They always hear themselves over the monitors so they can hear the lines they're delivering in real time.

Audio looping, on the other hand, will traditionally produce the most desirable outcome; however, it is important to mention that it is normally more time-demanding. The session is carried out the same way as visual looping, but cutting the video monitor and hearing the original dialogue track. The vast majority of ADR engineers are fond of using both techniques simultaneously. They always break up the looped lines into much smaller parts so they don't lose consistency and synchronization. As for synchronization, for a better sync when starting a line, ADR engineers record three beeps exactly one second apart, so actors know when the first voice starts. This is known as an audio cue; like a metronome, it lets actors start at the right moment, in the proper rhythm of the line being recorded.
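
That three-beep cue is easy to generate yourself. Here is a sketch, assuming 48 kHz mono output and a 1 kHz beep tone, which are common choices rather than requirements; by convention, the line starts where a fourth beep would have fallen.

```python
import numpy as np

def adr_cue(sample_rate: int = 48000, beep_hz: float = 1000.0,
            beep_len: float = 0.1, spacing: float = 1.0, count: int = 3) -> np.ndarray:
    """Return `count` short beeps spaced exactly `spacing` seconds apart."""
    cue = np.zeros(int(sample_rate * spacing * count))
    t = np.arange(int(sample_rate * beep_len)) / sample_rate
    beep = 0.5 * np.sin(2 * np.pi * beep_hz * t)
    for i in range(count):
        start = int(i * spacing * sample_rate)
        cue[start:start + len(beep)] = beep
    return cue

# Three beeps at 0, 1, and 2 seconds; the actor's line lands at second 3,
# where the fourth beep would have fallen.
cue_track = adr_cue()
```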

An ADR Recording Space

In sound and audio post-production, filmmakers have essentially more control over audio than they do when recording on location. The basic goal of every audiovisual project is to provide the audience with rich experiences, and audio is no exception. When it comes to ADR, the main idea is to get a really clear and clean recording, captured in an acoustically treated space, that engineers can then fit to the scene with proper equalization.

ADR Equipment And Gear


When recording ADR in an acoustically treated space such as an audio post-production studio, sound engineers and ADR professionals often try to use the same microphone the filmmaking unit used on location to capture the existing, original dialogue. The goal of ADR is to compellingly and adequately match the lines recorded on location in both tonal characteristics and frequency response. Since all microphones have different polar patterns and different frequency responses that yield different tonal nuances, it's important not only to use the exact same microphone, or at least a similar one, but also to place it properly so acoustic features don't get lost.

There are several digital audio workstations, such as Pro Tools, Ableton Live, Logic, etc., that can help ADR engineers loop their recordings according to their needs. ADR demands, aside from microphones, other audio production gear. A basic ADR toolkit looks like this:

  • Microphones

  • Digital Audio Workstation

  • Headphones

  • Preamp or Interface

  • Video Monitor

Microphone Placement And Delivery

Mic placement depends heavily on what type of microphones are being used. It is key to maintain a certain distance between the mic and the actor or actress to provide the recording with realism. Also, some ADR engineers are fond of using filters when deemed necessary. How an actor or an actress delivers the line is also pivotal for the success of the recording, as it affects the delivery itself and the tone of the ADR recording. Some actors tend to replicate the same movements being projected in the moving images, as it aids them in creating the exact same mood the filmmaker wants for that specific scene.

*The images used on this post are taken from Pexels.com

6 Tricks For Foley Sound Effects

Foley artists are pivotal for any audiovisual project once it has been shot and edited, as they're responsible for taking care of any possible missing sound. As described in a previous article, a crucial step in the audio post-production process is what foley artists do: perform and create sound effects to match the moving images being projected on the screen.

Common sound effects we always hear in movies, for example footsteps, chewing, drinking, clothing movement, doors being opened, keys jingling, etc., are created through a set of different recording techniques and materials. Foley is more than simply editing sounds manually. In fact, it is not only more than that, but also more time-efficient, and it provides audiovisual projects with a much richer character and realism than other sounds in the film. Whenever a foley artist can't create a sound in the studio, sound designers and sound editors will always be up for the task.

That being said, have you ever wondered what's the best way to mimic or recreate the sound of a fight? The sound of fists going back and forth and hitting another body? Or how you can recreate the sound of footsteps on a snowy road in a recording studio? What's the best way to mimic a sword fight? Here are some tips for coming up with foley sound effects:

HOUSEHOLD SOUNDS

Wooden Creaks And Floors

People stepping on creaking wood and squeaking floors appear in practically every film you've seen. Footsteps on old floors or people walking over an old house porch are perhaps some of the most used scenes in films. Foley artists have at their disposal a wide array of floors and objects to recreate these sounds. The advantage of using these accessories is that the sound, in this case the creak or the squeak, can be controlled to some extent. Once foley artists have developed a proper technique, performing these creaks saves the picture a lot of time, as sound editors won't need to edit all those sounds in Pro Tools.

Fire

Fire is another of those sounds that appears in the vast majority of films. Foley artists often resort to accessories such as cellophane, potato chip bags, and even steel wool. The most common technique for recreating fire sounds is to scrunch up the accessory and then release it; the effect will be, of course, rather subtle, but when recorded with the mic up close, a convincing low-level fire sound can be achieved.

Cash


Money and stacks of cash have their own sounds as well. Traditionally, whenever a foley artist has to develop the sound of cash, they resort to an old deck of poker cards or book pages. The key to successfully achieving this sound is to use paper sources with flexible, softer textures. In fact, the vast majority of the time, foley artists add actual bills in the middle of the paper roll, or on the top, or on the bottom, so their fingers actually brush the surface, creating the sound of cash.

ANIMALS

Horses

Galloping horses are one of those sounds whose technique has remained practically untouched. Foley artists normally use coconuts to recreate horse hooves, and it's probably the most well-known foley accessory thanks to Monty Python and the Holy Grail. Several foley artists suggest stuffing the half coconut with materials such as fabric in order to get a more realistic sound. Then, hit compact dirt, or whatever surface the horse is running on, with the stuffed coconuts.

Bird Wings

Just like with horses, in order to achieve the sound of birds flapping their wings or taking off, foley artists normally resort to traditional, really orthodox accessories such as a vintage feather duster or gloves. It's also important to experiment with different materials, and perhaps heavier textiles, to create a much thicker sound for larger species. An old feather duster can create a terrific effect if the foley artist finds a nice-sounding one and hits it against all kinds of surfaces and objects to create different sounds.

HUMANS

Inhaling A Cigarette


Ever wondered how films record the sound associated with a cigarette inhale? Foley artists often use saran wrap and other light materials to get this sound. By using saran wrap, you can get a sound similar to the fire sound mentioned above, only more subtle. It is produced the same way as the fire sound: compress and then release, but make sure to do it in a controlled way so you don't overdo it. Make sure to have the mic close enough to capture the desired level of subtlety; otherwise, you may end up with a totally different sound.

*The images used on this post are taken from Pexels.com

An Introduction To Decibels

What You Always Wanted To Know About Decibels

Many times in previous articles we've mentioned the word “decibel”. Of course, the world of sound and audio basically revolves around decibels. But what does the concept of a decibel really entail? Here is our view on decibels and how internalizing the concept can be useful if you work as a sound designer, a sound mixer, or anywhere within the audiovisual industry. So, first things first: when it comes to defining decibels, there's no better way to put it than this: decibels are odd units, and there are at least three main reasons for that.

Decibels Are A Logarithmic Unit

The first thing to mention about decibels is that a decibel is a logarithmic unit. Our minds are not traditionally fond of logarithmic units, mostly because we've become accustomed to dealing with everyday units such as distances or weights. Nonetheless, logarithmic units are highly useful, especially when we want to represent a huge range of different figures or values.

If we take a value and step it up 3, 4 or even 5 times on a logarithmic scale, the resulting figure grows incredibly fast compared to the traditional linear scale. Why? Because while equal steps on a linear scale add a fixed amount, equal steps on a logarithmic scale multiply by a fixed factor; repeated steps amount to exponentiation. Five factor-of-ten steps starting from 1 take you all the way to 100,000 (10^5). That is really convenient whenever we want to get the full picture of a set of data ranging from dozens to millions.
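
As a quick illustration, here is a minimal Python sketch (our own example; the values are arbitrary, chosen only to span several orders of magnitude) of how log₁₀ compresses unwieldy figures into small, comparable numbers:

```python
import math

# Arbitrary values spanning several orders of magnitude
values = [20, 400, 9_000, 150_000, 2_000_000]

for v in values:
    # log10 turns every factor-of-10 jump into a step of exactly 1
    print(f"{v:>9,} -> log10 = {math.log10(v):.2f}")
```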

Some other units work fine on the regular linear scale because we normally move within a rather small range of figures. That's why it's easy for us to measure the distance between cities; but what if we wanted to measure distances across the galaxy? (Assuming, of course, that we're such an advanced civilization that we've found life on other planets.) On a linear scale, the distance between Los Angeles and Orion would read as 1,200,000,000,000,000 km, which is undeniably a tough figure to look at; on a logarithmic scale, that same distance is simply about 15.1 (the log₁₀ of the distance in kilometers).

The logarithmic scale offers a solution to this problem, since it provides an easy-to-read figure while covering several orders of magnitude. Like the distances above, many natural phenomena span several orders of magnitude and can be expressed on a logarithmic scale: think of earthquakes, pH and, of course, sound and loudness. By using a logarithmic scale to measure and express such events, we get more tractable models of nature.

Decibels Are A Comparative Unit

Having established that the decibel is a logarithmic unit, we now have a way to scale and measure wildly different events, from a simple whisper to a rocket take-off. Nevertheless, it's not that simple. Every time we say something is 70 dB, we are not actually making a direct measurement; we are comparing two different values.

Decibels express the ratio between a specific measured value and a reference value. Simply put: the decibel is a comparative unit. Stating that something is 30 dB is as incomplete as saying that something is 30%; we need to specify the reference value we're using. In other words, 30 dB with respect to what? And what kind of reference value can we use? That brings us to the third and last dimension.
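
To make the comparison concrete, here is a minimal Python sketch (the measured pressures are made-up examples) computing sound pressure level against the standard reference of 20 micropascals, roughly the threshold of human hearing:

```python
import math

P_REF = 20e-6  # standard reference pressure in pascals (threshold of hearing)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level: 20 * log10(measured / reference)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(0.02))  # 60.0 dB SPL: normal-conversation territory
print(spl_db(20.0))  # 120.0 dB SPL: near the threshold of pain
```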

Decibels Are A Versatile Unit

Since the vast majority of people associate decibels with sound, few realize that the ratio can involve the values of almost any physical property. These properties may be associated with audio, like pressure or voltage, or may have little or nothing to do with audio, like reflectivity. Decibels are found across all industries, not only audio; take, for example, video, optics or electronics. So, after laying out all this information, what is a decibel? A decibel is a logarithmically expressed ratio between a pair of physical values.
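
One practical wrinkle follows from this versatility, sketched below (the ratios are arbitrary examples): power-like quantities such as watts use 10·log₁₀, while amplitude-like quantities such as volts or pascals use 20·log₁₀, because power is proportional to the square of amplitude:

```python
import math

def db_power(measured: float, reference: float) -> float:
    # Power-like quantities (e.g. watts): 10 * log10(ratio)
    return 10 * math.log10(measured / reference)

def db_amplitude(measured: float, reference: float) -> float:
    # Amplitude-like quantities (e.g. volts, pascals): 20 * log10(ratio),
    # because power is proportional to the square of amplitude
    return 20 * math.log10(measured / reference)

print(db_power(2.0, 1.0))      # ~3 dB: doubling the power
print(db_amplitude(2.0, 1.0))  # ~6 dB: doubling the voltage
```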

Screaming In Outer Space

No matter how much Star Wars tries to convince us that sound energy can travel through outer space, reality dictates otherwise. Sound energy requires a physical medium to travel through. When sound waves disturb such a medium, there are actual measurable pressure alterations as the atoms move back and forth; the louder the sound, the more intense the alteration.

In Summary

A decibel is based on the logarithmic scale, which works very well for displaying a large range of values. It is also a comparative unit that always expresses the ratio between a measured value and a reference value. Additionally, decibels can be applied to physical properties other than sound pressure, and their reference values keep the resulting numbers meaningful.

*The images used on this post are taken from Pexels.com

The Intricacies Of Mixing Sound For 360º Video

One of today's most popular video formats is 360º video. The format, used by a plethora of influencers on YouTube (the platform where it gained its popularity), is seldom used outside social media channels, which is why there isn't much information on how to edit sound for what is also called spatialized video. If you have a project of this nature in mind, this article covers the intricacies of mixing audio and sound for this kind of video format.

The Visuals Will Determine Your Approach

When it comes to 360º video, we're basically talking about videos that represent the projected images as a single flat still. Thus, it is normal for viewers to perceive ceilings and floors as curved. The rounded visuals reflect the fact that a 360º video is the geometrical projection of a cylinder, which causes the seams to curve; these get flattened again when the footage runs through 360º video editing software. So, under these circumstances, how do you even start outlining a plan to properly add audio to such a complicated format? To begin with, just like any other video format, a 360º video can be split into different quadrants.

Think of quadrants as small parts of the whole sequence. While it may sometimes look as though there are duplicated quadrants, that is just the visual representation of one quadrant split into two different but equally long parts. If you were to print the whole sequence as a linear strip, you could fold the print into a cylinder and see how each quadrant connects with the next, as it's supposed to. Having said that, approach each quadrant as a mini video: if you separate the quadrants and add audio quadrant by quadrant, spatialization software can take it from that point on, as the sketch below illustrates.
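
As a rough illustration of that idea, here is a hedged Python sketch (not any particular editor's feature; the four-quadrant numbering and the choice of first-order ambisonic, B-format, encoding are our own assumptions) that maps a quadrant to a horizontal angle and encodes a mono stem at that angle:

```python
import math

def quadrant_azimuth(quadrant: int, num_quadrants: int = 4) -> float:
    """Center angle, in radians, of a quadrant around the 360º cylinder."""
    return 2 * math.pi * (quadrant + 0.5) / num_quadrants

def encode_first_order(sample: float, azimuth: float):
    """Encode one mono sample into horizontal first-order ambisonics."""
    w = sample / math.sqrt(2)       # omnidirectional component
    x = sample * math.cos(azimuth)  # front-back component
    y = sample * math.sin(azimuth)  # left-right component
    return w, x, y

# A mono stem pinned to the center of quadrant 2 (of 4):
print(encode_first_order(1.0, quadrant_azimuth(2)))
```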

Organize Accordingly

Now that you've split the video into quadrants, you can start thinking about the specific audio for each section. For pinpointed audio you don't need anything beyond a mono stem, simply because you just want to localize the sound. Some sound designers start by adjusting their mix template from the traditional 5.1 routing down to plain mono for both sound effects and dialogue. Music and score are an entirely different world; let's leave them for later. Unlike typical dialogue recording, where a traditional edit would have one track per main character, spatialized video is organized by quadrant, which runs completely against a sound editor's normal workflow.

The same approach goes for sound effects, although some effects cross quadrants. In that case, the best choice is to crossfade uniformly across the quadrants involved, matching the timing of the action on screen.

The Music

When creating and editing sound for 360º videos, you often come across complications, and music is no exception. Music presents two distinct challenges, but the most important thing is to keep the spatialization in mind during both music creation and the post mix. If musical sounds cross different quadrants, especially those created by people appearing on screen, like someone playing an instrument, it's important to decide which sound you want to pinpoint and place that instrument on its own mono stem.

The Mix

Mixing is pretty much like any other mixing you're probably familiar with. Once you have split the video into quadrants and worked on its unfolded format, the mix should aim to play back as a balanced piece. Since we are talking about a 360º format, some sounds will deliberately draw the viewer's attention to specific quadrants; when leveling, the sounds that should be highlighted ought to sit a bit louder in the mix.

The 360º format will certainly become more popular across other projects and platforms. Spatialization can differ from project to project, and the amount of audio crossing quadrants and overlapping other sections will differ too. As for music, its treatment may entirely alter the way sound editors approach these projects. The most advisable thing is to plan ahead and study the project so you don't stumble in the early stages. Bear in mind that mixing for spatialized audio, or 360º video, requires far more tracks than a traditional project; if the video requires split dialogue, multiple music tracks and multiple sound effects, the session will likely grow to massive proportions, which is why, if you're into this format, you need a system powerful enough for the total track count.

*The images used on this post are taken from Pexels.com

Is Music Important For Films And Ads?

Something several of the most renowned advertisements of this decade have in common is that they all involve music, and not simply in the rather worn-out form of a jingle. Think of John Lewis, for example, whose traditional Christmas adverts are as famous for their music as for their storytelling. Vodafone, likewise, set the Dandy Warhols' song Bohemian Like You up for success, helping it enter the UK's top five charts.

Since the era of advertising began, one thing has been clear: music and TV go hand in hand. But why do musical elements fit so well in ads and other audiovisual projects? Let's find out.

It’s All About The Emotional Impact

There are plenty of soundtrack songs that are simply stuck in our minds. They remind us of a certain time, person or place in our lives. As discussed in other articles, music and musical elements are pivotal for any audiovisual project simply because we process music with the same parts of the brain that are responsible for triggering emotion and memory.

Because humans can emotionally associate a piece of music with something positive or negative (depending on the context and nature of the sounds), the associated memory tends to match that emotion in strength. This applies to moments in everyday life, but what interests us here is how the phenomenon plays out with songs in film, music on the radio, and ads. As for the type of music that triggers this particular area of the brain, it's not just any music. In one study, a group of Australians reacted to a series of audio clips, and their reactions suggested that different types of music can produce strong, but very different, emotional responses.

Different melodies, key changes, chords and so on cause different responses. A string ensemble playing sharp, long notes in a major key evoked feelings related to happiness in almost 90% of the people assessed. A dramatic shift from major to minor tonality elicited the opposite feelings: sadness and melancholy. An acoustic guitar was associated with calm and sophistication by almost 83% of the respondents.

The examples above show how important it is for filmmakers and advertisers to have a deep understanding of the emotion they want to convey and, most importantly, the emotion they want to evoke in the audience, as well as what type of music is most suitable for that purpose.

And It’s Also About Telling The Story

Although music and musical elements on their own are an unquestionably powerful tool, they acquire a far more authentic effect when they accompany a story with a solid narrative arc. One study, after analyzing more than 100 ads to identify which were most correlated with long-term memory, found that music in TV ads becomes far more memorable when it drives the action of the moving images, for instance, when the lyrics match what is happening. The visuals are sometimes eye-catching enough on their own; but when melodic music comes in, it creates a sort of hypnotic effect on the audience, triggering the areas of the brain previously addressed.

In a wider sense, music and musical elements can definitely set the tone for a business's or a brand's personality, as well as address a specific type of audience or slice of a specific demographic. Adidas and Puma, for instance, often target younger audiences with the music in their activewear campaigns.

Creating From Scratch

Many filmmakers and advertisers choose existing tracks or songs from renowned artists; however, especially in filmmaking, many directors rely on composers to create an original soundtrack for a film. And it definitely works: Hans Zimmer, John Williams, Howard Shore, Ennio Morricone, James Horner and others are known for having created some of Hollywood's best film music. Who doesn't remember Jaws for its soundtrack? Or Star Wars? Or Indiana Jones? Or Interstellar? Or The Lord of the Rings? The list goes on and on but, most importantly, the fact that movies serve as the perfect opportunity to craft a compelling, emotionally powerful soundtrack confirms the answer to this article's opening question: is music really important in films and ads? Of course it is, and it always will be. Without music, part of the action goes missing; there's simply no way to engage an audience without an emotive soundtrack. Music helps tell the story; music is what people remember and what gets stuck in people's minds.

*The images used on this post are taken from Pexels.com

Are Sound Effects Really Necessary?

The Golden Age (the 1930s to the 1960s) taught us a lot about sound effects. Artists such as Orson Welles and Jack Benny left behind a great compilation of techniques and developments that today's sound effects artists still use in their own productions. In live performances, live and studio recording, and even workshops, sound effects artists have a diverse array of manual sound effects at hand, and many are highly fond of controlling and playing these alongside electronic sampler keyboards loaded with recordings. Of course, plenty of sound effects artists also use high-tech electronic samplers and other backing-track devices, depending on the nature of the project they're working on; even so, there seems to be a consensus about the unique character of manual sound effects.

In the past, manual effects were not the only option; some sounds, like cars, planes and nature, were easier to obtain from recordings. But when a sound is the product of manipulating an object, much of that sound lies in how you manipulate the object in question. That being said, a lot of experimentation is required to find the right technique for the desired sound, and much of that experimentation means testing microphones and hearing how the effect sounds through them. Always trust your ears if you're just getting started.

Are sound effects really necessary?

Sound effects allow filmmakers and directors of audiovisual projects to tell a compelling story. Think of a drama: well-crafted sound, effects included, makes every story better, whether it is driven by action, music or dialogue. Sound effects are important, yes, but their weight varies by format. In radio drama, dialogue contributes roughly 80% of the drama, with music and sound effects at about 10% each, so effects are less pivotal there; a misplaced effect may go entirely unnoticed. A sci-fi film, however, is another story: in film, sound effects add realism, and the slightest mistake can cause a disaster.

Is sound really as key as video quality when producing visual projects?

As mentioned above, poor sound and poor sound effects can ruin any production regardless of its visual quality. Understanding sound, especially high-quality sound in movies and even video games, is closely tied to understanding what makes a successful filmmaker or game developer. Think of all the times audio, or the lack thereof, has made you rate a project positively or negatively. Additionally, think about how sound shapes the reactions an audience has to the moving images they are presented with. So, are sound and sound effects important in films? They certainly are.

Sound in Film

Films are often produced using three different types of sound: human voices, music and, of course, sound effects. All three interact throughout the whole project and are crucial for giving audiences the realism they expect to, subconsciously, recognize. As mentioned in earlier articles, dialogue and sounds must sync perfectly with the actions being projected, avoiding delays and staying believable. If a specific sound doesn't match the moving image on the screen, the realism is gone and the action itself stops being believable.

There are several ways to achieve high-quality, realistic sound. One is to use original sound recordings rather than relying solely on sound libraries for the desired effect. Another way to provide an audiovisual project with realism is to incorporate so-called asynchronous sound effects, which are often used as background sound in films. Unlike sounds matched to the moving images, these are not directly related to the on-screen action, yet they help a film be as realistic as it can and should be.

As for music, if you have ever asked yourself how important it is in film and audiovisual projects, simply recall all the iconic film scores you've come across. Film music is perhaps one of the elements we remember most about a film, and one of the aspects that determines whether a film stands a real chance of success. Steven Spielberg's Jaws, with its iconic two-note motif, still brings back memories of the big shark approaching its prey. What about George Lucas's Star Wars? Years after the original trilogy was released, its score is still used in today's installments and is basically what builds momentum during promotional campaigns. The list goes on and on; it's practically impossible to overlook the importance of musical elements in today's filmmaking.

*The images used on this post are taken from Pexels.com

Mixing Tips For The Balanced Soundtrack

Since we specialize in crafting the best sound for any type of audiovisual project, it was about time we shared some tips on how to achieve a balanced soundtrack and elaborated a bit more on what we do at Enhanced Media. The topics discussed pertain, of course, to the vast universe of film sound, but we will try to avoid oversimplifications while keeping things digestible, understandable and, why not, enticing. Be that as it may, this post is meant to be illustrative enough for you to build on your own knowledge, basic or not; learning something new is always worth it.

Volume and Loudness

Many people firmly believe that volume and loudness mean the same thing; however, there is a crucial difference. Volume is the sound level that can be measured in decibels; loudness is the perceived amount of volume, and it depends on several factors such as frequency content and background noise. When it comes to crafting a balanced soundtrack, balanced also meaning homogeneous, both properties are no less than pivotal.

Simply put, when establishing how the two terms interact, the physical volume must not be exceeded. In the vast majority of video editing programs and most multimedia software, the master volume is displayed on a decibel scale. It's also important to mention that even though zero decibels can be reached, zero does not mean inaudible, as some may think; instead, it represents the maximum level before digital clipping.
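
To make that concrete, here is a minimal Python sketch (the sample values are invented) that measures a signal's peak in dBFS, the scale those master meters use; 0 dBFS is the ceiling, and everything below it reads as a negative number:

```python
import math

def peak_dbfs(samples):
    """Peak level relative to digital full scale (1.0 == 0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

quiet = [0.01, -0.008, 0.009]  # peaks 40 dB below full scale
hot = [0.99, -0.97, 0.95]      # peaks just under the 0 dBFS ceiling

print(peak_dbfs(quiet))  # -40.0
print(peak_dbfs(hot))    # about -0.09: any hotter and it clips
```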

When mixing film sound, not only the musical score but also the dialogue, off-screen voices and additional sounds, the master level may sit at zero decibels at most; beyond that, the sound goes into what we call digital overdrive, cutting off the waveform at its amplitude maxima, the highest and loudest peaks. This phenomenon is digital clipping which, if you work within the film or audiovisual industry, is certainly familiar to you. It ought not to be confused with the popular term tape saturation, which is far older, dating back to the days of magnetic tape. Back then, and even today, tape saturation was rather a natural compression that sounds fuller and even warmer than what is achievable today with software.

Traditionally, the film score and music arrive already mixed for immediate use and, chances are, mastered as well; that is, the soundtrack is delivered at maximum volume, which allows sound editors to simply integrate it into the film or audiovisual project. In that case, the only prerequisite for its integration is that no additional plugin for artificial loudness inflation is inserted on the master channel or on the subsequent individual channels.

Music and its Sources

When it comes to choosing the music for your film, as a producer or sound editor you may well use music from various sources, resorting to different tracks from different composers, studios and so on. All of these tracks should have clean levels, but they are often uncommonly, excessively loud, which brings us back to loudness and perceived volume.

Lamentably, the vast majority of music producers today have taken part in the loudness war, pumping up music according to the motto: the louder, the better. In the end, let's be honest, you just hear a shallow shriek. This has its roots in the human mind, as people tend to perceive louder music, in this case film score or simply music, as better music. The practice has left music so compressed and so pumped up that little distinguishes it from the other lines of sound; bear in mind, though, that reality dictates otherwise: the louder the track, the higher the chances it is utterly crushed.

Most of what we hear on the radio today has almost no dynamics; it's loud and, to some extent, just annoying. In filmmaking, the score and the music are supposed to support the project, not the other way around. And what if you were working on a purely technical video? Even then, the goal would be for the music to support the storytelling of the images being projected. If the project begins quietly, the music should follow suit; if there's a sudden increase in tension, the music can accentuate that change. Thus you merge volume and loudness into a coherent whole.
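
One simple way to quantify the dynamics we're talking about, sketched below (the sample data is invented), is the crest factor: the ratio of a signal's peak to its RMS level. Material squashed in the loudness war shows a low crest factor:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB; lower values mean squashed dynamics."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

dynamic = [0.9, 0.1, -0.05, 0.12, -0.8, 0.07]    # big peaks, quiet valleys
squashed = [0.9, 0.85, -0.88, 0.9, -0.86, 0.87]  # everything near the ceiling

print(crest_factor_db(dynamic))   # around 5 dB between peak and average level
print(crest_factor_db(squashed))  # close to 0 dB: brickwalled
```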

*The images used on this post are taken from Pexels.com

The Evolution Of Film Sound: Music

Understanding the current state of what is traditionally referred to as film sound demands some background. How did we get here? That's a question all film sound editors ask themselves at some point in their careers. Here we have compiled several important points of reference to understand how sound has evolved over the years.

A lot falls under the umbrella of film sound: music, the moving image, silence, Foley, dialogue and more are elements directly affected by sound as an abstract term. The industry has learned a lot; music and film scoring operate under totally different circumstances than they did 50 years ago. Of course, this evolution has been driven by the constant development of technology and shaped by shifting social acceptance.

If we were to fast-forward 100 years, it would be really challenging to say where we will be, and equally hard to say which composers will have made a name for themselves in the history of film scoring and be dubbed legends of the industry. Instruments, likewise, have evolved. Think of the Waterphone, for example, the acoustic instrument highly popular in older films for underscoring a moving image. Today there is a plethora of sound effects and sound effects libraries that can imitate that exact sound, and even improve on it through synthesis. History has taught us that music, as a crucial element of film sound, has evolved a lot, not only in terms of sound effects but also in its own interpretation. Composers such as Hans Zimmer, and even Walt Disney's composers, have completely changed the way we visualize and digest music.

Hans Zimmer, for example, is known for having taken part in many successful audiovisual projects, and most of his work has been a major game changer within the industry. Hans Zimmer is a true artist simply because of how his work blends with the images being projected. Writing music for a project and for moving images is something we see every day, but changing the entire mood of a film and its moving images is a complex thing only a true artist can achieve, especially while captivating today's audiences.

Walt Disney's composers, on the other hand, took something of a leap of faith when they decided to integrate music and musical sounds into their projects. They initially thought that including music would not be accepted by modern society and middle-aged audiences, but the experiment turned out to be positive, especially among the younger audiences of the time. By incorporating music into film, they helped alter the way films are made to this day. Had they not taken the risk of putting music in the movies, we probably wouldn't be talking about composers and original soundtracks.

And although many seem to agree that, at some point, someone else would have done exactly the same, chances are Walt Disney managed to integrate music into films because of the name he had made for himself in the industry by that point; when the time came, no one even questioned what he was planning to do. More importantly, without a big company behind the idea, who could have pulled it off?

The inclusion of music and musical sounds in films brought along new jobs and positions within the industry, Foley artistry, for example, and the recording of realistic sound effects for films and moving images. Once music and sound effects were given a key role within the conception and production of film, other ideas followed and, most importantly, other industries started developing at the pace set by filmmaking: software, musical instruments, technology, Foley techniques and so on all leaned toward filmmaking, not to mention that such development also allowed filmmakers to explore other genres such as sci-fi, 3D animation and fantasy.

It is fair to assert that the evolution of sound has been shaped by social changes as well, not only by the pace at which technology allowed the industry to develop. Music, as a key element of film sound, will certainly reach new shores: new instruments, new technologies, new composers, new ways of recording and merging music with moving images and, why not, maybe new genres. And although film is considered an essentially visual experience and a visual medium, the fundamental importance of sound as part of the storytelling process plays a pivotal role from beginning to end. Sound definitely changed the way filmmakers think about the nature of cinema.

*The images used on this post are taken from Pexels.com

What is Foley?

When we speak about Foley, we refer to a sound effects technique for live, or synchronous, effects. The technique is named after Jack Foley, a famous sound editor at the renowned Universal Studios. Basically, Foley artists strive to match all kinds of live sound effects with what is actually going on in the film; these sounds are performed manually using inanimate objects and people.

The technique is an excellent way of adding subtle sounds to a film, the sounds that production often overlooks amid all the intricacies of the shoot. The noise of the saddle every time a rider mounts a horse, or the rustling of a character's clothing, are just some examples of sounds that are not considered up front but are necessary to give the film, or any other audiovisual project, that distinctive touch of realism. With other methods, it would be rather difficult to achieve the same level of authenticity.

A good Foley artist often takes the place of the actor with whom production is trying to sync effects; otherwise, the sound ends up lacking the authenticity and realism needed to be convincing. Most successful Foley artists are audiles: they can look at any inanimate object and picture what kind of sound they can get from it. A Foley crew, in turn, includes several individuals: the walker, who makes the sounds, and one or two technicians responsible for recording and mixing them. The vast majority of Foley recordings take place in what most people would call storage areas: rooms full of laundry, pieces of metal, rocks, stones, a sandbox, metal trays, empty cans, forks, knives, broken guns, anything.

Foley artists start by watching the film to determine which sounds need to be added or replaced, which ones could use some enhancement, and which ones they can get rid of. At that point, the film's sound consists mostly of the dialogue and effects captured during production, recorded on a guide track, often called a production track. Later on, technicians focus on other sounds that play a complementary role in the film: crowd noises, the musical score, dialogue replacement (re-recorded through ADR), other sound effects and, last but not least, designed effects.

It is not rare for up to 80% of a film's sound to be altered, edited or customized in some way once the movie has been shot. Some sound effects are easy to craft and can be added from pre-recorded audio libraries; however, many sounds remain unique to each project. Think of footsteps, for instance. As Foley artists watch the film, they identify which sounds they need to create and imagine ways to pull them off. When it comes to noises, they must also consider factors such as the origin of the noise, who is making it and, most importantly, in what kind of environment. Some sounds are too difficult for just one take, so Foley artists carefully combine noises from different sources to represent the sound they are looking for, and Foley editors can ultimately alter these sounds digitally to match what the film is projecting.

In the Foley studio you will find all sorts of surfaces for simulating different kinds of footsteps, a splash tank, chambers for simulating different variations of echo, and the mixing booth where Foley engineers record and mix everything. The process is quite simple: Foley artists spend hours around the microphones, watching a huge screen as they synchronize the noises they produce with the action being projected.

But why is Foley so important anyway? Well, the vast majority of a film's soundtrack is added during post-production, for several reasons:

Some situations are not real during filming

Think of sword blows in Viking fights, or punches that never actually make contact in fist fights. These sounds are added during post-production and have to really embody what the action requires.

Some CGI simulations are not from this world

The vast majority of CGI elements are ultimately not from this world: big monsters, lightsabers, flying vehicles, etc.

Some sounds cannot be recorded on set

Imagine recording on set, in a single take, the sound of a bird's wings as it leaps into the air, or of a letter being pulled out of an envelope. These sounds, even though we never pay attention to them in real life, are needed to give the project realism.

*The images used on this post are taken from Pexels.com

How Sound Helps You Tell Your Stories

Every audiovisual project is built on sounds that serve as the framework within which filmmakers and directors create a specific atmosphere. When putting together a project, whatever its nature, sounds are carefully introduced so that they represent what is happening during a particular scene: what kind of actions the performers and characters are engaged in, and where that particular situation takes place.

The importance of creating an enticing atmosphere

Sound design is full of all kinds of nuances, and these vary from project to project; however, the need for filmmakers to understand the material remains. A good way for producers, directors and filmmakers to start editing sound for a production is to go through the whole script while figuring out the nature of every scene in it. That way, aspects such as background sound take on a much clearer connotation, as do the actions taking place. This is a good way to start elaborating the sound effects and possible atmospheres that could belong to a specific scene.

What about music and dialogue?

Both music and dialogue are pivotal in creating an enticing audiovisual project, and the two traditionally come to mind before sound design does. And while they are unequivocally vital in guiding the plot, they remain perhaps the most obvious elements of a project's sound design. Since every scene is different, each requires subtle yet specific manipulation of sound and sound effects to feel real and complete. Dialogue and music by themselves are simply not enough to build the framework within which films are conceived and constructed.

Background and atmospheric sounds drive the plot and allow the audience to understand clearly where a particular scene takes place. Normally these sounds are perceived subconsciously, given their quiet and repetitive nature; yet they are essential to every single scene because of the way they orient the audience. If background and atmospheric noise were removed from audiovisual projects, scenes would feel unfamiliar and even unnatural due to the lack of realism. Scenes on busy, crowded streets always include iconic noises like car horns, engines and indistinct chatter, whereas scenes in the woods include birds, wind and rustling grass. Both examples feel familiar to the audience because of the atmosphere created through background noise; without it, the scene would simply feel off.

How does action sound?

Now that we've covered how sound works on a subconscious level, the next part of sound design is helping the audience understand what the characters are doing in a particular scene. This is done through action sounds: a rather more subtle group of sounds, like those we hear when characters hold something, when their clothing moves, or when they interact with other characters or with inanimate objects. If a character is in an action-filled scene, like a physical confrontation, sound designers need to include the sounds of impacts, punches, clothing being moved and so on. Sound designers spend a lot of time on these scenes making sure that all the sounds the audience would hear if the fight were real match what the performers are playing. During a fight it is common to hear some hits louder than others, or even a combination of different types of impact sounds. The audience thus perceives the scene in a more realistic way, which is why the vast majority of sound designers strive to use effects that resemble real sound.

The same principle applies to the interactions characters have with inanimate objects. If a character is manipulating something made of metal, like a gun or a hammer, the sound designer needs to add the sound of a person handling that object. The same happens when characters use computers or mobile phones and we hear keys being pressed and phone beeps. These are sounds that may go unnoticed in real life; however, including them in a particular scene keeps the audience anchored in the storytelling. A well-crafted atmosphere includes these as well as the sounds mentioned in the first part of this article. Although quiet and subtle, both atmospheric and action sounds are key to giving the audience a compelling narrative; they are small elements whose tremendous value shows every time the audience remains engaged throughout an audiovisual project.

*The images used on this post are taken from Pexels.com

What Is Sound Design?

Despite being a key component of the film and audiovisual industry, sound design still holds an air of mystery. The most common myth about it is that it is all about creating new sounds, which is, at best, only partially true. One may easily assume that sound design is all about coming up with enticing, neat sound effects; however, that would not be fair to those who coined the term on Star Wars and Apocalypse Now. When we discuss sound design as a term, we need to go back to those films, where Ben Burtt and Walter Murch (on Star Wars and Apocalypse Now, respectively) found themselves working alongside directors who were not just trying to bolt attractive, powerful sound effects onto a structure they had already put in place.

It was by exploring the boundaries of sound, sound effects, music, dialogue and more, that sound began to play a pivotal role in storytelling, in many cases shaping the picture itself. These experiments resulted in something different from what directors and audiences were used to. Soundtracks have since changed the way people and directors understand film sound, yet there remains a rather unorthodox conception of what well-crafted sound design is. For many, it is about recording high-fidelity sounds and well-fabricated vocalizations, explosions or alien creatures; however, that is far from doing the term justice. A well-orchestrated, well-recorded musical composition provides minimal to zero value if it does not interact seamlessly with the film, and having actors and actresses deliver a myriad of lines in every shot does not necessarily act in the betterment of the production.

Sound, regardless of its nature, provides value and acts in the betterment of a film when it becomes part of the storytelling: when it resonates with what is being projected, changes dynamically over time, and makes the audience experience sensory feelings. Filmmakers should pay special attention to sound from the moment they have an idea in mind. Instead of treating sound as a mere component, they should strive to fabricate sounds, either on set or in a studio with a talented sound designer or composer, making it a pivotal contribution that influences the project in different ways. Think of films like Citizen Kane, Star Wars or Once Upon A Time In The West: these films were thoroughly thought out and produced in many ways, sound being one of them, yet no sound designer appears in the credits.

That does not mean every film should strive to mimic what the aforementioned films did in terms of sound; rather, lots of audiovisual projects could learn from them. Sound mixing varies from film to film, and there are films whose sound design is astonishing. Several sound practices actually begin long before production does: directors often have their actors and actresses hear the words and sounds surrounding their characters, making it possible for them to play their roles far better.

Other directors build their stories around the role sound plays within the storytelling framework, although many still have a lot to learn about sound's potential. There seems to be a paradigm around the role of sound: a generally accepted idea that good sound is only meant to enhance the images and visuals being projected. Such a paradigm makes sound a slave within the project, and its implications end up less important and less complex than they would be if directors let sound act as a free element throughout the whole process.

Another misconception around sound and sound design is that directors and filmmakers only need to start paying attention, or at least thinking seriously, about sound when the project is approaching its final stages and the filmmaking process and the structure of the film are practically settled. Many would say: how is a composer supposed to come up with an idea unless he can catch a glimpse of the final product? Some may argue there is nothing wrong with this practice, and yes: sometimes it works uncannily well. But what's the point in considering sound a collaborative, functional component of filmmaking if it is only addressed once every other process is over? For sound to reach its full potential, directors and filmmakers should embrace a collaborative framework from the start, allowing sound to exert some of its wonders on the whole filmmaking process.

*The images used on this post are taken from Pexels.com