Representing sound

Quick links

3.3.1 Number bases
3.3.2 Converting number bases
3.3.3 Units of information
3.3.4 Binary arithmetic
3.3.5 Character encoding
3.3.6 Representing images
3.3.7 Representing sound
3.3.8 Data compression

Syllabus content

Content: Understand that sound is analogue and that it must be converted to a digital form for storage and processing in a computer.

Content: Understand that sound waves are sampled to create the digital version of sound.
Understand that a sample is a measure of amplitude at a point in time.

Content: Describe the digital representation of sound in terms of:
• sampling rate
• sample resolution.
Additional information: Sampling rate is the number of samples taken in a second and is usually measured in hertz (1 hertz = 1 sample per second). Sample resolution is the number of bits per sample.

Content: Calculate sound file sizes based on the sampling rate and the sample resolution.
Additional information: File size (bits) = sampling rate × sample resolution × number of seconds
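As a quick check of the formula, here is a minimal sketch in Python; the sample rate, resolution and length are made-up values for illustration only.

  # File size (bits) = sampling rate x sample resolution x seconds
  def sound_file_size_bits(sampling_rate_hz, sample_resolution_bits, seconds):
      return sampling_rate_hz * sample_resolution_bits * seconds

  # Illustrative values: 8,000 Hz, 8 bits per sample, 30-second recording
  bits = sound_file_size_bits(8000, 8, 30)
  print(bits, "bits =", bits // 8, "bytes")   # 1,920,000 bits = 240,000 bytes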

Starter 1

This is a monochrome image (2-bit).

Calculate the file size using the information given in the previous work.

This is a colour (3-bit) image. (There are no marks for comments on the quality of the image, please!)

Calculate the file size using the information given in the previous work.

How many different colours can a 3-bit image have?
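The working follows the image file-size formula from the previous topic (width × height × colour depth). The sketch below shows the method only; the 8 × 8 size is an assumption, so swap in the dimensions of the images above.

  # Image file size (bits) = width x height x colour depth (bits per pixel)
  width, height = 8, 8     # assumed size - use the real dimensions of the image above
  colour_depth = 3         # bits per pixel for the 3-bit colour image
  print(width * height * colour_depth, "bits")     # 192 bits for the assumed 8 x 8 image
  print(2 ** colour_depth, "possible colours")     # 2 to the power 3 = 8 colours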

Explanation

Digital audio

This is the graphic representation of some digital music stored on a PC; in fact, it is the graphic representation of part of the tune shown under the image.

Digital audio is collected from the real world by sampling the sound periodically (but very often) and converting the sound at that time into a number. If this is done often enough and quickly enough then the stored sound will be a reasonable representation of the actual sound.

We hear sound as our ears process changes in air pressure. The sound that caused the air pressure to change can also be captured by a microphone and converted into digital form by an analogue-to-digital converter (ADC).

The sound is sampled by the computer.
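The sketch below gives a rough picture of what sampling means, assuming a pure 440 Hz tone and 16-bit samples; it illustrates the idea rather than how a real ADC works internally.

  import math

  sample_rate = 8000        # samples taken per second (Hz)
  clip_length = 0.01        # seconds of sound to sample
  frequency = 440           # pitch of the analogue tone being recorded (Hz)
  max_value = 2 ** 15 - 1   # largest value a signed 16-bit sample can hold

  samples = []
  for n in range(int(sample_rate * clip_length)):
      t = n / sample_rate                                  # time of this sample
      amplitude = math.sin(2 * math.pi * frequency * t)    # analogue level, -1 to 1
      samples.append(round(amplitude * max_value))         # quantise to a 16-bit integer

  print(len(samples), "samples taken:", samples[:5])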

Digital audio quality

Factors that affect the quality of digital audio include:

  • sample rate - the number of audio samples captured every second
  • bit depth - the number of bits available for each sample
  • bit rate - the number of bits used per second of audio

Sample rate

The sample rate is how many samples, or measurements, of the sound are taken each second. The more samples that are taken, the more detail about where the waves rise and fall is recorded and the higher the quality of the audio. Also, the shape of the sound wave is captured more accurately.

Each sample represents the amplitude of the sound wave at a specific point in time. The amplitude is stored as either an integer or a floating point number and encoded as a binary number.

A common audio sample rate for music is 44,100 samples per second. The unit for the sample rate is hertz (Hz). 44,100 samples per second is 44,100 hertz or 44.1 kilohertz (kHz).

Telephone networks and VOIP services can use a sample rate as low as 8 kHz. This uses less data to represent the audio. At 8 kHz, the human voice can still be heard clearly - but music at this sample rate would sound low quality.
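As a rough comparison of the two sample rates mentioned above (the three-minute clip length is an illustrative assumption):

  # Number of samples needed at the two sample rates mentioned above
  for name, rate in [("music (44.1 kHz)", 44100), ("telephone (8 kHz)", 8000)]:
      print(name, "-", rate, "samples per second,", rate * 180, "samples in a 3-minute clip")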

Bit depth

Bit depth is the number of bits available for each sample. The higher the bit depth, the higher the quality of the audio. Bit depth is usually 16 bits on a CD and 24 bits on a DVD.

A bit depth of 16 has a resolution of 65,536 possible values, but a bit depth of 24 has over 16 million possible values.

16-bit resolution means each sample can be any binary value between 0000 0000 0000 0000 and 1111 1111 1111 1111.

A table showing bit depth and binary value.

32,768 + 16,384 + 8192 + 4096 + 2048 + 1024 + 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 65,535, which together with zero gives 65,536 possible values.

24-bit means the maximum binary number is 1111 1111 1111 1111 1111 1111, which is 16,777,215; including zero, this gives 16,777,216 possible values.
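The counts above are just powers of two, which a short loop can confirm:

  for bit_depth in (16, 24):
      values = 2 ** bit_depth
      print(bit_depth, "bits:", values, "possible values, largest stored value", values - 1)
  # 16 bits: 65,536 values (0 to 65,535); 24 bits: 16,777,216 values (0 to 16,777,215)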

When an audio file is created it has to be encoded as a particular file type. Uncompressed audio files are made when high-quality recordings are created. High-quality audio is created as PCM (pulse-code modulation) data and stored in a file format such as WAV or AIFF.

Bit rate

The bit rate of a file tells us how many bits of data are processed every second. Bit rates are usually measured in kilobits per second (kbps).

Calculating bit rate

The bit rate is calculated using the formula:

Sample rate × bit depth × channels = bit rate

A typical, uncompressed high-quality audio file has a sample rate of 44,100 samples per second, a bit depth of 16 bits per sample and 2 channels of stereo audio. The bit rate for this file would be:

44,100 samples per second × 16 bits per sample × 2 channels = 1,411,200 bits per second (or 1,411.2 kbps)

A four-minute (240 second) song at this bit rate would create a file size of:

1,411,200 × 240 = 338,688,000 bits (or about 40.37 megabytes)
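The same working as a short sketch, using the figures from the example above:

  sample_rate = 44100     # samples per second
  bit_depth = 16          # bits per sample
  channels = 2            # stereo
  seconds = 240           # four-minute song

  bit_rate = sample_rate * bit_depth * channels      # 1,411,200 bits per second
  file_size_bits = bit_rate * seconds                # 338,688,000 bits
  file_size_mb = file_size_bits / 8 / 1024 / 1024    # about 40.37 megabytes
  print(bit_rate, file_size_bits, round(file_size_mb, 2))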

Compression

Compression is a useful tool for reducing file sizes. When images, sounds or videos are compressed, data is removed to reduce the file size. This is very helpful when streaming and downloading files.

Streamed music and downloadable files, such as MP3s, are usually between 128 kbps and 320 kbps - much lower than the 1,411 kbps of an uncompressed file.
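Using those bit rates, a rough sketch of how much smaller the four-minute song from the earlier example becomes when streamed at 128 kbps (sizes rounded):

  seconds = 240              # the four-minute song from the earlier example
  for name, kbps in [("uncompressed", 1411.2), ("128 kbps MP3", 128)]:
      megabytes = kbps * 1000 * seconds / 8 / 1024 / 1024
      print(name, "-", round(megabytes, 1), "MB")   # about 40.4 MB vs about 3.7 MB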

Videos are also compressed when they are streamed over a network. Streaming HD video requires a high-speed internet connection. Without it, the user would experience buffering and regular drops in quality. HD video is usually around 3 Mbps; SD is around 1,500 kbps.

Compression can be lossy or lossless.

Lossless compression means that as the file size is compressed, the audio quality remains the same - it does not get worse. Also, the file can be restored back to its original state. FLAC and ALAC are open source lossless compression formats. Lossless compression can reduce file sizes by up to 50% without losing quality.

Lossy compression permanently removes data. For example, a WAV file compressed to an MP3 would be lossy compression. The bit rate could be set at 64 kbps, which would reduce the size and quality of the file. However, it would not be possible to recreate a 1,411 kbps quality file from a 64 kbps MP3.

With lossy compression, data that the listener is unlikely to notice is permanently discarded to reduce the file size, and the bit rate is reduced (often becoming variable).

MP3 and AAC are lossy compressed audio file formats widely supported on different platforms. MP3 and AAC are both patented codecs. Ogg Vorbis is an open source alternative for lossy compression.

Not all audio file formats will work on all media players.

Digital video

A digital film is created from a series of static images played at high speed. Digital films usually run at around 24 frames per second, but frame rates can reach 100 frames per second or more.

Films have a frame rate, measured in frames per second (fps). This is similar to sample rate. HD film is normally 50 or 60 fps. This can also be expressed as a frequency in hertz (Hz). TV and computer screens have a specification in Hz to indicate the frame rate they support.

Digital films also have a bit rate that accounts for the total audio and image data processed every second.
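A small sketch linking frame rate to the number of still images in a clip (the two-minute length is an assumption for illustration):

  frame_rate = 24      # frames per second, typical for film
  seconds = 120        # a two-minute clip (assumed length)
  print(frame_rate * seconds, "still images make up the clip")   # 2,880 frames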

Video compression

Videos are compressed by:

  • reducing the resolution
  • reducing the dimensions
  • reducing the bit rate

Data lost during the compression process can cause poor picture quality or even random coloured blocks that appear and disappear on the screen. These blocks are called artefacts.

Examples of popular lossy video file formats include MP4 and MOV. Video file formats use codecs to carry out compression algorithms on the video's picture and audio data.

Codecs and compression algorithms

Codecs are programs that encode data as usable files, whether images, audio or video. Compression codecs are designed to remove data without losing quality (where possible). Algorithms work out which data can be removed to reduce the file size.

Run length encoding (RLE)

One of the simplest examples of compression is RLE. RLE is a basic form of data compression that converts consecutive identical values into a code consisting of the character and the number marking the length of the run. The more similar values there are, the more values can be compressed. The sequence of data is stored as a single value and count.

For example, for a minute of a scene filmed at a beach there would be similar colours on screen for the duration of the shot, such as the blues of the sky and the sea, and the yellows of the sand. Each frame would contain similar data so the file doesn't need to record all the colours each time. Compression software understands that it's seeing the same colours over and over again so it can recycle some of the data it has captured before, rather than storing every detail of every frame.
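A minimal sketch of run length encoding applied to a string of pixel-colour codes (the input string is made up for illustration):

  def run_length_encode(data):
      # Convert runs of identical values into (value, run length) pairs
      encoded = []
      count = 1
      for previous, current in zip(data, data[1:]):
          if current == previous:
              count += 1
          else:
              encoded.append((previous, count))
              count = 1
      if data:
          encoded.append((data[-1], count))
      return encoded

  print(run_length_encode("BBBBBYYYBB"))   # [('B', 5), ('Y', 3), ('B', 2)]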

 

 

Exercise

Use this link and explain in your book the difference between lossless and lossy compression in as few words as possible.

 

Extension

 


Glossary and other links

Glossary of computing terms.

AQA 8520: The 2016 syllabus