By James Reinholm

Although most people have probably never given it much thought, the coaxial cable is one of the most important inventions ever made. Telecommunications and radio broadcasting would not exist as they are today without it.

Coaxial cables first started to appear in various applications back in the 1930s, as the need developed for more efficient cabling with less interference. As more coaxial cables came into use, standardised versions became available. Probably the most important parameter in coaxial cabling is the characteristic impedance.

This is the main electrical characteristic that determines the level of power transfer and attenuation along the cable length, and also controls the amount of reflected and standing waves. Any type of coaxial cable is typically chosen based on the characteristic impedance. The main consideration is that impedance levels should match both at the transmitting and receiving end.

Although there are many standard impedance levels, the most common by far are 50Ω and 75Ω. These two standards cover most coaxial cable applications, though others are available in lesser quantities. For ordinary signal and data transmission, the cable chosen is almost always the 50Ω type, while the 75Ω type is used almost exclusively for video signals and high-frequency RF applications, such as VHF (Very High Frequency) and UHF (Ultra High Frequency).

Cable impedance is one of the characteristics that defines how well a cable will transmit signals. Just as light waves can be reflected when traveling from one medium to another, electric waves can be reflected when traveling through conductors with different impedances. The ratio of the reflected wave to the signal input wave in a coaxial cable is given by:

Vref / Vsig = (Zload – Zcable) / (Zload + Zcable)

Zload is the impedance of the load at the end of the cable, and Zcable is the impedance of the cable itself. From this, it is easily seen that the impedance of the load must match the impedance of the cable for the reflection to become zero. Similarly, the source of the signal should have an impedance equal to that of the cable. When the impedances match, the signal is transferred to the end of the cable without reflecting back down it.

As the length of the cable increases, it behaves more like a transmission line, and it becomes more important to make sure the impedance of the cable matches the termination impedance at the receiving end. For shorter cables, with lengths less than about 1/10 of the wavelength of the carried signal, transmission line characteristics don't apply. In these cases there is usually no need to match impedance levels, and the basic principles of circuit analysis can be employed instead.
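The reflection ratio given above is easy to evaluate numerically. A minimal sketch in Python (the example impedance values are chosen purely for illustration):

```python
def reflection_coefficient(z_load: float, z_cable: float) -> float:
    """Ratio of reflected to incident voltage at the cable/load boundary:
    Vref / Vsig = (Zload - Zcable) / (Zload + Zcable)."""
    return (z_load - z_cable) / (z_load + z_cable)

# A matched load reflects nothing:
print(reflection_coefficient(50.0, 50.0))   # 0.0

# A 75-ohm load on a 50-ohm cable reflects 20% of the incident voltage:
print(reflection_coefficient(75.0, 50.0))   # 0.2
```

Note that the sign of the result indicates the polarity of the reflected wave: a load larger than the cable impedance reflects with the same polarity, a smaller load with inverted polarity.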

For higher frequencies (RF), the wavelength decreases proportionally, and eventually, transmission line theory will have to be used in addition to circuit analysis. In these cases, it is also important to minimise reflections because they can result in standing waves, which can cause additional power losses and even dielectric breakdown for high power signals.
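The severity of the standing waves mentioned above is conventionally expressed as the voltage standing wave ratio (VSWR), a standard quantity derived from the magnitude of the reflection coefficient. A short sketch:

```python
def vswr(gamma: float) -> float:
    """Voltage standing wave ratio from the reflection coefficient:
    VSWR = (1 + |gamma|) / (1 - |gamma|)."""
    g = abs(gamma)
    return (1.0 + g) / (1.0 - g)

print(vswr(0.0))   # 1.0 -> perfectly matched line, no standing wave
print(vswr(0.2))   # 1.5 -> e.g. a 75-ohm load on a 50-ohm cable
```

A VSWR of 1.0 corresponds to a perfectly matched line; larger values mean stronger standing waves and greater risk of the power losses and dielectric breakdown described above.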

Transmission line theory is a very complex subject and as such is beyond the scope of this article. However, the essential lesson is simple: just match the source impedance with the cable impedance and also match this with the value of the receiving end’s impedance. This not only achieves minimum signal reflection, but in doing so it also maximises power transfer.

When using the transmission line concept, a coaxial cable can be represented as a series of capacitances and inductances. In this way, it behaves somewhat like a low-pass filter, where the cable passes most of the signal at lower frequencies and attenuates the signal at higher frequencies.

For frequencies above about 1MHz, the characteristic impedance of a coaxial cable depends only on the dielectric constant of the inner insulator and the ratio of the diameter of the inner conductor to the inner diameter of the outer conductor (shield). Unlike the impedances of individual capacitors and inductors, the coaxial cable impedance is independent of cable length and frequency in this range: it is about the same for short and long cables, and for 2MHz and 20MHz signals.

In deriving the formula for characteristic impedance, the resistance per unit length can be neglected. The impedance can then be found from the capacitance per unit length (C/h) and the inductance per unit length (L/h):

Zo = √( (L/h) / (C/h) ) = √(L/C)

where Zo is the characteristic impedance.

The shunt capacitance per unit length, in farads per metre, is given by:

C/h = 2πε / ln(D/d)

where D is the inside diameter of the shield, d is the outside diameter of the inner conductor, h is the length of the cable, and ε is the dielectric constant (permittivity) of the insulator.

The series inductance per unit length, in henrys per metre, is given by:

L/h = (μ/2π) ln(D/d)

where μ is the magnetic permeability of the insulator.

When the square root of the ratio L/C is calculated, taking μ as the permeability of free space, the result is:

Zo = (1/2π) √(μ/ε) ln(D/d) ≈ (138/√er) log10(D/d) Ω

where er is the dielectric constant relative to free space. This calculation assumes that er doesn't vary much over the operating range of the cable.
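The resulting relation, Zo ≈ (138/√er)·log10(D/d) Ω, can be evaluated directly. A sketch in Python, using roughly RG-58-like dimensions (the exact figures here are assumptions for illustration, not manufacturer data):

```python
import math

def characteristic_impedance(D_mm: float, d_mm: float, er: float) -> float:
    """Zo = (138 / sqrt(er)) * log10(D/d) ohms, valid above ~1 MHz.
    D: inside diameter of the shield, d: outside diameter of the
    inner conductor, er: relative dielectric constant of the insulator."""
    return (138.0 / math.sqrt(er)) * math.log10(D_mm / d_mm)

# Assumed RG-58-like geometry: solid polyethylene dielectric
# (er ~ 2.25), shield inside diameter ~ 2.95 mm, inner conductor ~ 0.9 mm.
z0 = characteristic_impedance(2.95, 0.9, 2.25)
print(round(z0, 1))  # in the neighbourhood of 50 ohms
```

Note that the impedance depends only on the diameter ratio D/d and on er, consistent with the observation above that it is independent of cable length and frequency.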

There are a few reasons why 50Ω and 75Ω were chosen as the standard characteristic impedances. For a coaxial cable with air as the dielectric, minimum attenuation occurs at around 75–77Ω. With other dielectric materials, the minimum-attenuation impedance drops to a value between 52Ω and 64Ω. The second consideration is power-handling capability, which is maximised at about 30Ω regardless of the dielectric used.

Cables used for antennas and the reception of high-frequency radio waves carry negligible power, so they are optimised for low attenuation in high-frequency RF applications such as UHF and VHF. This is the main reason for using the standard 75Ω type of coaxial cable, and it is deployed almost exclusively for these higher-frequency RF and telecommunication applications. 75Ω is also convenient for dipole antennas because of their closely matching impedance, which minimises reflection losses.

50Ω coaxial cables are the most commonly used coaxial cables and they are usually found in radio transmitters and receivers, lab equipment, and in networking applications. The standard 50Ω was chosen because it was a compromise between power-handling capability and attenuation (approximately midway between 30 Ω and 77 Ω). 50Ω impedances also work out well because they happen to be a close match to the drive impedance of a half-wave dipole and quarter-wave monopole.

Another reason for choosing 50Ω as a standard impedance is that the diameters of the inner and outer conductors are at their "natural" sizes for 50Ω impedance levels (the cable dimensions become irregular at other levels). The diameters for the 50Ω size of cable make it much easier to manufacture.

Although 50Ω and 75Ω cable types are almost always used, other cable impedances can still be obtained for special applications. For digital signal transmission, a higher impedance such as RG-62 (93Ω) is normally used because of its lower capacitance per unit length. In these applications, power handling and attenuation matter less than capacitance, since higher capacitance slows down the edge transitions of a digital signal. Lower impedances are also available, such as the 25Ω miniature RF cable, which is often used in magnetic-core broadband transformers.
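The capacitance argument can be made concrete. Since Zo = √(L/C) and the propagation velocity in the cable is c/√er, the capacitance per unit length works out to C/h = √er / (c·Zo). A sketch, where the er values are typical assumptions rather than datasheet figures:

```python
def capacitance_per_metre(z0: float, er: float) -> float:
    """Capacitance per unit length in pF/m: C/h = sqrt(er) / (c * Zo).
    Follows from Zo = sqrt(L/C) and velocity v = 1/sqrt(LC) = c/sqrt(er)."""
    c0 = 299_792_458.0  # speed of light in vacuum, m/s
    return (er ** 0.5) / (c0 * z0) * 1e12

# 50-ohm cable with solid polyethylene (er ~ 2.25 assumed): ~100 pF/m
print(round(capacitance_per_metre(50.0, 2.25)))
# 93-ohm RG-62 with a mostly-air dielectric (er ~ 1.5 assumed): ~44 pF/m
print(round(capacitance_per_metre(93.0, 1.5)))
```

The higher-impedance cable carries less than half the capacitance per metre, which is exactly why it loads digital signal edges less.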

Molex offers a wide range of 50Ω and 75Ω cable assemblies, and they also have a team of highly skilled experts that can provide a custom cable solution for almost any kind of specialized application.
