How to get faster fiber-optic pipes through computation
Ars Technica 2015-08-20
The information age demands fat pipes. But making fat pipes is not always as easy as it sounds. Consider our current generation of fiber-optic communications. Compared to microwave systems, where every symbol can carry something like one or two bytes (8 to 16 bits) of data, most current optical systems are limited to between one and four bits per symbol.
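To make that comparison concrete, here's a quick sketch. The modulation formats below are standard textbook examples rather than ones named in this article: the number of bits a symbol can carry is just the base-2 logarithm of the number of distinct symbols (the constellation size) the system can transmit.

```python
from math import log2

# Bits per symbol = log2(constellation size M).
# Simple on-off keying carries 1 bit; high-order QAM (common in
# microwave links) carries many more per symbol.
for name, M in [("OOK", 2), ("QPSK", 4), ("16-QAM", 16), ("1024-QAM", 1024)]:
    print(f"{name}: {int(log2(M))} bits per symbol")
```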
This hasn’t mattered so much because many lasers, each at a different wavelength (called a channel), can share the same fiber, and the rate at which we send those bits is astonishingly high. Single-channel capacities are well in excess of 40Gb/s: 40Gb/s was already in testing the last time I taught a telecommunications course, and by 2012 various companies were testing 160Gb/s per channel. These incredible capacities, however, are achieved under very stringent conditions: the optical power must remain low, and the optical properties of the fiber must be carefully controlled (something called dispersion management).
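The arithmetic behind a fiber's total capacity is straightforward: aggregate throughput is the number of wavelength channels times the per-channel rate. The channel count below is an assumed, illustrative figure (the article doesn't give one); the per-channel rate comes from the text.

```python
# Rough wavelength-division-multiplexing arithmetic (illustrative only):
# total fiber capacity = number of wavelength channels x per-channel rate.
channels = 80           # assumed dense-WDM channel count, not from the article
per_channel_gbps = 40   # per-channel line rate quoted in the text
print(f"Aggregate capacity: {channels * per_channel_gbps / 1000:.1f} Tb/s")
```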
The increase from 40Gb/s to 160Gb/s also represented the switch from encoding one bit per symbol to four bits per symbol. These encoding schemes, however, require considerably more optical power per channel, which runs headlong into the stringent conditions mentioned above. That has made increases beyond four bits per symbol difficult. Funnily enough, everyone has kind-of-sorta known how to solve the problem, but no one was willing to simply bite the bullet and do it. At least until now.
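One detail worth spelling out: the 4x jump in data rate can come entirely from packing more bits into each symbol, without signaling any faster. The fixed 40-gigabaud symbol rate below is an assumption consistent with the quoted rates, not a figure from the article.

```python
# Data rate = symbol rate x bits per symbol.
# At a fixed symbol rate, going from 1 to 4 bits/symbol quadruples throughput.
symbol_rate_gbaud = 40  # assumed fixed symbol rate (not stated in the article)
for bits in (1, 4):
    print(f"{bits} bit(s)/symbol -> {symbol_rate_gbaud * bits} Gb/s")
```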