by Martin LaMonica  /  28 May 2014

Researchers in Japan have come up with a way of doing quantum cryptography that could overcome two of the technology’s big problems. The new protocol is designed to work with off-the-shelf equipment and use less bandwidth than existing methods. It’s just a mathematical proposal, but it could help make quantum key distribution more commercially viable.

With an encrypted message, the sender and recipient share a key that unscrambles its contents. Ensuring that the key hasn’t been stolen is the problem. With quantum cryptography, the key is created at the sender and receiver by transmitting photons over fiber-optic lines. The polarization of a photon—a quantum property that says whether it is oscillating vertically or at an angle—can be determined by the receiver and compared with a second “entangled” photon created at the same time. The polarization of the photons is translated into bits that make up a key to decrypt messages. With quantum key distribution, the security of the transmission is assured by the Heisenberg uncertainty principle: if an eavesdropper tries to intercept the key, the attempt will change the state of the paired photons—an event the sender of the key can detect.

In research published in Nature last week, the Japanese team describes a method for securing communications that doesn’t rely on the uncertainty principle and needs no regular measurements to check whether the key has been tampered with. With this technique, photons are sent over an optical fiber using ordinary lasers rather than the specialized equipment usually needed to create quantum keys. The laser emits a train of photons, and a device called a phase modulator imparts a phase to each of them. The receiver splits the signal into two separate signals with a randomly generated delay between them. Those two signals, which are oscillating waves, are then superimposed and detected on the receiving end. The combined waves could be out of phase and cancel each other out, or they could be in phase and create a bigger wave.
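The superposition step can be captured in one line of math: two equal-amplitude pulses with phase difference Δφ combine to an intensity proportional to cos²(Δφ/2). A minimal sketch of that relationship (the function name and normalization are illustrative, not from the paper):

```python
import math

# Two superimposed pulses of equal amplitude with phase difference
# dphi interfere to a combined intensity proportional to
# cos^2(dphi / 2): in phase they reinforce, out of phase they cancel.
def combined_intensity(dphi):
    return math.cos(dphi / 2) ** 2

print(combined_intensity(0))                   # 1.0: a bigger wave
print(round(combined_intensity(math.pi), 12))  # 0.0: cancellation
```

It is this intensity contrast that lets the receiver distinguish in-phase from out-of-phase pulse pairs.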

The phase difference between pulses can then act as bits that make up a key to decrypt the message. For example, pulses with the same phase carry a bit value of zero, while pulses with different phases carry a bit value of one. When the receiver—who, by convention, is called Bob—detects a photon, he learns whether the superimposed pulses have the same or different phase. He then tells the sender, called Alice, what the relevant pulse numbers are. Because Alice records all the pulses, she can determine the bit value based on what Bob tells her, explains co-author Masato Koashi of the University of Tokyo. In an email, Koashi describes how the key is protected from theft by an intruder, called Eve:

One of the keys to securing the communication is to send a large number of optical pulses but they are very weak such that they amount to only a few photons in total. Hence, even if Eve waits for Bob to announce the numbers for two pulses and then measures Alice’s signal, the chances of Eve’s detecting any photon in the two relevant pulses are very low.

Another key is the fact that Bob generates the delay randomly. Eve may measure Alice’s signal immediately and learn the phases of a few pulses. Eve then tries to manipulate Bob’s announcement to fall on those pulses for which she has learned the phases. The random delay prevents such a manipulation.
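The announcement-and-lookup procedure described above can be sketched in a few lines. This toy simulation covers only the classical sifting logic; the function name, pulse counts, and delay range are invented for illustration, and a real system adds error correction and privacy amplification:

```python
import random

def sift_key(num_pulses=1000, num_detections=100, max_delay=5, seed=0):
    """Toy sketch of the sifting step only. Alice records a random
    phase (0 or pi, encoded as 0/1) for every pulse in the train.
    Bob's interferometer superposes pulse i with pulse i + d for his
    randomly chosen delay d; a detection tells him only whether the
    two phases were equal (bit 0) or different (bit 1). He announces
    the pair (i, i + d), and Alice recovers the same bit by looking
    up her own records -- the phases themselves are never announced."""
    rng = random.Random(seed)
    alice_phases = [rng.randint(0, 1) for _ in range(num_pulses)]
    key_alice, key_bob = [], []
    for _ in range(num_detections):
        d = rng.randint(1, max_delay)            # Bob's random delay
        i = rng.randint(0, num_pulses - 1 - d)
        relative_phase = alice_phases[i] ^ alice_phases[i + d]
        key_bob.append(relative_phase)           # what Bob's detector sees
        key_alice.append(relative_phase)         # what Alice's records give
    return key_alice, key_bob

a, b = sift_key()
print(a == b)   # True: both ends derive identical key bits
```

Note how Eve learns nothing from Bob's announcement alone: the indices are public, but the phases behind them are not.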

By using this method, the protocol doesn’t require regular measurements of the transmissions to discover eavesdroppers, as existing quantum key distribution systems do. That significantly reduces the amount of communications overhead required, which is important in situations where the communication channels are noisy, such as when trying to communicate with a satellite during a snowstorm, Koashi says. The work is still theoretical, but an experiment could be done without specialized equipment. “What you need is just to use a conventional laser and a phase modulator, which is already used in digital optical communications,” he says. However, it’s still not clear that the performance of the system would be good enough, he adds.

There are already commercial quantum key distribution systems, although their use is fairly limited. Dedicated dark fiber lines are needed, and the distance for sending secured data is limited to about 100 kilometers. But there continues to be a significant amount of work on extending the range of quantum cryptography and using shared fiber-optic lines. According to Koashi, researchers in the field expressed surprise at the Nature paper because the security is achieved by transmitting photons, yet Heisenberg’s uncertainty principle isn’t relevant. “For them, it’s quite a surprise that something different can be done to create secret communications,” he says.
Quantum Cryptography Done Over Shared Data Line
by Martin LaMonica  /  28 Apr 2014

Researchers have sent quantum keys over a “lit” fiber-optic network, a step toward using quantum cryptography on the networks businesses and institutions use every day. A consortium of U.K.-based research groups said last week that the demonstration opens the door to more research that will make the technology more commercially viable. The researchers were from Toshiba Research Europe, BT, ADVA Optical Networking, and the U.K.’s National Physical Laboratory (NPL).

In quantum cryptography, the keys to unlock the contents of communications are represented with photons. It starts with a laser that sends a pair of photons over a fiber-optic network. The polarization of the photons—whether they’re oscillating horizontally or vertically, for example—can be detected by a receiver and read as bits, which are used to generate the same encryption key at both ends of the network connection. If an interloper attempts to intercept the keys to decrypt a message, the receiver will be able to detect a change, according to the laws of quantum mechanics. If that happens, the receiver can reject the keys and the message stays encrypted.

Until now, quantum key distribution (QKD) has been done over dark fiber, or unused optical fiber lines, which means that a separate fiber-optic line is needed for transmitting other data. But dark fiber networks are not always available and are expensive. Being able to transmit quantum keys over a lit fiber network means that institutions and businesses will be able to run quantum cryptography over their existing networking infrastructure, the researchers said. “Using techniques to filter out noise from the very weak quantum signals, we’ve shown that QKD can be operated on optical fibers installed in the ground and carrying conventional data signals,” said Andrew Shields from Toshiba Research Europe in a statement.

The National Physical Laboratory developed a series of measurements for identifying individual particles of light in the stream of photons sent over a fiber-optic line. That will allow the system to detect attempts to intercept the transmission of keys, which should improve customer confidence in quantum cryptography, said Alastair Sinclair of the NPL in a statement. The test was conducted over a live BT fiber link between its research campus in Suffolk and another BT site in Ipswich, U.K. In an interview with Nature, Toshiba’s Shields said the quantum key distribution was done alongside data transmitted at 40 gigabits per second, the fastest multiplexing of regular data with quantum keys to date. But he notes that implementing QKD in the “real world” is more challenging than in a laboratory environment because environmental fluctuations can cause data loss in fiber lines.

Another technical challenge facing widespread use of QKD is the distance keys can be sent. Light pulses sent over a fiber optic line fade, which means that key distribution can only be done at a distance of about 100 kilometers. (See Long-Distance Quantum Cryptography.) But as governments and companies seek out the most secure ways to send data, quantum cryptography could become an appealing option.

Computer-aided design drawing of the optical module on the satellite showing the telescope and gimbal (pivoted support). Credit: NASA.

First Broadband Wireless Connection…to the Moon / May 22, 2014

If future generations were to live and work on the moon or on a distant asteroid, they would probably want a broadband connection to communicate with home bases back on Earth. They may even want to watch their favorite Earth-based TV show. That may now be possible thanks to a team of researchers from the Massachusetts Institute of Technology’s (MIT) Lincoln Laboratory who, working with NASA last fall, demonstrated for the first time that a data communication technology exists that can provide space dwellers with the connectivity we all enjoy here on Earth, enabling large data transfers and even high-definition video streaming.

At CLEO: 2014, being held June 8-13 in San Jose, California, USA, the team will present new details and the first comprehensive overview of the on-orbit performance of their record-shattering laser-based communication uplink between the moon and Earth, which beat the previous record transmission speed last fall by a factor of 4,800. Earlier reports have stated what the team accomplished, but have not provided the details of the implementation. “This will be the first time that we present both the implementation overview and how well it actually worked,” says Mark Stevens of MIT Lincoln Laboratory. “The on-orbit performance was excellent and close to what we’d predicted, giving us confidence that we have a good understanding of the underlying physics,” Stevens says.

The team made history last year when their Lunar Laser Communication Demonstration (LLCD) transmitted data over the 384,633 kilometers between the moon and Earth at a download rate of 622 megabits per second, faster than any radio-frequency (RF) system. They also transmitted data from Earth to the moon at 19.44 megabits per second, 4,800 times faster than the best RF uplink ever used. “Communicating at high data rates from Earth to the moon with laser beams is challenging because of the 400,000-kilometer distance spreading out the light beam,” Stevens says. “It’s doubly difficult going through the atmosphere, because turbulence can bend light—causing rapid fading or dropouts of the signal at the receiver.”
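The quoted figures pin down the scale of the achievement. A quick back-of-the-envelope check, using only the numbers and distance given in the article:

```python
# Back-of-the-envelope checks on the quoted LLCD figures.
laser_uplink_mbps = 19.44
speedup = 4800
# Implied speed of the best previous RF uplink:
rf_uplink_kbps = laser_uplink_mbps * 1000 / speedup
print(f"previous RF uplink: {rf_uplink_kbps:.2f} kb/s")   # ~4.05 kb/s

# Light-travel time over the quoted Earth-moon distance:
distance_km = 384_633
speed_of_light_km_s = 299_792.458
latency_s = distance_km / speed_of_light_km_s
print(f"one-way latency: {latency_s:.2f} s")              # ~1.28 s
```

In other words, the best previous uplink ran at dial-up-era kilobit rates, and no technology can do anything about the 1.28-second one-way light delay.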

The ground terminal, with the sun reflecting off of the solar windows of the uplink telescopes. Credit: Robert LaFon, NASA/GSFC.

To outmaneuver problems with fading of the signal over such a distance, the demonstration uses several techniques to achieve error-free performance over a wide range of optically challenging atmospheric conditions, in both darkness and bright sunlight.

A ground terminal at White Sands, New Mexico, uses four separate telescopes to send the uplink signal to the moon. Each telescope is about 6 inches in diameter and fed by a laser transmitter that sends information coded as pulses of invisible infrared light. The total transmitter power is the sum of the four separate transmitters, which results in 40 watts of power. The reason for the four telescopes is that each one transmits light through a different column of air that experiences different bending effects from the atmosphere, Stevens says. This increases the chance that at least one of the laser beams will interact with the receiver, which is mounted on a satellite orbiting the moon.

This receiver uses a slightly narrower telescope to collect the light, which is then focused into an optical fiber similar to fibers used in terrestrial fiber-optic networks. From there, the signal in the fiber is amplified about 30,000 times. A photodetector converts the pulses of light into electrical pulses that are in turn converted into data bit patterns that carry the transmitted message. Of the 40 watts sent by the transmitter, less than a billionth of a watt is received at the satellite—but that’s still about 10 times the signal necessary to achieve error-free communication, Stevens says. Their CLEO: 2014 presentation will also describe how the large margins in received signal level can allow the system to operate through partly transparent thin clouds in the Earth’s atmosphere, which the team views as a big bonus.
“We demonstrated tolerance to medium-size cloud attenuations, as well as large atmospheric-turbulence-induced signal power variations, or fading, allowing error-free performance even with very small signal margins,” Stevens says. While the LLCD design is directly relevant for near-Earth missions and those out to Lagrange points (areas where the forces between rotating celestial bodies are balanced, making them a popular destination for satellites), the team predicts that it is also extendable to deep-space missions to Mars and the outer planets.
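The power figures above imply the rough link budget. A sketch of the arithmetic, treating "less than a billionth of a watt" as 1 nanowatt for illustration:

```python
import math

# Rough link budget from the article's figures. "Less than a
# billionth of a watt" is taken as 1 nanowatt for illustration.
p_tx_w = 40.0       # total transmitter power, four telescopes combined
p_rx_w = 1e-9       # received power at the lunar satellite
loss_db = 10 * math.log10(p_tx_w / p_rx_w)
margin_db = 10 * math.log10(10)   # "about 10 times" the error-free threshold
print(f"end-to-end loss: >= {loss_db:.0f} dB")    # >= 106 dB
print(f"link margin:      ~ {margin_db:.0f} dB")  # ~ 10 dB
```

A loss of more than 106 decibels, with only about 10 decibels to spare, is why beam spreading and atmospheric fading dominate the design.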

Presentation SM4J.1, titled “Overview and On-orbit Performance of the Lunar Laser Communication Demonstration Uplink,” will take place Monday, June 9, at 4:00 p.m. in Meeting Room 212 A/C of the San Jose Convention Center.

About CLEO: With a distinguished history as the industry’s leading event on laser science, the Conference on Lasers and Electro-Optics (CLEO) is the premier international forum for scientific and technical optics, uniting the fields of lasers and opto-electronics by bringing together all aspects of laser technology, from basic research to industry applications. CLEO: Expo showcases the latest products and applications from more than 300 participating companies from around the world, providing hands-on demonstrations of the latest market innovations and applications. The Expo also offers valuable on-floor programming, including Market Focus and the Technology Transfer program. Sponsored by the American Physical Society’s (APS) Laser Science Division, the IEEE Photonics Society, and The Optical Society (OSA), CLEO provides the full range of critical developments in the field, showcasing the most significant milestones from laboratory to marketplace. With an unparalleled breadth and depth of coverage, CLEO connects all of the critical vertical markets in lasers and electro-optics. CLEO: 2014 takes place June 8-13 at the San Jose Convention Center.

Canadian team wants to take the cheap microsatellite route to uncrackable global communications
by Kim Krieger  /  6 May 2013

Missions scientist Ian D’Souza of the Canadian satellite equipment firm Com Dev wants to cover the planet with a swarm of microsatellites that will jump-start the quantum communications revolution. He just has to build one and get a launch date first. Satellites capable of performing quantum cryptography, a form of communication that is theoretically unhackable, don’t even exist outside the lab yet, but researchers at the Institute for Quantum Computing (IQC), in Waterloo, Ont., Canada, are engineering the technology as you read this, and they say they could have a working prototype this year. Com Dev, in Cambridge, Ont., would package the system as an inexpensive microsatellite and send it into orbit as a secondary payload on someone else’s rocket. If it works, Com Dev could refine the design and soon have more ready to go up with the next available launch. If the microsatellite fails, the company wouldn’t lose a boatload of money and years of time on an expensive piece of space junk.

Teams in Europe and Asia are working on the same problem, though not necessarily with the same cheap-and-dirty game plan. Com Dev’s strategy is alluring but also risky. Known as the microspace approach, it would use commercially available, off-the-shelf technologies that have not been designed for space. Of course, that doesn’t mean Com Dev won’t rigorously test its quantum satellite before launching it. The charm is in using off-the-shelf parts, which makes it easier to get cutting-edge technology. Technologies designed specifically for space tend to be of an older generation. “It makes more sense to launch a low-cost satellite to prove the concept than to launch a very expensive satellite whose hardware works flawlessly—only to find out that the atmosphere does not allow quantum key distribution to work,” D’Souza says.

Quantum cryptography functions well in fiber, but a nascent quantum network in Vienna has shown that the photons that carry the encryption keys fade out after 200 kilometers or so. The signal should travel much farther in empty space, and a network of only six to eight satellites could cover the planet. But the air near the ground is turbulent, and researchers have so far been able to do delicate quantum communications tricks there only over a distance of about 140 km. “The problem with quantum experiments is that…you need to carefully test everything in order to be successful. Otherwise you see nothing,” says Fabio Sciarrino, a quantum optics physicist at Sapienza Università di Roma.
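The fade-out in fiber comes down to simple attenuation arithmetic: single photons cannot be amplified without destroying their quantum state, so every decibel of loss directly cuts the key rate. A sketch assuming a typical telecom-fiber attenuation of 0.2 dB/km (a standard figure, not one quoted in the article):

```python
# Why fiber QKD runs out of steam with distance: single photons
# cannot be amplified, so attenuation translates directly into
# lost key bits. Assumes a typical telecom-fiber loss of
# 0.2 dB/km (a standard figure, not from the article).
alpha_db_per_km = 0.2
for distance_km in (50, 100, 200):
    loss_db = alpha_db_per_km * distance_km
    survival = 10 ** (-loss_db / 10)   # fraction of photons arriving
    print(f"{distance_km:3d} km: {loss_db:4.0f} dB loss, "
          f"{survival:.0e} of photons survive")
```

At 200 km only about one photon in ten thousand arrives, which is why a satellite link through mostly empty space looks so attractive.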

Sciarrino is not exaggerating the difficulty. Quantum cryptography satellites must be able to detect a single photon beamed from Earth against a background flooded with photons. The photon must be aimed precisely, then travel through the turbulent lower levels of Earth’s atmosphere. Not only must the satellite detect the photon, it must also measure a quantum property of the photon: its polarization. The polarization will reveal whether the photon represents a zero or a one. And the satellite must do this over and over again for a stream of individual polarized photons. The stream represents a key for encrypting a message. The satellite then transmits this key, photon by carefully aimed and polarized photon, to the recipient of the message. The message itself is sent via conventional wire and fiber. As long as it is encrypted with the satellite-distributed quantum key and the key is secure, the message is secure. And the sender and recipient know the key is secure, because if the photons had been intercepted or observed by a third party, they would be garbled owing to one of the basic tenets of quantum mechanics: If you measure it, you change it.
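The "if you measure it, you change it" guarantee can be illustrated with a toy intercept-resend simulation. The article does not name the exact protocol the satellite would use; this sketch is a BB84-style two-basis scheme, with all names and parameters invented for illustration:

```python
import random

def intercept_demo(n=2000, eve=True, seed=1):
    """Toy BB84-style intercept-resend simulation. Alice encodes
    each bit as a polarized photon in one of two randomly chosen
    bases; Bob measures in his own random basis, and they keep only
    the rounds where their bases match. An eavesdropper who measures
    and resends each photon guesses the wrong basis half the time,
    which corrupts about a quarter of the sifted key."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        bit = rng.randint(0, 1)
        a_basis = rng.randint(0, 1)
        sent_bit, sent_basis = bit, a_basis
        if eve:
            e_basis = rng.randint(0, 1)
            if e_basis != a_basis:       # wrong basis: Eve sees a random bit
                sent_bit = rng.randint(0, 1)
            sent_basis = e_basis         # Eve resends what she measured
        b_basis = rng.randint(0, 1)
        if b_basis == a_basis:           # kept after public basis sifting
            kept += 1
            if b_basis == sent_basis:
                measured = sent_bit
            else:
                measured = rng.randint(0, 1)
            errors += measured != bit
    return errors / kept

print(f"error rate with Eve:    {intercept_demo(eve=True):.2f}")   # ~0.25
print(f"error rate without Eve: {intercept_demo(eve=False):.2f}")  # 0.00
```

Comparing a random sample of their bits, sender and recipient see roughly a 25 percent error rate whenever an interceptor is present, and essentially none otherwise.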

Measuring the photon’s polarization is one of the trickiest technical feats a quantum key distribution satellite must perform. The sender and satellite might both know that a polarization of zero degrees means zero, and a polarization of 90 degrees means one. But the satellite is moving and spinning with respect to Earth, so how can it know the correct frame of reference with which to measure the polarization? Sciarrino’s group at Sapienza has proposed a solution. They have been testing a way to control not only the photon’s polarization but also its orbital angular momentum. As a photon with orbital angular momentum travels, the electromagnetic field will appear helical with an empty donut hole in the middle. Such a shape looks the same, no matter what the observer’s frame of reference.
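The cost of getting the reference frame wrong can be quantified with Malus's law: if the satellite's analyzer is rotated by an angle θ relative to the sender's frame, a photon encoding bit zero is misread as one with probability sin²θ. A small illustration (the angles chosen are arbitrary):

```python
import math

# If the satellite's polarization analyzer is rotated by theta
# relative to the sender's frame, a photon encoding bit zero
# (polarization 0 degrees) is misread as one with probability
# sin^2(theta), per Malus's law applied to single photons.
for deg in (0, 10, 30, 45):
    theta = math.radians(deg)
    flip_prob = math.sin(theta) ** 2
    print(f"{deg:2d} deg misalignment -> {flip_prob:.2f} flip probability")
```

At 45 degrees of misalignment the measurement is a coin flip, which is why a frame-independent encoding such as orbital angular momentum, or an active compensation scheme, is needed.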

The Canadian group is working on a different way to solve the frame-of-reference problem, by using a measurement feedback loop. Using signals from the satellite, the sender would shift the wave plates, the equipment that controls the photons’ polarization, to compensate for the satellite’s changing orientation. China, the European Union, and Japan are also working to put experimental quantum key distribution satellites in orbit. China has said publicly that it plans to get a quantum key distribution satellite up by 2016. It would be an incredible coup for Canada to beat China. The Canadian space agency is enthusiastic about the idea, and has already funded part of the research stage of the project. But no funds are allocated to the satellite part of the project as yet, and the agency hasn’t gotten approval to proceed with the mission.

Who wants to wait for government funding to start the quantum revolution? D’Souza of Com Dev hopes the space agency can be involved. But if that doesn’t work out, Com Dev, IQC, and their partners could still make it happen. Com Dev successfully launched a global ship-tracking business, exactEarth, in 2008, using a single inexpensive nanosatellite. The nanosat was engineered to last just four months, but it’s still going strong. So it’s easy to see why D’Souza is so tempted by the idea of a cheap and fast quantum key distribution satellite. A single, streamlined microsatellite the size of a dishwasher could be fabricated and launched into orbit for tens of millions of dollars, instead of the hundreds of millions for a conventional satellite. If it works, a team of Canadian researchers and companies could fundamentally alter the way banks, governments, and businesses think about secure communications. When pressed, D’Souza admits it will probably take the Canadian team three years to do a “successful demonstration,” going as fast as they dare. So 2016 it is. We’re waiting.


Laser beams illuminate a small carbon rod and launch an asymmetric shock inside a chamber filled with argon gas. A grid is placed in the shock’s path, 1 centimetre from the target, resulting in turbulent flow. The shock and the turbulent flow are captured with the Schlieren imaging technique (blue-black hues) 300 billionths of a second after the laser shot. The electron density predicted by computer simulations (blue-red hues) is superimposed.

Lasers create table-top supernova

Supernova explosions, triggered when the fuel within a star reignites or its core collapses, launch a detonation shock wave that sweeps through a few light years of space from the exploding star in just a few hundred years. But not all such explosions are alike, and some, such as Cassiopeia A, show puzzling irregular shapes made of knots and twists. To investigate what may cause these peculiar shapes, an international team led by Oxford University scientists (the groups of Professor Gregori and Professor Bell in Atomic and Laser Physics, and Professor Schekochihin in Theoretical Physics) has devised a method of studying supernova explosions in the laboratory instead of observing them in space. ‘It may sound surprising that a table-top laboratory experiment that fits inside an average room can be used to study astrophysical objects that are light years across,’ said Professor Gianluca Gregori of Oxford University’s Department of Physics, who led the study published in Nature Physics. ‘In reality, the laws of physics are the same everywhere, and physical processes can be scaled from one to the other in the same way that waves in a bucket are comparable to waves in the ocean. So our experiments can complement observations of events such as the Cassiopeia A supernova explosion.’

The Cassiopeia A supernova explosion was first spotted about 300 years ago in the Cassiopeia constellation. It lies 11,000 light years away, so its light took 11,000 years to reach us. The optical images of the explosion reveal irregular ‘knotty’ features, and associated with these are intense radio and X-ray emissions. Whilst no one is sure what creates these phenomena, one possibility is that the blast passes through a region of space that is filled with dense clumps or clouds of gas.

To recreate a supernova explosion in the laboratory, the team used the Vulcan laser facility at the UK’s Science and Technology Facilities Council’s Rutherford Appleton Laboratory. ‘Our team began by focusing three laser beams onto a carbon rod target, not much thicker than a strand of hair, in a low-density gas-filled chamber,’ said Ms Jena Meinecke, an Oxford University graduate student, who headed the experimental efforts. The enormous heat generated by the laser, more than a few million degrees Celsius, caused the rod to explode, creating a blast that expanded out through the low-density gas. In the experiments, the dense gas clumps or clouds that surround an exploding star were simulated by introducing a plastic grid to disturb the shock front. ‘The experiment demonstrated that as the blast of the explosion passes through the grid it becomes irregular and turbulent, just like the images from Cassiopeia A,’ said Professor Gregori. ‘We found that the magnetic field is higher with the grid than without it. Since higher magnetic fields imply a more efficient generation of radio and X-ray photons, this result confirms that the idea that supernova explosions expand into uniformly distributed interstellar material isn’t always correct, and it is consistent with both observations and numerical models of a shockwave passing through a “clumpy” medium.’

‘Magnetic fields are ubiquitous in the universe,’ said Don Lamb, the Robert A. Millikan Distinguished Service Professor in Astronomy & Astrophysics at the University of Chicago. ‘We’re pretty sure that the fields didn’t exist at the beginning, at the Big Bang. So there’s this fundamental question: how did magnetic fields arise?’ These results are significant because they help to piece together a story for the creation and development of magnetic fields in our Universe, and provide the first experimental proof that turbulence amplifies magnetic fields in the tenuous interstellar plasma. The advance was made possible by the extraordinarily close cooperation between the teams performing the experiments and the computer simulations. ‘The experimentalists knew all the physical variables at a given point. They knew exactly the temperature, the density, the velocities,’ said Petros Tzeferacos of the University of Chicago, a study co-author. ‘This allows us to benchmark the code against something that we can see.’ Such benchmarking – called validation – shows that the simulations can reproduce the experimental data. The simulations consumed 20 million processing hours on supercomputers at Argonne National Laboratory, in the USA.

A report of the research, entitled ‘Turbulent amplification of magnetic fields in laboratory laser-produced shock waves’, by a team including researchers from the University of Oxford, the University of Chicago, ETH Zurich, Queen’s University Belfast, the Science and Technology Facilities Council, the University of York, the University of Michigan, Ecole Polytechnique, Osaka University, the University of Edinburgh, the University of Strathclyde, and the Lawrence Livermore National Laboratory, is published in Nature Physics.

Funding for this research was provided by the European Research Council, the UK’s Science and Technology Facilities Council, and the US Department of Energy through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.