Optimized Quantum Algorithms: Breakthrough Toward Quantum Advantage

Researchers from the University of Bristol and quantum start-up Phasecraft have advanced quantum computing research, bringing practical hybrid quantum-classical computing one step closer.

The team, led by Bristol researcher and Phasecraft co-founder Dr. Ashley Montanaro, has developed algorithms and analysis that significantly reduce the quantum hardware capability needed to solve problems beyond the reach of classical computers, even supercomputers.

In the paper, published in Physical Review B, the team demonstrates how optimized quantum algorithms can solve instances of the notorious Fermi-Hubbard model on near-term hardware.

The Fermi-Hubbard model is of fundamental importance in condensed-matter physics as a model for strongly correlated materials and a route to understanding high-temperature superconductivity.

Finding the ground state of the Fermi-Hubbard model has been predicted to be one of the first applications of near-term quantum computers and one that offers a pathway to understanding and developing novel materials.
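For a sense of what "finding the ground state" involves, the smallest instance of the model can still be solved exactly on a classical computer. The sketch below is an illustration only, not the paper's method; the hopping amplitude t and on-site repulsion U are arbitrary example values. It builds the two-site, half-filled Fermi-Hubbard Hamiltonian and diagonalizes it with NumPy:

```python
import numpy as np

# Two-site Fermi-Hubbard model at half filling: one spin-up and one
# spin-down electron. Basis states: |up-down,0>, |0,up-down>, |up,down>, |down,up>.
t = 1.0  # hopping amplitude (illustrative value)
U = 4.0  # on-site repulsion (illustrative value)

H = np.array([
    [U,   0.0, -t,  -t ],  # both electrons on site 1 (costs U)
    [0.0, U,   -t,  -t ],  # both electrons on site 2 (costs U)
    [-t,  -t,  0.0, 0.0],  # one electron per site
    [-t,  -t,  0.0, 0.0],  # one electron per site, spins swapped
])

ground_energy = np.linalg.eigvalsh(H).min()
# This tiny instance has the closed-form answer (U - sqrt(U^2 + 16 t^2)) / 2,
# roughly -0.828 for t = 1, U = 4.
```

Exact diagonalization like this scales exponentially with system size, which is precisely why larger instances of the model are a target for quantum computers rather than classical ones.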

Variational Algorithm Performance in a Noisy Simulation
Performance of variational algorithms in a noisy simulation. Credit: A. Montanaro

Dr. Ashley Montanaro, research lead and co-founder of Phasecraft: “Quantum computing has critically important applications in materials science and other domains. Despite the major quantum hardware advances recently, we may still be several years from having the right software and hardware to solve meaningful problems with quantum computing. Our research focuses on algorithms and software optimizations to maximize the quantum hardware’s capacity, and bring quantum computing closer to reality.

“Near-term quantum hardware will have limited device and computation size. Phasecraft applied new theoretical ideas and numerical experiments to put together a very comprehensive study on different strategies for solving the Fermi-Hubbard model, zeroing in on strategies that are most likely to have the best results and impact in the near future.”

Lana Mineh, a PhD student in the School of Mathematics and the Centre for Doctoral Training in Quantum Engineering, who played a key role in the research, said, “The results suggest that optimizing over quantum circuits with a gate depth substantially less than a thousand could be sufficient to solve instances of the Fermi-Hubbard model beyond the capacity of current supercomputers. This new research shows significant promise for producing the ground state of the model on near-term quantum devices, improving on previous research findings by around a factor of 10.”

Physical Review B, published by the American Physical Society, is the top specialist journal in condensed-matter physics. The peer-reviewed research paper was also chosen as the Editors’ Suggestion and to appear in Physics magazine.

Google Sycamore Architecture Qubit Layout
Layout of qubits in Google’s Sycamore architecture. Credit: A. Montanaro

Andrew Childs, Professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland: “The Fermi-Hubbard model is a major challenge in condensed-matter physics, and the Phasecraft team has made impressive steps in showing how quantum computers could solve it. Their work suggests that surprisingly low-depth circuits could provide useful information about this model, making it more accessible to realistic quantum hardware.”

Hartmut Neven, Head of Quantum Artificial Intelligence Lab, Google: “Sooner or later, quantum computing is coming. Developing the algorithms and technology to power the first commercial applications of early quantum computing hardware is the toughest challenge facing the field, which few are willing to take on. We are proud to be partners with Phasecraft, a team that are developing advances in quantum software that could shorten that timeframe by years.”

Phasecraft Co-founder Dr. Toby Cubitt: “At Phasecraft, our team of leading quantum theorists have been researching and applying quantum theory for decades, leading some of the top global academic teams and research in the field. Today, Ashley and his team have demonstrated ways to get closer to achieving new possibilities that exist just beyond today’s technological bounds.”

Phasecraft has closed a record seed round for a quantum company in the UK with £3.7m in funding from private-sector VC investors, led by LocalGlobe with Episode1 along with previous investors. Former Songkick founder Ian Hogarth has also joined as board chair for Phasecraft. Phasecraft previously raised a £750,000 pre-seed round led by UCL Technology Fund with Parkwalk Advisors and London Co-investment Fund and has earned several grants facilitated by InnovateUK. Between equity funding and research grants, Phasecraft has raised more than £5.5m.

Dr. Toby Cubitt: “With new funding and support, we are able to continue our pioneering research and industry collaborations to develop the quantum computing industry and find useful applications faster.”

UK Scientists to Produce Low-Cost, High-Performance Ventilators

UK scientists have been awarded funding to develop a robust, low-cost ventilator to help patients in low and middle-income countries suffering from severe respiratory problems due to Covid-19.

Mechanical ventilation is a small but important part of the management of pandemic virus infections that affect the lungs, including SARS-CoV-1, SARS-CoV-2 (COVID-19), and influenza.

Ventilators are typically expensive to purchase and maintain, and need considerable training to use. Most also rely on the provision of high-flow oxygen and medically pure compressed air, which are not readily available in many countries around the world.

Affordable, reliable and easy to use
A team of researchers, co-ordinated by the Science and Technology Facilities Council’s (STFC) Daresbury Laboratory, aims to produce and test plans for the creation of an affordable, reliable, and easy-to-operate ventilator that does not rely so heavily on compressed gases and mains electricity supply.

It is anticipated that these plans will be used by a wide variety of manufacturing groups across the world, thereby reducing the need for expensive transportation and maintenance.

The Head of the Technology Department at STFC’s Daresbury Laboratory, Ian Lazarus, is the project lead. He said: “I am proud to be leading this team, in which we have brought together experts from medicine, science, engineering and knowledge transfer with a shared goal to make resilient high quality ventilators available in areas of the world which currently don’t have enough of them.

“We look forward to redeploying skills that normally underpin science research into this different and very necessary field, working with medical experts in both UK and Brazil. Together we hope to make a positive impact in the current fight against COVID-19 and afterwards in the treatment of other respiratory conditions in countries where ventilators are not as readily available as they are here in the UK.”

International expertise
As well as leadership from Daresbury Lab, teams across STFC will be working on this project, from the Technology Department, the Hartree Centre, ISIS Neutron and Muon Source and the Business and Innovation Department (BID).

STFC will also be working with international partners from the Federal University of Rio de Janeiro (Brazil) and CERN, as well as the University of Birmingham, the University of Liverpool and the Medical Devices Testing and Evaluation Centre (MDTEC).

The £760,000 funding for the prototypes has been awarded by UK Research and Innovation (UKRI) through the Global Challenges Research Fund (GCRF), which looks to support scientists to develop solutions to mitigate the short and long-term social, economic and health impacts of the pandemic.

Real-world applications
This project, known as HPLV (High Performance Low Cost Ventilator), builds on the original designs for the HEV (High Energy physics Ventilator).

The HEV was developed at CERN by a group of institutes from the LHCb collaboration, with guidance from local hospitals, an international team of medical experts and organizations such as the World Health Organisation. Professor Themis Bowcock of the University of Liverpool instigated the train of events leading to HEV and HPLV. He is part of the LHCb experiment team at CERN and the original HEV team, as well as a member of the ‘CERN against Covid’ committee and the WHO Expert Respiratory Panel.

He said, “It is of enormous importance to us that the technology and software techniques we developed for fundamental physics at the Large Hadron Collider will be used to support the international community in the Covid-19 era. This shows that the expertise of researchers across the whole UKRI portfolio plays a part in tackling urgent and acute challenges faced by humanity.”

A common goal
The HEV prototype design, which was developed using research techniques routinely used at CERN, will now be re-engineered to make it ready for regulatory approval and for manufacture.

HEV team leader Paula Collins, a CERN physicist on the LHCb experiment, said: “HEV was born at CERN during the first lockdown thanks to a dedicated team of physicists and engineers who adapted their expertise to fight the pandemic.

“We are grateful to our international team of medical advisers who helped us to orient the HEV design towards the needs of low and middle-income countries, and to build a ventilator focused on quality and patient comfort.

“We warmly welcome the HPLV initiative, which will build upon the success of HEV and we look forward to working together with the outstanding HPLV team for our common humanitarian goal.”

A team effort
During the HPLV project, partners based in Brazil will identify difficulties encountered when ventilating patients and pass that information to the design team, while regulatory experts in the UK will also provide valuable guidance on this re-engineering activity.

The STFC BID team will work closely with their counterparts in CERN’s Knowledge Transfer group to identify industrial partners who will manufacture and supply the HPLV ventilators.

The project will run for 6 months from October 2020 to April 2021 with a small amount of funding in the following 12 months to support the transfer of the HPLV technology to industrial partners.

See Inside Living Cells in Greater Detail Using New Microscopy Technique

Experts in optical physics have developed a new way to see inside living cells in greater detail using existing microscopy technology and without needing to add stains or fluorescent dyes.

Since individual cells are almost translucent, microscope cameras must detect extremely subtle differences in the light passing through parts of the cell. Those differences are known as the phase of the light. Camera image sensors are limited in the range of light phase differences they can detect, referred to as the dynamic range.

“To see greater detail using the same image sensor, we must expand the dynamic range so that we can detect smaller phase changes of light,” said Associate Professor Takuro Ideguchi from the University of Tokyo Institute for Photon Science and Technology.

The research team developed a technique to take two exposures to measure large and small changes in light phase separately and then seamlessly connect them to create a highly detailed final image. They named their method adaptive dynamic range shift quantitative phase imaging (ADRIFT-QPI) and recently published their results in Light: Science & Applications.

Dynamic Range Expansion by ADRIFT QPI
Images of silica beads taken using conventional quantitative phase imaging (top) and a clearer image produced using the new ADRIFT-QPI microscopy method (bottom) developed by a research team at the University of Tokyo. The photos on the left are images of the optical phase, and the images on the right show the optical phase change due to the mid-infrared (molecular specific) light absorption by the silica beads. In this proof-of-concept demonstration, researchers calculated that they achieved approximately 7 times greater sensitivity with ADRIFT-QPI than with conventional QPI. Credit: Image by Toda et al., CC-BY 4.0

“Our ADRIFT-QPI method needs no special laser, no special microscope or image sensors; we can use live cells, we don’t need any stains or fluorescence, and there is very little chance of phototoxicity,” said Ideguchi.

Phototoxicity refers to killing cells with light, which can become a problem with some other imaging techniques, such as fluorescence imaging.

Quantitative phase imaging sends a pulse of a flat sheet of light towards the cell, then measures the phase shift of the light waves after they pass through the cell. Computer analysis then reconstructs an image of the major structures inside the cell. Ideguchi and his collaborators have previously pioneered other methods to enhance quantitative phase microscopy.

Quantitative phase imaging is a powerful tool for examining individual cells because it allows researchers to make detailed measurements, like tracking the growth rate of a cell based on the shift in light waves. However, the quantitative aspect of the technique has low sensitivity because of the low saturation capacity of the image sensor, so tracking nanosized particles in and around cells is not possible with a conventional approach.

A standard image (top) taken using conventional quantitative phase imaging and a clearer image (bottom) produced using a new ADRIFT-QPI microscopy method developed by a research team at the University of Tokyo. The photos on the left are images of the optical phase and images on the right show the optical phase change due to the mid-infrared (molecular specific) light absorption mainly by protein. Blue arrow points towards the edge of the nucleus, white arrow points towards the nucleoli (a substructure inside the nucleus), and green arrows point towards other large particles. Credit: Image by Toda et al., CC-BY 4.0

The new ADRIFT-QPI method has overcome the dynamic range limitation of quantitative phase imaging. During ADRIFT-QPI, the camera takes two exposures and produces a final image that has seven times greater sensitivity than traditional quantitative phase microscopy images.

The first exposure is produced with conventional quantitative phase imaging – a flat sheet of light is pulsed towards the sample and the phase shifts of the light are measured after it passes through the sample. A computer image analysis program develops an image of the sample based on the first exposure then rapidly designs a sculpted wavefront of light that mirrors that image of the sample. A separate component called a wavefront shaping device then generates this “sculpture of light” with higher intensity light for stronger illumination and pulses it towards the sample for a second exposure.

If the first exposure produced an image that was a perfect representation of the sample, the custom-sculpted light waves of the second exposure would enter the sample at different phases, pass through the sample, then emerge as a flat sheet of light, causing the camera to see nothing but a dark image.

“This is the interesting thing: We kind of erase the sample’s image. We want to see almost nothing. We cancel out the large structures so that we can see the smaller ones in great detail,” Ideguchi explained.

In reality, the first exposure is imperfect, so the sculpted light waves emerge with subtle phase deviations.

The second exposure reveals tiny light phase differences that were “washed out” by larger differences in the first exposure. These remaining tiny light phase differences can be measured with increased sensitivity due to the stronger illumination used in the second exposure.

Additional computer analysis reconstructs a final image of the sample with an expanded dynamic range from the two measurement results. In proof-of-concept demonstrations, the researchers estimated that ADRIFT-QPI produces images with seven times greater sensitivity than conventional quantitative phase imaging.
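The two-exposure idea can be mimicked numerically. The toy model below sketches the arithmetic only, not the actual optics; the noise levels and the sevenfold factor are illustrative, taken from the article’s reported sensitivity gain. Each exposure is treated as a noisy phase measurement, and adding the finely measured residual to the coarse first image recovers the extra sensitivity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512)

# True sample phase: a large smooth background plus a tiny "nanoscale" bump.
true_phase = 2.0 * np.sin(2 * np.pi * x) \
    + 0.002 * np.exp(-((x - 0.5) / 0.01) ** 2)

def measure(phase, noise_rms):
    # Model a camera-limited exposure as the phase plus sensor noise.
    return phase + rng.normal(0.0, noise_rms, phase.shape)

# Exposure 1: conventional QPI; the sensor sets the noise floor.
first = measure(true_phase, noise_rms=0.01)

# Exposure 2: a shaped wavefront cancels the first estimate, so only the
# residual reaches the camera; stronger illumination gives ~7x less noise.
residual = measure(true_phase - first, noise_rms=0.01 / 7)

# Combine the coarse image and the finely measured residual.
adrift = first + residual

err_single = np.std(first - true_phase)
err_adrift = np.std(adrift - true_phase)
```

In this toy, the error of the combined image is set by the quieter second exposure, mirroring the roughly sevenfold sensitivity gain reported for ADRIFT-QPI.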

Ideguchi says that the true benefit of ADRIFT-QPI is its ability to see tiny particles in context of the whole living cell without needing any labels or stains.

“For example, small signals from nanoscale particles like viruses or particles moving around inside and outside a cell could be detected, which allows for simultaneous observation of their behavior and the cell’s state,” said Ideguchi.

Nanoscale Control of Desalination Membranes Could Lead to Cheaper Water Filtration

Producing clean water at a lower cost could be on the horizon after researchers from The University of Texas at Austin and Penn State solved a complex problem that had baffled scientists for decades.

Desalination membranes remove salt and other chemicals from water, a process critical to the health of society, cleaning billions of gallons of water for agriculture, energy production and drinking. The idea seems simple — push salty water through and clean water comes out the other side — but it involves complexities that scientists are still trying to understand.

The research team, in partnership with DuPont Water Solutions, solved an important aspect of this mystery, opening the door to reduce costs of clean water production. The researchers determined desalination membranes are inconsistent in density and mass distribution, which can hold back their performance. Uniform density at the nanoscale is the key to increasing how much clean water these membranes can create.

“Reverse osmosis membranes are widely used for cleaning water, but there’s still a lot we don’t know about them,” said Manish Kumar, an associate professor in the Department of Civil, Architectural and Environmental Engineering at UT Austin, who co-led the research. “We couldn’t really say how water moves through them, so all the improvements over the past 40 years have essentially been done in the dark.”

The findings were published on December 31, 2020, in Science.

The paper documents a 30%-40% increase in the efficiency of the membranes tested, meaning they can clean more water while using significantly less energy. That could lead to increased access to clean water and lower water bills for individual homes and large users alike.

Reverse osmosis membranes work by applying pressure to the salty feed solution on one side; the minerals stay behind while the water passes through. Although more efficient than non-membrane desalination processes, reverse osmosis still takes a large amount of energy, the researchers said, and improving the efficiency of the membranes could reduce that burden.

Desalination Breakthrough
The density of filtration membranes, even at the atomic scale, can greatly affect how much clean water can be produced. Credit: Enrique Gomez/Penn State

“Fresh water management is becoming a crucial challenge throughout the world,” said Enrique Gomez, a professor of chemical engineering at Penn State who co-led the research. “Shortages, droughts — with increasing severe weather patterns, it is expected this problem will become even more significant. It’s critically important to have clean water availability, especially in low-resource areas.”

The National Science Foundation and DuPont, which makes numerous desalination products, funded the research. The seeds were planted when DuPont researchers found that thicker membranes were actually proving to be more permeable. This came as a surprise because the conventional knowledge was that thickness reduces how much water could flow through the membranes.

Kaitlin Brickey
Paper co-author Kaitlin Brickey, a Penn State graduate student in chemical engineering, stands in front of the scanning electron microscope that allowed researchers to examine how dense pockets in membranes could hinder efficient water filtration efforts. Credit: Tyler Henderson/Penn State

The team connected with Dow Water Solutions, which is now a part of DuPont, in 2015 at a “water summit” Kumar organized, and the partners were eager to solve this mystery. The research team, which also includes researchers from Iowa State University, developed 3D reconstructions of the nanoscale membrane structure using state-of-the-art electron microscopes at the Materials Characterization Lab of Penn State. They modeled the path water takes through these membranes to predict how efficiently water could be cleaned based on structure. Greg Foss of the Texas Advanced Computing Center helped visualize these simulations, and most of the calculations were performed on Stampede2, TACC’s supercomputer.

Scientists Believe US Embassy Staff and CIA Officers Were Hit With High-Power Microwaves – Here’s How the Weapons Work

The mystery ailment that has afflicted U.S. embassy staff and CIA officers off and on over the last four years in Cuba, China, Russia and other countries appears to have been caused by high-power microwaves, according to a report released by the National Academies. A committee of 19 experts in medicine and other fields concluded that directed, pulsed radiofrequency energy is the “most plausible mechanism” to explain the illness, dubbed Havana syndrome.

The report doesn’t clear up who targeted the embassies or why they were targeted. But the technology behind the suspected weapons is well understood and dates back to the Cold War arms race between the U.S. and the Soviet Union. High-power microwave weapons are generally designed to disable electronic equipment. But as the Havana syndrome reports show, these pulses of energy can harm people, as well.

As an electrical and computer engineer who designs and builds sources of high-power microwaves, I have spent decades studying the physics of these sources, including work with the U.S. Department of Defense. Directed energy microwave weapons convert energy from a power source – a wall plug in a lab or the engine on a military vehicle – into radiated electromagnetic energy and focus it on a target. The directed high-power microwaves damage equipment, particularly electronics, without killing nearby people.

Two good examples are Boeing’s Counter-electronics High-powered Microwave Advanced Missile Project (CHAMP), which is a high-power microwave source mounted in a missile, and Tactical High-power Operational Responder (THOR), which was recently developed by the Air Force Research Laboratory to knock out swarms of drones.

A news report about the U.S. Air Force’s high-power microwave anti-drone weapon THOR.

Cold War origins
These types of directed energy microwave devices came on the scene in the late 1960s in the U.S. and the Soviet Union. They were enabled by the development of pulsed power in the 1960s. Pulsed power generates short electrical pulses that have very high electrical power, meaning both high voltage – up to a few megavolts – and large electrical currents – tens of kiloamps. That’s more voltage than the highest-voltage long-distance power transmission lines, and about the amount of current in a lightning bolt.

Plasma physicists at the time realized that if you could generate, for example, a 1-megavolt electron beam with 10-kiloamp current, the result would be a beam power of 10 billion watts, or 10 gigawatts. Converting 10% of that beam power into microwaves using standard microwave tube technology that dates back to the 1940s generates 1 gigawatt of microwaves. For comparison, the output power of today’s typical microwave ovens is around a thousand watts – a million times smaller.
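The arithmetic behind that estimate is simple to check; a few lines (values taken straight from the figures above) confirm it:

```python
# Beam power and microwave conversion, using the figures from the text.
voltage = 1e6    # 1 megavolt
current = 10e3   # 10 kiloamps
beam_power = voltage * current          # P = V * I, in watts

conversion = 0.10                       # ~10% beam-to-microwave efficiency
microwave_power = conversion * beam_power

oven_power = 1e3                        # typical microwave oven, ~1 kilowatt
ratio_to_oven = microwave_power / oven_power  # ~a million times an oven
```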

The development of this technology led to a subset of the U.S.-Soviet arms race – a microwave power derby. When the Soviet Union collapsed in 1991, I and other American scientists gained access to Russian pulsed power accelerators, like the SINUS-6 that is still working in my lab. I had a fruitful decade of collaboration with my Russian colleagues, which swiftly ended following Vladimir Putin’s rise to power.

High-Power Microwave Generator
This high-power microwave generator built in the Soviet Union continues to operate in Edl Schamiloglu’s lab at the University of New Mexico. Credit: Edl Schamiloglu, University of New Mexico, CC BY-ND

Today, research in high-power microwaves continues in the U.S. and Russia but has exploded in China. I have visited labs in Russia since 1991 and labs in China since 2006, and the investment being made by China dwarfs activity in the U.S. and Russia. Dozens of countries now have active high-power microwave research programs.

Lots of power, little heat
Although these high-power microwave sources generate very high power levels, they tend to generate repeated short pulses. For example, the SINUS-6 in my lab produces an output pulse on the order of 10 nanoseconds, or billionths of a second. So even when generating 1 gigawatt of output power, a 10-nanosecond pulse has an energy content of only 10 joules. To put this in perspective, the average microwave oven generates about 1 kilojoule, or a thousand joules, of energy in one second. It typically takes about 4 minutes to boil a cup of water, which corresponds to 240 kilojoules of energy.
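The same back-of-the-envelope arithmetic applies to the energy comparison (values from the paragraph above):

```python
# Pulse energy versus everyday reference points.
pulse_power = 1e9        # 1 gigawatt
pulse_duration = 10e-9   # 10 nanoseconds
pulse_energy = pulse_power * pulse_duration   # E = P * t, in joules (~10 J)

oven_energy_per_second = 1e3                   # a ~1 kW oven delivers 1 kJ/s
boil_energy = oven_energy_per_second * 4 * 60  # ~4 minutes to boil a cup
# boil_energy is 240,000 J = 240 kJ, some 24,000 pulses' worth of energy
```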

This is why microwaves generated by these high-power microwave weapons don’t generate noticeable amounts of heat, let alone cause people to explode like baked potatoes in microwave ovens.

High power is important in these weapons because generating very high instantaneous power yields very high instantaneous electric fields, which scale as the square root of the power. It is these high electric fields that can disrupt electronics, which is why the Department of Defense is interested in these devices.

How it affects people
The National Academies report links high-power microwaves to impacts on people through the Frey effect. The human head acts as a receiving antenna for microwaves in the low gigahertz frequency range. Pulses of microwaves in these frequencies can cause people to hear sounds, which is one of the symptoms reported by the affected U.S. personnel. Other symptoms Havana syndrome sufferers have reported include headaches, nausea, hearing loss, lightheadedness and cognitive issues.

The report notes that electronic devices were not disrupted during the attacks, suggesting that the power levels needed for the Frey effect are lower than would be required for an attack on electronics. This would be consistent with a high-power microwave weapon located at some distance from the targets. Power decreases dramatically with distance through the inverse square law, which means one of these devices could produce a power level at the target that would be too low to affect electronics but that could induce the Frey effect.
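Both scalings mentioned here can be sketched with a standard far-field estimate. The snippet below assumes an idealized isotropic radiator in free space, a textbook simplification rather than a model of any actual device: power density follows the inverse square law, and field strength is the square root of power density times the impedance of free space.

```python
import math

Z0 = 377.0  # ohms, impedance of free space

def e_field(power_w: float, r_m: float) -> float:
    """Far-field strength (V/m) of an idealized isotropic source."""
    s = power_w / (4 * math.pi * r_m ** 2)  # W/m^2, inverse square law
    return math.sqrt(s * Z0)                # E scales as sqrt of power density

# Quadrupling the power only doubles the field...
ratio_power = e_field(4e9, 100.0) / e_field(1e9, 100.0)
# ...while doubling the distance halves it.
ratio_dist = e_field(1e9, 200.0) / e_field(1e9, 100.0)
```

This is why a distant source can sit in a regime strong enough to induce the Frey effect in people yet too weak to disrupt electronics.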

The Russians and the Chinese certainly possess the capabilities of fielding high-power microwave sources like the ones that appear to have been used in Cuba and China. The truth of what actually happened to U.S. personnel in Cuba and China – and why – might remain a mystery, but the technology most likely involved comes from textbook physics, and the military powers of the world continue to develop and deploy it.