Flagship journal Reports on Progress in Physics marks 90th anniversary with two-day celebration

10 September 2024

A new future lies in store for Reports on Progress in Physics as the journal turns 90

When the British physicist Edward Andrade wrote a review paper on the structure of the atom in the first volume of the journal Reports on Progress in Physics (ROPP) in 1934, he faced a problem familiar to anyone seeking to summarize the latest developments in a field. So much exciting research had happened in atomic physics that Andrade was finding it hard to cram everything in. “It is obvious, in view of the appalling number of papers that have appeared,” he wrote, “that only a small fraction can receive reference.”

Review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature

Apologizing that “many elegant pieces of work have been deliberately omitted” due to a lack of space, Andrade pleaded that he had “honestly tried to maintain a just balance between the different schools [of thought]”. Nine decades on, Andrade’s struggles will be familiar to anyone who has ever tried to write a review paper, especially of a fast-moving area of physics. Readers, however, appreciate the efforts authors put in because review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature.

Writing review papers also benefits authors because such articles are usually widely read and cited by other scientists – far more, in fact, than a typical paper containing new research findings. As a result, most review journals have an extraordinarily high “impact factor”: the mean number of citations received in a given year by articles the journal published in the previous two years. ROPP, for example, has an impact factor of 19.0. While there are flaws with using impact factor to judge the quality of a journal, it’s still a well-respected metric in many parts of the world. And who wouldn’t want to appear in a journal with that much influence?
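To make that definition concrete, the calculation runs as follows (the numbers are purely illustrative, not ROPP’s actual citation counts):

\[
\mathrm{IF}_{2024} \;=\; \frac{\text{citations received in 2024 by articles published in 2022 and 2023}}{\text{number of articles published in 2022 and 2023}}
\]

So a journal whose 100 articles from 2022 and 2023 attracted 1900 citations during 2024 would report an impact factor of 19.0.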

New dawn for ROPP

Celebrating its 90th anniversary this year, ROPP is the flagship journal of IOP Publishing, which also publishes Physics World. As a learned-society publisher, IOP Publishing does not have shareholders, with any financial surplus ploughed back into the Institute of Physics (IOP) to support everyone from physics students to physics teachers. In contrast to journals owned by commercial publishers, therefore, ROPP has the international physics community at its heart.

Over the last nine decades, ROPP has published over 2500 review papers. There have been more than 20 articles by Nobel-prize-winning physicists, including famous figures from the past such as Hans Bethe (stellar evolution), Lawrence Bragg (protein crystallography) and Abdus Salam (field theory). More recently, ROPP has published papers by still-active Nobel laureates including Konstantin Novoselov (2D materials), Ferenc Krausz (attosecond physics) and Isamu Akasaki (blue LEDs) – see the box below for a full list.

Subir Sachdev

But the journal isn’t resting on its laurels. ROPP has recently started accepting articles containing new scientific findings for the first time, with the plan being to publish 150–200 very-high-quality primary-research papers each year. They will be in addition to the usual output of 50 or so review papers, most of which will still be commissioned by ROPP’s active editorial board. IOP Publishing hopes the move will cement the journal’s place at the pinnacle of its publishing portfolio.

“ROPP will continue as before,” says Subir Sachdev, a condensed-matter physicist from Harvard University, who has been editor-in-chief of the journal since 2022. “There’s no change to the review format, but what we’re doing is really more of an expansion. We’re adding a new section containing original research articles.” The journal is also offering an open-access option for the first time, thereby increasing the impact of the work. In addition, authors have the option to submit their papers for “double anonymous” and transparent peer review.

Maintaining high standards

Those two new initiatives – publishing primary research and offering an open-access option – are probably the biggest changes in the journal’s 90-year history. But Sachdev is confident the journal can cope. “Of course, we want to maintain our high standards,” he says. “ROPP has over the years acquired a strong reputation for very-high-quality articles. With the strong editorial board and the support we have from referees, we hope we will be able to maintain that.”

Early signs are promising. Among the first primary-research papers in ROPP are CERN’s measurement of the speed of sound in a quark–gluon plasma (87 077801), a study into flaws in the Earth’s gravitational field (87 078301), and an investigation into whether supersymmetry could be seen in 2D materials (10.1088/1361-6633/ad77f0). A further paper looks into creating an overarching equation of state for liquids based on phonon theory (87 098001).

The idea is to publish a relatively small number of papers but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing

David Gevaux

David Gevaux, ROPP’s chief editor, who is in charge of the day-to-day running of the journal, is pleased with the quality and variety of primary research published so far. “The idea is to publish a relatively small number of papers – no more than 200 max – but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing,” he says. “Our first papers have covered a broad range of physics, from condensed matter to astronomy.”

Another benefit of ROPP only publishing a select number of papers is that each article can have, as Gevaux explains, “a little bit more love” put into it. “Traditionally, publishers were all about printing journals and sending them around the world – it was all about distribution,” he says. “But with the Internet, everything’s immediately available and researchers almost have too many papers to trawl through. As a flagship journal, ROPP gives its published authors extra visibility, potentially through a press release or coverage in Physics World.”

Nobel laureates who have published in ROPP

Since its launch in 1934, Reports on Progress in Physics has published papers by numerous top scientists, including more than 20 current or future Nobel-prize-winning physicists. A selection of those papers written or co-authored by Nobel laureates over the journal’s first 90 years is given chronologically below. For brevity, papers by multiple authors list only the contributing Nobel winner.

Nevill Mott 1938 “Recent theories of the liquid state” (5 46) and 1939 “Reactions in solids” (6 186)
Hans Bethe 1939 “The physics of stellar interiors and stellar evolution” (6 1)
Max Born 1942 “Theoretical investigations on the relation between crystal dynamics and x-ray scattering” (9 294)
Martin Ryle 1950 “Radio astronomy” (13 184)
Willis Lamb 1951 “Anomalous fine structure of hydrogen and singly ionized helium” (14 19)
Abdus Salam 1955 “A survey of field theory” (18 423)
Alexei Abrikosov 1959 “The theory of a fermi liquid” (22 329)
David Thouless 1964 “Green functions in low-energy nuclear physics” (27 53)
Lawrence Bragg 1965 “First stages in the x-ray analysis of proteins” (28 1)
Melvin Schwartz 1965 “Neutrino physics” (28 61)
Pierre-Gilles de Gennes 1969 “Some conformation problems for long macromolecules” (32 187)
Dennis Gabor 1969 “Progress in holography” (32 395)
John Clauser 1978 “Bell’s theorem. Experimental tests and implications” (41 1881)
Norman Ramsey 1982 “Electric-dipole moments of elementary particles” (45 95)
Martin Perl 1992 “The tau lepton” (55 653)
Charles Townes 1994 “The nucleus of our galaxy” (57 417)
Pierre Agostini 2004 “The physics of attosecond light pulses” (67 813)
Takaaki Kajita 2006 “Discovery of neutrino oscillations” (69 1607)
Konstantin Novoselov 2011 “New directions in science and technology: two-dimensional crystals” (74 082501)
John Michael Kosterlitz 2016 “Kosterlitz–Thouless physics: a review of key issues” (79 026001)
Anthony Leggett 2016 “Liquid helium-3: a strongly correlated but well understood Fermi liquid” (79 054501)
Ferenc Krausz 2017 “Attosecond physics at the nanoscale” (80 054401)
Isamu Akasaki 2018 “GaN-based vertical-cavity surface-emitting lasers with AlInN/GaN distributed Bragg reflectors” (82 012502)

An event for the community

As another reminder of its place in the physics community, ROPP is hosting a two-day event at the IOP’s headquarters in London and online. Taking place on 9–10 October 2024, the hybrid event will present the latest cutting-edge condensed-matter research, from fundamental work to applications in superconductivity, topological insulators, superfluids, spintronics and beyond. Confirmed speakers at Progress in Physics 2024 include Piers Coleman (Rutgers University), Susannah Speller (University of Oxford), Nandini Trivedi (Ohio State University) and many more.

Artist’s impression of a superconducting cube levitating

“We’re taking the journal out into the community,” says Gevaux. “IOP Publishing is very heavily associated with the IOP and of course the IOP has a large membership of physicists in the UK, Ireland and beyond. With the meeting, the idea is to bring that community and the journal together. This first meeting will focus on condensed-matter physics, with some of the ROPP board members giving plenary talks along with lectures from invited, external scientists and a poster session too.”

Longer term, IOP Publishing plans to put ROPP at the top of a wider series of journals under the “Progress in” brand. The first of those journals is Progress in Energy, which was launched in 2019 and – like ROPP – has now also expanded its remit to include primary-research papers. Other, similar spin-off journals in different topic areas will be launched over the next few years, giving IOP Publishing what it hopes is a series of journals to match the best in the world.

For Sachdev, publishing with ROPP is all about having “the stamp of approval” from the academic community. “So if you think your field has now reached a point where a scholarly assessment of recent advances is called for, then please consider ROPP,” he says. “We have a very strong editorial board to help you produce a high-quality, impactful article, now with the option of open access and publishing really high-quality primary research papers too.”

Quantum growth drives investment in diverse skillsets

10 September 2024

Scientific equipment makers are building a diverse workforce to feed into expanding markets in quantum technologies and low-temperature materials measurement

The meteoric rise of quantum technologies from research curiosity to commercial reality is creating all the right conditions for a future skills shortage, while the ongoing pursuit of novel materials continues to drive demand for specialist scientists and engineers. Within the quantum sector alone, headline figures from McKinsey & Company suggest that less than half of available quantum jobs will be filled by 2025, with global demand being driven by the burgeoning start-up sector as well as enterprise firms that are assembling their own teams to explore the potential of quantum technologies for transforming their businesses.

While such topline numbers focus on the expertise that will be needed to design, build and operate quantum systems, a myriad of other skilled professionals will be needed to enable the quantum sector to grow and thrive. One case in point is the diverse workforce of systems engineers, measurement scientists, service engineers and maintenance technicians who will be tasked with building and installing the highly specialized equipment and instrumentation that is needed to operate and monitor quantum systems.

“Quantum is an incredibly exciting space right now, and we need to prepare for the time when it really takes off and explodes,” says Matt Martin, Managing Director of Oxford Instruments NanoScience, a UK-based company that manufactures high-performance cryogenics systems and superconducting magnets. “But for equipment makers like us the challenge is not just about quantum, since we are also seeing increased demand from both academia and industry for emerging applications in scientific measurement and condensed-matter physics.”

Martin points out that Oxford Instruments already works hard to identify and nurture new talent. Within the UK the company has for many years sponsored doctoral students to foster a deeper understanding of physics in the ultracold regime, and it also offers placements to undergraduates to spark an early interest in the technology space. The firm is also dialled into the country’s apprenticeship scheme, which offers an effective way to train young people in the engineering skills needed to manufacture and maintain complex scientific instruments.

Despite these initiatives, Martin acknowledges that NanoScience faces the same challenges as other organizations when it comes to recruiting high-calibre technical talent. In the past, he says, a skilled scientist would have been involved in all stages of the development process, but now the complexity of the systems and the depth of focus required to drive innovation across multiple areas of science and engineering have led to the need for greater specialization. While collaboration with partners and sister companies can help, the onus remains on each business to develop a core multidisciplinary team.

Building ultracold and measurement expertise

The key challenge for companies like Oxford Instruments NanoScience is finding physicists and engineers who can create the ultracold environments that are needed to study both quantum behaviour and the properties of novel materials. Compounding that issue is the growing trend towards providing the scientific community with more automated solutions, which has made it much easier for researchers to configure and conduct experiments at ultralow temperatures.

Harriet van der Vliet

“In the past PhD students might have spent a significant amount of time building their experiments and the hardware needed for their measurements,” explains Martin. “With today’s push-button solutions they can focus more on the science, but that changes their knowledge because there’s no need for them to understand what’s inside the box. Today’s measurement scientists are increasingly skilled in Python and integration, but perhaps less so in hardware.”

Developing such comprehensive solutions demands a broader range of technical specialists, such as software programmers and systems engineers, that are in short supply across all technology-focused industries. With many other enticing sectors vying for their attention, such as the green economy, energy and life sciences, and the rise of AI-enabled robotics, Martin understands the importance of inspiring young people to devote their energies to the technologies that underpin the quantum ecosystem. “We’ve got to be able to tell our story, to show why this new and emerging market is so exciting,” he says. “We want them to know that they could be part of something that will transform the future.”

To raise that awareness Oxford Instruments has been working to establish a series of applications centres in Japan, the US and the UK. One focus for the centres will be to provide training that helps users to get the most out of the company’s instruments, particularly for those without direct experience of building and configuring an ultracold system. But another key objective is to expose university-level students to research-grade technology, which in turn should help to highlight future career options within the instrumentation sector.

To build on this initiative Oxford Instruments is now actively discussing opportunities to collaborate with other companies on skills development and training in the US. “We all want to provide some hands-on learning for students as they progress through their university education, and we all want to find ways to work with government programmes to stimulate this training,” says Martin. “It’s better for us to work together to deliver something more substantial rather than doing things in a piecemeal way.”

That collaboration is likely to centre around an initiative launched by US firm Quantum Design back in 2015. Under the scheme, now badged Discovery Teaching Labs, the company has donated its commercial system for low-temperature materials analysis, the PPMS VersaLab, to several university departments in the US. As part of the initiative the course professors are also asked to create experimental modules that enable undergraduate students to use this state-of-the-art technology to explore key concepts in condensed-matter physics.

“Our initial goal was to partner with universities to develop a teaching curriculum that uses hands-on learning to inspire students to become more interested in physics,” says Quantum Design’s Barak Green, who has been a passionate advocate for the scheme. “By enabling students to become confident with using these advanced scientific instruments, we have also found that we have equipped them with vital technical skills that can open up new career paths for them.”

One of the most successful partnerships has been with California State University San Marcos (CSUSM), a small college that mainly attracts students from communities with no prior tradition of pursuing a university education. “There is no way that the students at CSUSM would have been able to access this type of equipment in their undergraduate training, but now they have a year-long experimental programme that enhances their scientific learning and makes them much more comfortable with using such an advanced system,” says Green. “Many of these students can’t afford to stay in school to study for a PhD, and this programme has given them the knowledge and experience they need to get a good job.”

California State University San Marcos (CSUSM)

Indeed, Quantum Design has already hired around 20 students from CSUSM and other local programmes. “We didn’t start the initiative with that in mind, but over the years we discovered that we had all these highly skilled people who could come and work for us,” Green continues. “Students who only do theory are often very nervous around these machines, but the CSUSM graduates bring a whole extra layer of experience and know-how. Not everyone needs to have a PhD in quantum physics; we also need people who can go into the workforce and build the systems that the scientists rely on.”

This overwhelming success has given greater impetus to the programme, with Quantum Design now seeking to bring in other partners to extend its reach and impact. Lake Shore Cryotronics, a long-time collaborator that designs and builds low-temperature measurement systems that can be integrated into the VersaLab, was the first company to make the commitment. In 2023 the US-based firm donated one of its M91 FastHall measurement platforms to join the VersaLab already installed at CSUSM, and the two partners are now working together to establish an undergraduate teaching lab at Stony Brook University in New York.

“It’s an opportunity for like-minded scientific companies to give something back to the community, since most of our products are not affordable for undergraduate programmes,” says Lake Shore’s Chuck Cimino, who has now joined the board of advisors for the Discovery Teaching Labs programme. “Putting world-class equipment into the hands of students can influence their decisions to continue in the field, and in the long term will help to build a future workforce of skilled scientists and engineers.”

Conversations with other equipment makers at the 2024 APS March Meeting also generated significant interest, potentially paving the way for Oxford Instruments to join the scheme. “It’s a great model to build on, and we are now working to see how we might be able to commit some of our instruments to those training centres,” says Martin, who points out that the company’s Proteox S platform offers the ideal entry-level system for teaching students how to manage a cold space for experiments with qubits and condensed-matter systems. “We’ve developed a lot of training on the hardware and the physicality of how the systems work, and in that spirit of sharing there’s lots of useful things we could do.”

While those discussions continue, Martin is also looking to a future when quantum-powered processors become a practical reality in compute-intensive settings such as data centres. “At that point there will be huge demand for ultracold systems that are capable of hosting and operating large-scale quantum computers, and we will suddenly need lots of people who can install and service those sorts of systems,” he says. “We are already thinking about ways to set up training centres to develop that future workforce, which will primarily be focused around service engineers and maintenance technicians.”

Martin believes that partnering with government labs could offer a solution, particularly in the US where various initiatives are already in place to teach technical skills to college-level students. “It’s about taking that forward view,” he says. “We have already built a product that can be used for training purposes, and we have started discussions with US government agencies to explore how we could work together to build the workforce that will be needed to support the big industrial players.”

Quantum brainwave: using wearable quantum technology to study cognitive development

10 September 2024

Margot Taylor and David Woolger explain to Physics World why quantum-sensing technology is a game-changer for studying children’s brains

Though she isn’t a physicist or an engineer, Margot Taylor has spent much of her career studying electrical circuits. As the director of functional neuroimaging at the Hospital for Sick Children in Toronto, Canada, Taylor has dedicated her research to the most complex electrochemical device on the planet – the human brain.

Taylor uses various brain imaging techniques including MRI to understand cognitive development in children. One of her current projects uses a novel quantum sensing technology to map electrical brain activity. Magnetoencephalography with optically pumped magnetometry (OPM-MEG) is a wearable technology that uses quantum spins to localize electrical impulses coming from different regions of the brain.

Physics World’s Hamish Johnston caught up with Taylor to discover why OPM-MEG could be a breakthrough for studying children, and how she’s using it to understand the differences between autistic and non-autistic people.

The OPM-MEG helmets Taylor uses in this research were developed by Cerca Magnetics, a company founded in 2020 as a spin-out from the University of Nottingham’s Sir Peter Mansfield Imaging Centre in the UK. Johnston also spoke to Cerca’s chief executive David Woolger, who explained how the technology works and what other applications they are developing.

Margot Taylor: understanding the brain

What is magnetoencephalography, and how is it used in medicine?

Magnetoencephalography (MEG) is the most sensitive non-invasive means we have of assessing brain function. Specifically, the technique gives us information about electrical activity in the brain. It doesn’t give us any information about the structure of the brain, but the disorders that I’m interested in are disorders of brain function, rather than disorders of brain structure. There are some other techniques, but MEG gives us amazing temporal and spatial resolution, which makes it very valuable.

So you’re measuring electrical signals. Does that mean that the brain is essentially an electrical device?

Indeed, it is a hugely complex electrical device. Technically it’s electrochemical, but we are measuring the electrical signals that are the product of the electrochemical reactions in the brain.

When you perform MEG, how do you know where that signal’s coming from?

We usually get a structural MRI as well, and then we have very good source localization approaches so that we can tell exactly where in the brain different signals are coming from. We can also get information about how the signals are connecting with each other, the interactions among different brain regions, and the timing of those interactions.

Three complex-looking helmets on shelves next to a fun child-friendly mural

Why does quantum MEG make it easier to do brain scans on children?

The quantum technology is called optically pumped magnetometry (OPM) and it’s a wearable system, where the sensors are placed in a helmet. This means movement is allowed, because the helmet moves with the child. We’re able to record brain signals in very young children because they can move or sit on their parents’ laps – they don’t have to be lying perfectly still.

Conventional MEG uses cryogenic technology and is typically one-size-fits-all. It’s designed for an adult male head and if you put in a small child, their head is a long way from the sensors. With OPM, however, the helmet can be adapted for different-sized heads. We have everything from tiny helmets up to bigger helmets. This is a game changer in terms of recording signals in little children.

Can you tell us more about the study you’re leading at the Hospital for Sick Children in Toronto using a quantum MEG system from the UK’s Cerca Magnetics?

We are looking at early brain function in autistic and non-autistic children. Autism is usually diagnosed by about three years of age, although sometimes it is not diagnosed until children are older. But if a child could be diagnosed with autism earlier, then interventions could start earlier. And so we’re looking at autistic and non-autistic children as well as children who have a high likelihood of being autistic to see if we can get brain signals that will predict whether they will go on to get a diagnosis or not.

How do the responses you measure using quantum MEG differ between autistic and non-autistic people, or those with a high likelihood of developing autism?

We don’t have that data yet because we’re looking at the children who have a high likelihood of being autistic, so we have to wait until they grow up and for another year or so to see if they get a diagnosis. For the children who do have a diagnosis of autism already, it seems like the responses are atypical, but we haven’t fully analysed that data. We think that there is a signal there that we’ll be able to report in the foreseeable future, but we have only tested 32 autistic children so far, and we’d like to get more data before we publish.

A woman sits with her back to the camera wearing a helmet covered with electronics. Two more women stand either side

Do you have any preliminary results or published papers based on this data yet?

We’re still analysing data. We’re seeing beautiful, age-related changes in our cohort of non-autistic children. Because nobody has been able to do these studies before, we have to establish the foundational datasets with non-autistic children before we can compare them with data from autistic children or children who have a high likelihood of being autistic. And those will be published very shortly.

Are you using the quantum MEG system for anything else at the moment?

With the OPM system, we’re also setting up studies looking at children with epilepsy. We want to compare the OPM technology with the cryogenic MEG and other imaging technologies and we’re working with our colleagues to do that. We’re also looking at children who have a known genetic disorder to see if they have brain signals that predict whether they will also go on to experience a neurodevelopmental disorder. We’re also looking at children who are born to mothers with HIV to see if we can get an indication of what is happening in their brains that may affect their later development.

David Woolger: expanding applications

Can you give us a brief description of Cerca Magnetics’ technology and how it works?

When a neuron fires, you get an electrical current and a corresponding magnetic field. Our technology uses optically pumped magnetometers (OPMs), which are very sensitive to magnetic fields. Effectively, we’re sensing magnetic fields 500 million times lower than the Earth’s magnetic field.

To enable us to do that, as well as the quantum sensors, we need to shield against the Earth’s magnetic field, so we do this in a shielded environment with both active and passive shielding. We are then able to measure the magnetic fields from the brain, which we can use to understand functionally what’s going on in that area.
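To put that sensitivity figure in perspective, a quick back-of-the-envelope calculation (assuming a typical Earth-surface field of about 50 microtesla; the exact value varies with location) gives the scale of the signals involved:

```python
# Rough scale of the fields an OPM-MEG system must resolve.
# Assumes an Earth-surface field of ~50 microtesla (illustrative; varies with location).
earth_field = 50e-6                       # tesla
brain_field = earth_field / 500e6         # "500 million times lower"
print(f"{brain_field * 1e15:.0f} femtotesla")   # ~100 fT, the scale of neuromagnetic signals
```

That works out at roughly 100 femtotesla, which is why the active and passive shielding Woolger describes is essential.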

Are there any other applications for this technology beyond your work with Margot Taylor?

There’s a vast number of applications within the field of brain health. For example, we’re working with a team in Oxford at the moment, looking at dementia. So that’s at the other end of the life cycle, studying ways to identify the disease much earlier. If you can do that you can potentially start treatment with drugs or other interventions earlier.

Outside brain health, there are a number of groups who are using this quantum technology in other areas of medical science. One group in Arkansas is looking at foetal imaging during pregnancy, using it to see much more clearly than has previously been possible.

There’s another group in London looking at spinal imaging using OPM. Concussion is another potential application of these sensors, for sports or military injuries. There’s a vast range of medical-imaging applications that can be done with these sensors.

Have you looked at non-medical applications?

Cerca is very much a medical-imaging company, but I am aware of other applications of the technology. For example, applications with car batteries have potential to be a big market. When they make car batteries, there’s a lot of electrochemistry that goes into the cells. If you can image those processes during production, you can effectively optimize that production cycle, and therefore reduce the costs of the batteries. This has a real potential benefit for use in electric cars.

What’s next for Cerca Magnetics’ technology?

We are in a good position in that we’ve been able to deliver our initial systems to the research market and actually earn revenue. We have made a profit every year since we started trading. We have then reinvested that profit back into further development. For example, we are looking at scanning two people at once, looking at other techniques that will continue to develop the product, and most importantly, working on medical device approval. At the moment, our system is only sold to research institutes, but we believe that if the product were available in every hospital and every doctor’s surgery, it could have an incredible societal impact across the human lifespan.

Magnetoencephalography with optically pumped magnetometers

Schematic showing the working principle behind optically pumped magnetometry

Like any electrical current, signals transmitted by neurons in the brain generate magnetic fields. Magnetoencephalography (MEG) is an imaging technique that detects these signals and locates them in the brain. MEG has been used to plan brain surgery to treat epilepsy. It is also being developed as a diagnostic tool for disorders including schizophrenia and Alzheimer’s disease.

MEG traditionally uses superconducting quantum interference devices (SQUIDs), which are sensitive to very small magnetic fields. However, SQUIDs must be cryogenically cooled, which makes the technology bulky and immobile. Magnetoencephalography with optically pumped magnetometers (OPM-MEG) is an alternative technology that operates at room temperature. Optically pumped magnetometers (OPMs) are small quantum devices that can be integrated into a helmet, which is an advantage for imaging children’s brains.

The key components of an OPM device are a cloud of alkali atoms (generally rubidium), a laser and a photodetector. Initially, the spins of the atoms point in random directions (top row in figure), but applying a polarized laser of the correct frequency aligns the spins along the direction of the light (middle row in figure). When the atoms are in this state, they are transparent to the laser so the signal reaching the photodetector is at a maximum.

However, in the presence of a magnetic field, such as that from a brain wave, the spins of the atoms are perturbed and they are no longer aligned with the laser (bottom row in figure). The atoms can now absorb some of the laser light, which reduces the signal reaching the photodetector.

In OPM-MEG, these devices are placed around the patient’s head and integrated into a helmet. By measuring the signal from the devices and combining this with structural images and computer modelling, it’s possible to work out where in the brain the signal came from. This can be used to understand how electrical activity in different brain regions is linked to development, brain disorders and neurodivergence.
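The relationship between applied field and detector signal described above can be caricatured in a few lines of code: transmission is greatest at zero field and falls off as a Lorentzian once the spins are tipped away from the laser. This is a purely illustrative sketch with made-up numbers, not the behaviour of any particular commercial sensor:

```python
# Toy model of a zero-field optically pumped magnetometer (OPM).
# At B = 0 the spins stay aligned with the pump laser and the vapour is transparent;
# a field tips the spins, absorption rises and the photodetector signal drops.
# All numbers are illustrative.

def photodetector_signal(B, contrast=0.5, linewidth=1e-9):
    """Relative photodetector signal versus applied field B (in tesla).
    linewidth is the resonance half-width, set by spin relaxation."""
    lorentzian = 1.0 / (1.0 + (B / linewidth) ** 2)
    return 1.0 - contrast + contrast * lorentzian

for B in [0.0, 0.5e-9, 1e-9, 5e-9, 50e-9]:          # fields from 0 to 50 nanotesla
    print(f"B = {B * 1e9:5.1f} nT  ->  signal = {photodetector_signal(B):.3f}")
```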

Katherine Skipper

Electro-active material ‘learns’ to play Pong

10 September 2024

Memory-like behaviour emerges in a polymer gel

An electro-active polymer hydrogel can be made to “memorize” experiences in the same way as biological neurons do, say researchers at the University of Reading, UK. The team demonstrated this finding by showing that when the hydrogel is configured to play the classic video game Pong, it improves its performance over time. While it would be simplistic to say that the hydrogel truly learns like humans and other sentient beings, the researchers say their study has implications for studies of artificial neural networks. It also raises questions about how “simple” such a system can actually be, if it is capable of such complex behaviour.

Artificial neural networks are machine-learning algorithms that are configured to mimic structures found in biological neural networks (BNNs) such as human brains. While these forms of artificial intelligence (AI) can solve problems through trial and error without being explicitly programmed with pre-defined rules, they are not generally regarded as being adaptive, as BNNs are.

Playing Pong with neurons

In a previous study, researchers led by neuroscientist Karl Friston of University College London, UK and Brett Kagan of Cortical Labs in Melbourne, Australia, integrated a BNN with computing hardware by growing a large cluster of human neurons on a silicon chip. They then connected this chip to a computer programmed to play a version of Pong, a table-tennis-like game that originally involved a player and the computer bouncing an electronic ball between two computerized paddles. In this case, however, the researchers simplified the game so that there was only a single paddle on one side of the virtual table.

To find out whether this paddle had contacted the ball, Friston, Kagan and colleagues transmitted electrical signals to the neuronal network via the chip. At first, the neurons did not play Pong very well, but over time, they hit the ball more frequently and made more consecutive hits, allowing for longer rallies.

In this earlier work, the researchers described the neurons as being able to “learn” the game thanks to the concept of free energy as defined by Friston in 2010. He argued that neurons endeavour to minimize free energy, and therefore “choose” the option that allows them to do this most efficiently.

An even simpler version

Inspired by this feat and by the technique employed, the Reading researchers wondered whether such an emergent memory function could be generated in media that were even simpler than neurons. For their experiments, they chose to study a hydrogel (a complex polymer that jellifies when hydrated) that contains free-floating ions. These ions make the polymer electroactive, meaning that its behaviour is influenced by an applied electric field. As the ions move, they draw water with them, causing the gel to swell in the area where the electric field is applied.

The time it takes for the hydrogel to swell is much greater than the time it takes to de-swell, explains team member Vincent Strong. “This means there is a form of hysteresis in the ion motion because each consecutive stimulation moves the ions less and less as they gather,” says Strong, a robotics engineer at Reading and the first author of a paper in Cell Reports Physical Science on the new research. “This acts as a form of memory since the result of each stimulation on the ion’s motion is directly influenced by previous stimulations and ion motion.”

This form of memory allows the hydrogel to build up experience about how the ball moves in Pong, and thus to move its paddle with greater accuracy, he tells Physics World. “The ions within the gel move in a way that maps a memory of the ball’s motion not just at any given point in time but over the course of the entire game.”
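That history dependence is easy to caricature in code. The toy model below is our illustration rather than the Reading team’s analysis: an internal state that saturates under repeated stimulation and relaxes only slowly, so the response to each new stimulus depends on what came before:

```python
# Toy "memory" model: repeated stimulation moves the state less and less as it
# approaches saturation, and relaxation back towards zero is slow -- a crude
# stand-in for the hysteretic ion migration described above. Illustrative only.

def step(state, stimulus, gain=0.5, saturation=1.0, relax=0.02):
    drive = gain * stimulus * (saturation - state)   # diminishing response as the state builds up
    decay = relax * state                            # slow de-swelling / relaxation
    return state + drive - decay

state = 0.0
for n, stim in enumerate([1, 1, 1, 0, 0, 1], start=1):
    state = step(state, stim)
    print(f"step {n}: stimulus={stim}  state={state:.3f}")
```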

The researchers argue that their hydrogel represents a different type of “intelligence”, and one that could be used to develop algorithms that are simpler than existing AI algorithms, most of which are derived from neural networks.

“We see this work as an example of how a much simpler system, in the form of an electro-active polymer hydrogel, can perform similar complex tasks to biological neural networks,” Strong says. “We hope to apply this as a stepping stone to finding the minimum system required for such tasks that require memory and improvement over time, looking both into other active materials and tasks that could provide further insight.

“We’ve shown that memory is emergent within the hydrogels, but the next step is to see whether we can also show specifically that learning is occurring.”

Fusion’s public-relations drive is obscuring the challenges that lie ahead

9 September 2024

Guy Matthews says that the focus on public relations is masking the challenges of commercializing nuclear fusion

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” So stated the Nobel laureate Richard Feynman during a commission hearing into NASA’s Challenger space shuttle disaster in 1986, which killed all seven astronauts onboard.

Those famous words have since been applied to many technologies, but they are becoming especially apt to nuclear fusion where public relations currently appears to have the upper hand. Fusion has recently been successful in attracting public and private investment and, with help from the private sector, it is claimed that fusion power can be delivered in time to tackle climate change in the coming decades.

Yet this rosy picture hides the complexity of the novel nuclear technology and plasma physics involved. As John Evans – a physicist who has worked at the Atomic Energy Research Establishment in Harwell, UK – recently highlighted in Physics World, there is a lack of proven solutions for the fusion fuel cycle, which involves breeding and reprocessing unprecedented quantities of radioactive tritium with extremely low emissions.

Unfortunately, this is just the tip of the iceberg. Another stubborn roadblock lies in instabilities in the plasma itself – for example, so-called Edge Localised Modes (ELMs), which originate in the outer regions of tokamak plasmas and are akin to solar flares. If not strongly suppressed they could vaporize areas of the tokamak wall, causing fusion reactions to fizzle out. ELMs can also trigger larger plasma instabilities, known as disruptions, that can rapidly dump the entire plasma energy and apply huge electromagnetic forces that could be catastrophic for the walls of a fusion power plant.

In a fusion power plant, the total thermal energy stored in the plasma needs to be about 50 times greater than that achieved in the world’s largest machine, the Joint European Torus (JET). JET operated at the Culham Centre for Fusion Energy in Oxfordshire, UK, until it was shut down in late 2023. I was responsible for upgrading JET’s wall to tungsten/beryllium and subsequently chaired the wall protection expert group.

JET was an extremely impressive device, and just before it ceased operation it set a new world record for controlled fusion energy production of 69 MJ. While this was a scientific and technical tour de force, in absolute terms the fusion energy created and plasma duration achieved at JET were minuscule. A power plant with a sustained fusion power of 1 GW would produce 86 million MJ of fusion energy every day. Furthermore, large ELMs and disruptions were a routine feature of JET’s operation and occasionally caused local melting. Such behaviour would render a power plant inoperable, yet these instabilities remain to be reliably tamed.
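The daily-output figure above follows directly from the definition of power:

\[
E_{\text{day}} = P \times t = 10^{9}\,\mathrm{W} \times 86\,400\,\mathrm{s} \approx 8.6 \times 10^{13}\,\mathrm{J} \approx 86\ \text{million MJ},
\]

more than a million times the energy released in JET’s record shot.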

Complex issues

Fusion is complex – solutions to one problem often exacerbate other problems. Furthermore, many of the physics and technology features that are essential for fusion power plants and require substantial development and testing in a fusion environment were not present in JET. One example is the technology to drive the plasma current sustainably using microwaves. The purpose of the international ITER project, which is currently being built in Cadarache, France, is to address such issues.

ITER, which is modelled on JET, is a “low duty cycle” physics and engineering experiment. Delays and cost increases are the norm for large nuclear projects and ITER is no exception. It is now expected to start scientific operation in 2034, but the first experiments using “burning” fusion fuel – a mixture of deuterium and tritium (D–T) – are only set to begin in 2039. ITER, which is equipped with many plasma diagnostics that would not be feasible in a power plant, will carry out an extensive research programme that includes testing tritium-breeding technologies on a small scale, ELM suppression using resonant magnetic perturbation coils and plasma-disruption mitigation systems.

The challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed

Yet the challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed following any successful first demonstration of substantial fusion-energy production. Indeed, EUROfusion’s Research Roadmap, which the UK co-authored when it was still part of ITER, sees fusion as only making a significant contribution to global energy production in the course of the 22nd century. This may be politically unpalatable, but it is a realistic conclusion.

The current UK strategy is to construct a fusion power plant – the Spherical Tokamak for Energy Production (STEP) – at West Burton, Nottinghamshire, by 2040 without awaiting results from intermediate experiments such as ITER. This strategy would appear to be a consequence of post-Brexit politics. However, it looks unrealistic scientifically, technically and economically. The total thermal energy of the STEP plasma needs to be about 5000 times greater than has so far been achieved in the UK’s MAST-U spherical tokamak experiment. This will entail an extreme, and unprecedented, extrapolation in physics and technology. Furthermore, the compact STEP geometry means that during plasma disruptions its walls would be exposed to far higher energy loads than ITER, where the wall protection systems are already approaching physical limits.

I expect that the complexity inherent in fusion will continue to provide its advocates, both in the public and private sphere, with ample means to obscure both the severity of the many issues that lie ahead and the timescales required. Returning to Feynman’s remarks, sooner or later reality will catch up with the public relations narrative that currently surrounds fusion. Nature cannot be fooled.

To make Mars warmer, just add nanorods

9 September 2024

Releasing engineered nanoparticles into the Martian atmosphere could warm the planet by over 30 K

If humans released enough engineered nanoparticles into the atmosphere of Mars, the planet could become more than 30 K warmer – enough to support some forms of microbial life. This finding is based on theoretical calculations by researchers in the US, and it suggests that “terraforming” Mars to support temperatures that allow for liquid water may not be as difficult as previously thought.

“Our finding represents a significant leap forward in our ability to modify the Martian environment,” says team member Edwin Kite, a planetary scientist at the University of Chicago.

Today, Mars is far too cold for life as we know it to thrive there. But it may not have always been this way. Indeed, streams may have flowed on the red planet as recently as 600 000 years ago. The idea of returning Mars to this former, warmer state – terraforming – has long kindled imaginations, and scientists have proposed several ways of doing it.

One possibility would be to increase the levels of artificial greenhouse gases, such as chlorofluorocarbons, in Mars’ currently thin atmosphere. However, this would require volatilizing roughly 100 000 megatons of fluorine, an element that is scarce on the red planet’s surface. This means that essentially all the fluorine required would need to be transported to Mars from somewhere else – something that is not really feasible.

An alternative would be to use materials already present on Mars’ surface, such as those in aerosolized dust. Natural Martian dust is mainly made of iron-rich minerals distributed in particles roughly 1.5 microns in radius, which are easily lofted to altitudes of 60 km and more. In its current form, this dust actually lowers daytime surface temperatures by attenuating infrared solar radiation. A modified form of dust might, however, experience different interactions. Could this modified dust make the planet warmer?

Nanoparticles designed to trap escaping heat and scatter sunlight

In a proof-of-concept study, Kite and colleagues at the University of Chicago, the University of Central Florida and Northwestern University analysed the atmospheric effects of nanoparticles shaped like short rods about nine microns long, which is about the same size as commercially available glitter. These particles have an aspect ratio of around 60:1, and Kite says they could be made from readily-available Martian materials such as iron or aluminium.

Calculations using the finite-difference time-domain method showed that such nanorods, which are randomly oriented due to Brownian motion, would strongly scatter and absorb upwelling thermal infrared radiation in certain spectral windows. The nanorods would also scatter sunlight down towards the surface, adding to the warming, and would settle out of the atmosphere and onto the Martian surface more than 10 times more slowly than natural dust. This implies that, once airborne, the nanorods would be lofted to high altitudes and remain in the atmosphere for long periods.
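For readers unfamiliar with the technique, the finite-difference time-domain (FDTD) method marches Maxwell’s equations forward in time on a spatial grid. The fragment below is a minimal one-dimensional sketch in normalized units – just a Gaussian pulse propagating through free space – and is a far cry from the full three-dimensional nanorod scattering calculations the team performed:

```python
import numpy as np

# Minimal 1D FDTD sketch in normalized units (Courant number 0.5).
# E and H live on staggered grids and are updated leapfrog fashion;
# a soft Gaussian source launches a pulse. Illustrative only.

nz, nt = 200, 400
ez = np.zeros(nz)        # electric field
hy = np.zeros(nz - 1)    # magnetic field, offset by half a cell

for t in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])                 # update H from the curl of E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])           # update E from the curl of H
    ez[nz // 4] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source

print(f"peak |E| after {nt} steps: {np.abs(ez).max():.3f}")
```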

More efficient than previous Martian warming proposals

These factors give the nanorod idea several advantages over comparable schemes, Kite says. “Our approach is over 5000 times more efficient than previous global warming proposals (on a per-unit-mass-in-the-atmosphere basis) because it uses much less mass of material to achieve significant warming,” he tells Physics World. “Previous schemes required importing large amounts of gases from Earth or mining rare Martian resources, [but] we find that nanoparticles can achieve similar warming with a much smaller total mass.”

However, Kite stresses that the comparison only applies to approaches that aim to warm Mars’ atmosphere on a global scale. Other approaches, including one developed by researchers at Harvard University and NASA’s Jet Propulsion Laboratory (JPL) that uses silica aerogels, would be better suited for warming the atmosphere locally, he says, adding that a recent workshop on Mars terraforming provides additional context.

While the team’s research is theoretical, Kite believes it opens new avenues for exploring planetary climate modification. It could inform future Mars exploration or even long-term plans for making Mars more habitable for microbes and plants. Extensive further research would be required, however, before any practical efforts in this direction could see the light of day. In particular, more work is needed to assess the very long-term sustainability of a warmed Mars. “Atmospheric escape to space would take at least 300 million years to deplete the atmosphere at the present-day rate,” he observes. “And nanoparticle warming, by itself, is not sufficient to make the planet’s surface habitable again either.”

Kite and colleagues are now studying the effects of particles of different shapes and compositions, including very small carbon nanoparticles such as graphene nanodisks. They report their present work in Science Advances.

Taking the leap – how to prepare for your future in the quantum workforce

6 September 2024

Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena to get their advice on careers in the quantum market

It’s official: after endorsement from 57 countries and the support of international physics societies, the United Nations has officially declared that 2025 is the International Year of Quantum Science and Technology (IYQ).

The year has been chosen as it marks the centenary of Werner Heisenberg laying out the foundations of quantum mechanics – a discovery that would earn him the Nobel Prize for Physics in 1932. As well as marking one of the most significant breakthroughs in modern science, the IYQ also reflects the recent quantum renaissance. Applications that use the quantum properties of matter are transforming the way we obtain, process and transmit information, and physics graduates are uniquely positioned to make their mark on the industry.

It’s certainly big business these days. According to estimates from McKinsey, in 2023 global quantum investments were valued at $42bn. Whether you want to build a quantum computer, an unbreakable encryption algorithm or a high-precision microscope, the sector is full of exciting opportunities. With so much going on, however, it can be hard to make the right choices for your career.

To make the quantum landscape easier to navigate as a jobseeker, Physics World has spoken to Abbie Bray, Araceli Venegas-Gomez and Mark Elo – three experts in the quantum sector, from academia and industry. They give us their exclusive perspectives and advice on the future of the quantum marketplace; job interviews; choosing the right PhD programme; and managing risk and reward in this emerging industry.

Quantum going mainstream: Abbie Bray

According to Abbie Bray, lecturer in quantum technologies at University College London (UCL) in the UK, the second quantum revolution has broadened opportunities for graduates. Until recently, there was only one way to work in the quantum sector – by completing a PhD followed by a job in academia. Now, however, more and more graduates are pursuing research in industry, where established companies such as Google, Microsoft and BT – as well as numerous start-ups like Rigetti and Universal Quantum – are racing to commercialize the technology.

Abbie Bray

While a PhD is generally needed for research, Bray is seeing more jobs for bachelor’s and master’s graduates as quantum goes mainstream. “If you’re an undergrad who’s loving quantum but maybe not loving the research or some of the really high technical skills, there’s other ways to still participate within the quantum sphere,” says Bray. With so many career options in industry, government, consulting or teaching, Bray is keen to encourage physics graduates to consider these as well as a more traditional academic route.

She adds that it’s important to have physicists involved in all parts of the industry. “If you’re having people create policies who maybe haven’t quite understood the principles or impact or the effort and time that goes into research collaboration, then you’re lacking that real understanding of the fundamentals. You can’t have that right now because it’s a complex science, but it’s a complex science that is impacting society.”

So whether you’re a PhD student or an undergraduate, there are pathways into the quantum sector, but how can you make yourself stand out from the crowd? Bray has noticed that quantum physics is not taught in the same way across universities, with some students getting more exposure to the practical applications of the field than others. If you find yourself in an environment that isn’t saturated with quantum technology, don’t panic – but do consider getting additional experience outside your course. Bray highlights PennyLane, a Python library for programming quantum computers whose developers also produce learning resources.
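To give a flavour of what that extra hands-on experience might look like, the snippet below is a minimal sketch – an illustration only, not an exercise taken from Bray or from PennyLane’s own materials – of preparing and measuring an entangled two-qubit state on PennyLane’s built-in simulator.

```python
# Minimal illustrative sketch (an assumption, not from the article): a Bell
# state prepared and measured on PennyLane's noiseless "default.qubit" simulator.
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)            # put qubit 0 into an equal superposition
    qml.CNOT(wires=[0, 1])           # entangle qubit 1 with qubit 0
    return qml.probs(wires=[0, 1])   # probabilities of |00>, |01>, |10>, |11>

print(bell_state())  # expected: roughly [0.5, 0.0, 0.0, 0.5]
```

Working through a few circuits like this is the kind of self-directed practice that can demonstrate initiative to a PhD panel or employer, even if your own course never touches a quantum computer.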

Consider your options

Something else to be aware of, particularly for those contemplating a PhD, is that “quantum technologies” is a broad umbrella term, and while there is some crossover between, say, sensing and computing, switching between disciplines can be a challenge. It’s therefore important to consider all your options before committing to a project and Bray thinks that Centres for Doctoral Training (CDTs) are a step in the right direction. UCL has recently launched a quantum computing and quantum communications CDT where students will undergo a six-month training period before writing their project proposal. She thinks this enables them to get the most out of their research, particularly if they haven’t covered some topics in their undergraduate degree. “It’s very important that during a PhD you do the research that you want to do,” Bray says.

When it comes to securing a job, PhD position or postdoc, non-technical skills can be just as valuable as quantum know-how. Bray says it’s important to demonstrate that you’re passionate and deeply knowledgeable about your favourite quantum topic, but graduates also need to be flexible and able to work in an interdisciplinary team. “If you think you’re a theorist, understand that it also does sometimes mean looking at and working with experimental data and computation. And if you’re an experimentalist, you’ve got to understand that you need to have a rigorous understanding of the theory before you can make any judgements on your experimentation.” As Bray summarises: “theorists and experimentalists need to move at the same time”.

The ability to communicate technical concepts effectively is also vital. You might need to pitch to potential investors, apply for grants or even communicate with the HR department so that they shortlist the best candidates. Bray adds that in her experience, physicists are conditioned to communicate their research very directly, which can be detrimental in interviews where panels want to hear narratives about how certain skills were demonstrated. “They want to know how you identified a situation, then you identified the action, then the resolution. I think that’s something that every single student, every single person right now should focus on developing.”

The quantum industry is still finding its feet and earlier this year it was reported that investment has fallen by 50% since a high in 2022. However, Bray argues that “if there has been a de-investment, there’s still plenty of money to go around” and she thinks that even if some quantum technologies don’t pan out, the sector will continue to provide valuable skills for graduates. “No matter what you do in quantum, there are certain skills and experiences that can cross over into other parts of tech, other parts of science, other parts of business.”

In addition, quantum research is advancing everything from software to materials science and Bray thinks this could kick-start completely new fields of research and technology. “In any race, there are horses that will not cross the finish line, but they might run off and cross some other finish line that we didn’t know existed,” she says.

Building the quantum workforce: Araceli Venegas-Gomez

While working in industry as an aerospace engineer, Araceli Venegas-Gomez was looking for a new challenge and decided to pursue her passion for physics, getting her master’s degree in medical physics alongside her other duties. Upon completing that degree in 2016, she decided to take on a second master’s followed by a PhD in quantum optics and simulation at the University of Strathclyde, UK. By the time the COVID-19 pandemic hit in 2020, she had defended her thesis, registered her company, and joined the University of Bristol Quantum Technology Enterprise Centre as an executive fellow.

Araceli Venegas-Gomez

It was during her studies at Strathclyde that Venegas-Gomez decided to use her vast experience across industry and academia, as well as her quantum knowledge. Thanks to a fellowship from the Optica Foundation, she was able to launch QURECA (Quantum Resources and Careers). Today, it’s a global company that helps to train and recruit individuals, while also providing business development advice for both individuals and companies in the quantum sphere. As founder and chief executive of the firm, her aims were to link the different stakeholders in the quantum ecosystem and to raise the quantum awareness of the general public. Crucially, she also wanted to ease the skills bottleneck in the quantum workforce and to bring newcomers into the quantum ecosystem.

As Venegas-Gomez points out, there is a significant scarcity of skilled quantum professionals for the many roles that need filling. This shortage is exacerbated by the competition between academia and industry for the same pool of talent. “Five or ten years ago, it was difficult enough to find graduate students who would like to pursue a career in quantum science, and that was just in academia,” explains Venegas-Gomez. “With the quantum market booming, industry is also looking to hire from the same pool of candidates, so you have more competition, for pretty much the same number of people.”

Slow progress

Venegas-Gomez highlights that the quantum arena is very broad. “You can have a career in research, or work in industry, but there are so many different quantum technologies that are coming onto the market, at different stages of development. You can work on software or hardware or engineering; you can do communications; you can work on developing the business side; or perhaps even in patent law.” While some of these jobs are highly technical and would require a master’s or a PhD in that specific area of quantum tech, there are plenty of roles that would accept graduates with only an MSc in physics or even a more interdisciplinary experience. “If you have a background in physics and business, everyone is looking for you,” she adds.

From what she sees in the quantum recruitment market today, there is no job shortage for physicists – instead there is a dearth of physicists with the right skills for a specific role. Venegas-Gomez explains that graduates with a physics degree in many fields have transferable skills that allow them to work in “absolutely any sector that you could imagine”. But depending on the specific area of academia or industry within the quantum marketplace that you might be interested in, you will likely require some specific competences.

Echoing Bray, Venegas-Gomez acknowledges that the skills and knowledge that physicists pick up can vary significantly between universities – making it challenging for employers to find the right candidates. To avoid picking the wrong course, Venegas-Gomez recommends that potential master’s and PhD students speak to a number of alumni from any given institute to find out more about the course, and see what areas they work in today. This can also be a great networking strategy, especially as some cohorts can have as few as 10–15 students, all keen to work with these companies or university departments in the future.

Despite the interest and investment in the quantum industry, new recruits should note that it is still in its early stages. This slow progress can lead to high expectations that are not met, causing frustration for both employers and potential employees. “Only today, we had an employer approach us (QURECA) saying that they wanted someone with three to four years’ experience in Python, and a bachelor’s or master’s degree – it didn’t have to be quantum or even physics specifically,” reveals Venegas-Gomez. “This means that [to get this particular job] you could have a background in computer science or software engineering. Having an MSc in quantum per se is not going to guarantee that you get a job in quantum technologies, unless that is something very specific that employer is looking for.”

So what specific competencies are employers across the board looking for? If a company isn’t looking for a specific technical qualification, what happens if they get two similar CVs for the same role? Do they look at an applicant’s research output and publications, or are they looking for something different? “What I find is that employers are looking for candidates who can show that, alongside their academic achievements, they have been doing outreach and communication activities,” says Venegas-Gomez. “Maybe you took on a business internship and have a good idea of how the industry works beyond university – this is what will really stand out.”

She adds that so-called soft skills – such as good leadership, teamwork and excellent communication – are highly valued. “This is an industry where highly skilled technical people need to be able to work with people vastly beyond their area of expertise. You need to be able to explain Hamiltonians or error corrections to someone who is not quantum-literate and explain the value of what you are working on.”

Venegas-Gomez is also keen that job-seekers realize that the chances of finding a role at a large firm such as Google, IBM or Microsoft are still slim-to-none for most quantum graduates. “I have seen a lot of people complete their master’s in a quantum field and think that they will immediately find the perfect job. The reality is that they likely need to be patient and get some more experience in the field before they get that dream job.” Her main advice to students is to clearly define their career goals, within the context of the booming and ever-growing quantum market, before pursuing a specific degree. The skills you acquire with a quantum degree are also highly transferable to other fields, meaning there are lots of alternatives out there even if you can’t find the right job in the quantum sphere. For example, experience in data science or software development can complement quantum expertise, making you a versatile and coveted contender in today’s job market.

Approaching “quantum advantage”: Mark Elo

Last year, IBM broke records by building the first quantum chip with more than 1000 qubits. The project represents millions of dollars of investment and the company is competing with the likes of Intel and Google to achieve “quantum advantage”, which refers to a quantum computer that can solve problems that are out of reach for classical machines.

Despite the hype, there is work to be done before the technology becomes widespread – a commercial quantum computer needs millions of qubits, and challenges in error correction and algorithm efficiency must be addressed.

Mark Elo

“We’re trying to move it away from a science experiment to something that’s more an industrial product,” says Mark Elo, chief marketing officer at Tabor Electronics. Tabor has been building electronic signal equipment for over 50 years and recently started applying this technology to quantum computing. The company’s focus is on control systems – classical electronic signals that interact with quantum states. At the 2024 APS March Meeting, Tabor, alongside its partners FormFactor and QuantWare, unveiled the first stage of the Echo-5Q project, a five-qubit quantum computer.

Elo describes the five years he’s worked on quantum computing as a period of significant change. Whereas researchers once relied on “disparate pieces of equipment” to build experiments, he says that the industry has changed such that “there are [now] products designed specifically for quantum computing”.

The ultimate goal of companies like Tabor is a “full-stack” solution where software and hardware are integrated into a single platform. However, the practicalities of commercializing quantum computing require a workforce with the right skills. Two years ago the consultancy McKinsey reported that companies were already struggling to recruit, predicting that by 2025 half of the jobs in quantum computing would not be filled. Like many in the industry, Elo sees skills gaps in the sector that must be addressed to realize the potential of quantum technology.

Elo’s background is in solid-state electronics, and he worked for nearly three decades on radio-frequency engineering for companies including HP and Keithley. Most quantum-computing control systems use radio waves to interface with the qubits, so when he moved to Tabor in 2019, Elo saw his career come “full circle”, combining the knowledge from his degree with his industry experience. “It’s been like a fusion of two technologies,” he says.

It’s at this interface between physics and electronic engineering where Elo sees a skills shortage developing. “You need some level of electrical engineering and radio-frequency knowledge to lay out a quantum chip,” he explains. “The most common qubit is a transmon, and that is all driven by radio waves. Deep knowledge of how radio waves propagate through cables, through connectors, through the sub-assemblies and the amplifiers in the refrigeration unit is very important.” Elo encourages physics students interested in quantum computing to consider adding engineering – specifically radio-frequency electronics – courses to their curricula.

Transferable skills

The Tabor team brings together engineers and physicists, but there are some universal skills it looks for when recruiting. People skills, for example, are a must. “There are some geniuses in the world, but if they can’t communicate it’s no good in an industrial environment,” says Elo.

Elo describes his work as “super exciting” and says “I feel lucky in the career and the technology I’ve been involved in because I got to ride the wave of the cellular revolution all the way up to 5G and now I’m on to the next new technology.” However, because quantum is an emerging field, he thinks that graduates need to be comfortable with some risk before embarking on a career. He explains that companies don’t always make money right now in the quantum sector – “you spend a lot to make a very small amount”. But, as Elo’s own career shows, the right technical skills will always allow you to switch industries if needed.

Like many others, Elo is motivated by the excitement of competing to commercialize this new technology. “It’s still a market that’s full of ideas and people marketing their ideas to raise money,” he says. “The real measure of success is to be able to look at when those ideas become profitable. And that’s when we know we’ve crossed a threshold.”

The post Taking the leap – how to prepare for your future in the quantum workforce appeared first on Physics World.

]]>
Feature Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena, to get their advice on careers in the quantum market https://physicsworld.com/wp-content/uploads/2024/09/2024-09-GRADCAREERS-computing-abstract-1190168517-iStock_blackdovfx.jpg
BepiColombo takes its best images yet of Mercury’s peppered landscape https://physicsworld.com/a/bepicolombo-takes-its-best-images-yet-of-mercurys-peppered-landscape/ Fri, 06 Sep 2024 10:18:45 +0000 https://physicsworld.com/?p=116616 The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
The BepiColombo mission to Mercury – Europe’s first craft to the planet – has successfully completed its fourth gravity-assist flyby, one of a series of manoeuvres that use the planet’s gravity to shape the spacecraft’s trajectory so that it can enter orbit around Mercury in November 2026. As it did so, the craft captured its best images yet of some of Mercury’s largest impact craters.

BepiColombo, which launched in 2018, comprises two science orbiters that will circle Mercury – the European Space Agency’s Mercury Planetary Orbiter (MPO) and the Japan Aerospace Exploration Agency’s Mercury Magnetospheric Orbiter (MMO).

The two spacecraft are travelling to Mercury as part of a coupled system. When they reach the planet, the MMO will study Mercury’s magnetosphere while the MPO will survey the planet’s surface and internal composition.

The aim of the BepiColombo mission is to provide information on the composition, geophysics, atmosphere, magnetosphere and history of Mercury.

The closest approach so far for the mission – about 165 km above the planet’s surface – took place on 4 September. For the first time, the spacecraft had a clear view of Mercury’s south pole.

Mercury by BepiColombo

One image (top), taken by the craft’s M-CAM2 camera, features a large “peak ring basin” inside a crater measuring 210 km across, which is named after the famous Italian composer Antonio Vivaldi. The visible gap in the peak ring is thought to be where more recent lava flows have entered and flooded the crater.

BepiColombo will now conduct a fifth and sixth flyby of the planet on 1 December 2024 and 8 January 2025, respectively, before arriving at Mercury in November 2026. The mission is planned to operate until 2029.

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
Blog The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby https://physicsworld.com/wp-content/uploads/2024/09/Mercury_reveals_its_Four_Seasons-small.jpg
Hybrid quantum–classical computing chips and neutral-atom qubits both show promise https://physicsworld.com/a/hybrid-quantum-classical-computing-chips-and-neutral-atom-qubits-both-show-promise/ Thu, 05 Sep 2024 15:33:03 +0000 https://physicsworld.com/?p=116604 Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast looks at quantum computing from two different perspectives.

Our first guest is Elena Blokhina, who is chief scientific officer at Equal1 – an award-winning company that is developing hybrid quantum–classical computing chips. She explains why Equal1 is using quantum dots as qubits in its silicon-based quantum processor unit.

Next up is Brandon Grinkemeyer, who is a PhD student at Harvard University working in several cutting-edge areas of quantum research. He is a member of Misha Lukin’s research group, which is active in the fields of quantum optics and atomic physics and is at the forefront of developing  quantum processors that use arrays of trapped atoms as qubits.

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
Podcasts Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests https://physicsworld.com/wp-content/uploads/2024/09/Brandon-and-Elena.jpg newsletter
Researchers with a large network of unique collaborators have longer careers, finds study https://physicsworld.com/a/researchers-with-a-large-network-of-unique-collaborators-have-longer-careers-finds-study/ Thu, 05 Sep 2024 15:00:02 +0000 https://physicsworld.com/?p=116555 Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
Are you keen to advance your scientific career? If so, it helps to have a big network of colleagues and a broad range of unique collaborators, according to a new analysis of physicists’ publication data. The study also finds that female scientists tend to work in more tightly connected groups than men, which can hamper their career progression.

The study was carried out by a team led by Mingrong She, a data analyst at Maastricht University in the Netherlands. It examined the article history of more than 23 000 researchers who had published at least three papers in American Physical Society (APS) journals. Each scientist’s last paper had been published before 2015, suggesting their research career had ended (arXiv:2408.02482).

To measure “collaboration behaviour”, the study noted the size of each scientist’s collaborative network, the reoccurrence of collaborations, the “interconnectivity” of the co-authors and the average number of co-authors per publication. Physicists with larger networks and a greater number of unique collaborators were found to have had longer careers and been more likely to become principal investigators, as given by their position in the author list.

On the other hand, publishing repeatedly with the same highly interconnected co-authors is associated with shorter careers and a lower chance of achieving principal investigator status, as is having a larger average number of coauthors.

The team also found that the more physicists publish with the same co-authors, the more interconnected their networks become. Conversely, as network size increases, networks tend to be less dense and repeat collaborations become less frequent.
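To make those metrics concrete, the toy sketch below shows one way such quantities might be computed from co-authorship lists using Python’s networkx library. The paper list and author names are invented for illustration, and this is an assumed reconstruction rather than the study’s actual pipeline.

```python
# Toy sketch (an assumption for illustration, not the study's code): computing
# collaboration metrics for one focal researcher from their co-author lists.
import itertools
from collections import Counter

import networkx as nx

papers = [                      # hypothetical co-author lists, one per paper
    ["focal", "a", "b"],
    ["focal", "a", "c", "d"],
    ["focal", "b"],
]

G = nx.Graph()
for authors in papers:
    for u, v in itertools.combinations(authors, 2):
        G.add_edge(u, v)        # an edge marks at least one shared paper

coauthors = set(G.neighbors("focal"))
network_size = len(coauthors)   # number of unique collaborators
avg_coauthors = sum(len(p) - 1 for p in papers) / len(papers)  # mean co-authors per paper

# "interconnectivity": how densely the co-authors are linked to each other
interconnectivity = nx.density(G.subgraph(coauthors))

# reoccurrence of collaborations: co-authors appearing on more than one paper
counts = Counter(a for p in papers for a in p if a != "focal")
repeat_collaborators = sum(1 for n in counts.values() if n > 1)

print(network_size, avg_coauthors, interconnectivity, repeat_collaborators)
```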

Close-knit collaboration

In terms of gender, the study finds that women have more interconnected networks and a higher average number of co-authors than men. Female physicists are also more likely to publish repeatedly with the same co-authors, with women therefore being less likely than men to become principal investigators. Male scientists also have longer overall careers and stay in science longer after achieving principal investigator status than women, the study finds.

Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities

Mingrong She

“Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities [and] increases the probability of establishing connections with prominent researchers and institutions,” She told Physics World. Diverse collaboration also “mitigates the risk of being confined to a narrow niche and enhances adaptability,” She adds, “both of which are indispensable for long-term career growth”.

Close-knit collaboration networks can be good for fostering professional support, the study authors state, but they reduce opportunities for female researchers to form new professional connections and lower their visibility within the broader scientific community. Similarly, larger numbers of co-authors dilute individual contributions, making it harder for female researchers to stand out.

She says the study “highlights how the structure of collaboration networks can reinforce existing inequalities, potentially limiting opportunities for women to achieve career longevity and progression”. Such issues could be improved with policies that help scientists to engage a wider array of collaborators, rewarding and encouraging small-team publications and diverse collaboration. Policies could include adjustments to performance evaluations and grant applications, and targeted training programmes.

The study also highlights lower mobility as a major obstacle for female scientists, suggesting that better childcare support, hybrid working and financial incentives could help improve the mobility and network size of female scientists.

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
News Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers https://physicsworld.com/wp-content/uploads/2024/09/social-network-505782242-iStock_Ani_Ka.jpg
Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner https://physicsworld.com/a/shrinivas-kulkarni-curiosity-and-new-technologies-inspire-shaw-prize-in-astronomy-winner/ Thu, 05 Sep 2024 10:58:03 +0000 https://physicsworld.com/?p=116565 "No shortage of phenomena to explore," says expert on variable and transient objects

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
What does Shrinivas Kulkarni find fascinating? When I asked him that question I expected an answer related to his long and distinguished career in astronomy. Instead, he talked about how the skin of sharks has a rough texture, which seems to reduce drag – allowing the fish to swim faster. He points out that you might not win a Nobel prize for explaining the hydrodynamics of shark skin, but it is exactly the type of scientific problem that captivates Kulkarni’s inquiring mind.

But don’t think that Kulkarni – who is George Ellery Hale Professor of Astronomy and Planetary Sciences at the California Institute of Technology (Caltech) – is whimsical when it comes to his research interests. He says that he is an opportunist, especially when it comes to technology, which he says makes some research questions more answerable than others. Indeed, the scientific questions he asks are usually guided by his ability to build technology that can provide the answers.

Kulkarni won the 2024 Shaw Prize in Astronomy for his work on variable and transient astronomical objects. He says that the rapid development of new and powerful technologies has meant that the last few decades have been a great time to study such objects. “Thirty years ago, the technology was just not there,” he recalls. “Optical sensors were too expensive and the necessary computing power was not available.”

Transient and variable objects

Kulkarni told me that there are three basic categories of transient and variable objects. One category covers objects that change position in the sky – with examples including planets and asteroids. A second category includes objects that oscillate in terms of their brightness.

“About 10% of stars in the sky do not shine steadily like the Sun,” he explains. “We are lucky that the Sun is an extremely steady star. If its output varied by just 1% it would have a huge impact on Earth – much larger than the current global warming. But many stars do vary at the 1% level for a variety of reasons.” These can be rotating stars with large sunspots or stars eclipsing in binary systems, he explains.

It might surprise you that every second, somewhere in the universe, there is a supernova

The third and most spectacular category involve stars that undergo rapid and violent changes such as stars that explode as supernovae. “It might surprise you that every second, somewhere in the universe, there is a supernova. Some are very faint, so we don’t see all of them, but with the Zwicky Transient Facility (ZTF) we see about 20,000 supernovae per year.” Kulkarni is principal investigator for the ZTF, and his leadership at that facility is mentioned in his Shaw Prize citation.

Kulkarni explains that astronomers are interested in transient and variable objects for many different reasons. Closer to home, scientists monitor the skies for asteroids that may be on collision courses with Earth.

“In 1908 there was a massive blast in Siberia called the Tunguska event,” he says. This is believed to be the result of the air explosion of a rocky meteor that was about 55 m in diameter. Because it happened in a remote part of the world, only three people are known to have been killed. Kulkarni points out that if such a meteor struck a populated area like Southern California, it would be catastrophic. By studying and cataloguing asteroids that could potentially strike Earth, Kulkarni believes that we could someday launch space missions that nudge away objects on collision courses with Earth.

Zwicky Transient Facility

At the other end of the mass and energy range, Kulkarni says that studying spectacular events such as supernovae provides important insights into the origins of many of the elements that make up the Earth and indeed ourselves. He says that over the past 70 years astronomers have made “amazing progress” in understanding how different elements are created in these explosions.

Kulkarni was born in 1956 in Kurundwad, which is in the Indian state of Maharashtra. In 1978, he graduated with an MS degree in physics from the Indian Institute of Technology in New Delhi. His next stop was the University of California, Berkeley, where he completed a PhD in astronomy in 1983. He joined Caltech in 1985 and has been there ever since.

You could say that I live on adrenaline and I want to produce something very fast, making significant progress in a short time

A remarkable aspect of Kulkarni’s career is his ability to switch fields every 5–10 years, something that he puts down to his curious nature. “After I understand something to a reasonable level, I lose interest because the curiosity is gone,” he says. Kulkarni adds that his choice of a new project is guided by his sense of whether rapid progress can be made in the field. “You could say that I live on adrenaline and I want to produce something very fast, making significant progress in a short time”.

He gives the example of his work on gamma-ray bursts, which are some of the most powerful explosions in the universe. He says that this was a very fruitful field when astronomers were discovering about one burst per month. But then the Neil Gehrels Swift Observatory was launched in 2004 and it was able to detect 100 or so gamma-ray bursts per year.

Looking for new projects

At this point, Kulkarni says that studying bursts became a “little industry” and that’s why he left the field. “All the low-hanging fruit had been picked – and when the fruit is higher on the tree, that is when I start looking for new projects”.

It is this restlessness that first got him involved in the planning and operation of two important instruments, the Palomar Transient Factory (PTF) and its successor the Zwicky Transient Facility (ZTF). These are wide-field sky astronomical surveys that look for rapid changes in the brightness or position of astronomical objects. The PTF began observing in 2009 and the ZTF took over in 2018.

Kulkarni says that he is fascinated by the engineering aspects of astronomy and points out that technological advances in sensors, electronics, computing and automation continue to transform how observational astronomy is done. He explains that all of these technological factors came together in the design and operation of the PTF and the ZTF.

His involvement with PTF and ZTF allowed Kulkarni to make many exciting discoveries during his career. However, his favourite variable object is one that he discovered in 1982 while doing a PhD under Donald Backer. Called PSR B1937+21, it is the first millisecond pulsar ever to be observed. It is a neutron star that rotates more than 600 times per second while broadcasting a beam of radio waves much like a lighthouse.

“I was there [at the Arecibo Observatory] all alone… it was very thrilling,” he says. The discovery provided insights into the density of neutron stars and revitalized the study of pulsars, leading to large-scale surveys that target pulsars.

When you find a new class of objects, there’s a certain thrill knowing that you and your students are the only people in the world to have seen something

Another important moment for Kulkarni occurred in 1994, when he and his graduate students were the first to observe a cool brown dwarf. These are objects that weigh in between gas-giant planets (like Jupiter) and small main-sequence stars. “When you find a new class of objects, there’s a certain thrill knowing that you and your students are the only people in the world to have seen something. That was kind of fun.”

Kulkarni is proud of his early achievements, but don’t think that he dwells on the past. “This is a fantastic time to do astronomy. The instruments that we’re building today have an enormous capacity for information delivery.”

First brown dwarf

He mentions images released by the European Space Agency’s Euclid space telescope, which launched last year. He describes them as “gorgeous pictures” but points out that the real wonder is that he could zoom in on the images by a factor of 10 before the pixels became apparent. “It was just so rich, a single image is maybe a square degree of the sky. The resolution is just amazing.”

And when it comes to technology, Kulkarni is adamant that it’s not only bigger and more expensive telescopes that are pushing the frontiers of astronomy. “There is more room sideways,” he says, meaning that much progress can be made by repurposing existing facilities.

Indeed, both the ZTF and, before it, the PTF have used the Samuel Oschin telescope at the Palomar Observatory in California. This is a 48-inch (1.2 metre) facility that saw first light 75 years ago. With new instruments, old telescopes can be used to study the sky “ferociously”, he says.

Kulkarni told me that even he was surprised at the number of papers that ZTF data have spawned since the facility came online in 2018. One important reason, says Kulkarni, is that ZTF immediately shares its data freely with astronomers around the world. Indeed, it is the explosion in data from facilities like the ZTF along with rapid improvements in data processing that Kulkarni believes has put us in a  golden age of astronomy.

Beyond the technology, Kulkarni says that the very nature of the cosmos means that there will always be opportunities for astronomers. He muses that the universe has been around for nearly 14 billion years and has had “many opportunities to do some very strange things – and a very long time to cook up those things – so there’s no shortage of phenomena to explore”.

Great time to be an astronomer

So it is a great time to consider a career in astronomy, and Kulkarni’s advice to aspiring astronomers is to be pragmatic about how they approach the field. “Figure out who you are and not who you want to be,” he says. “If you want to be an astronomer, there are roughly three categories open to you. You can be a theorist who puts a lot of time into understanding the physics, and especially the mathematics, that are used to make sense of astronomical observations.”

At the other end of the spectrum are the astronomers who build the “gizmos” that are used to scan the heavens – generating the data that the community rely on. The third category, says Kulkarni, falls somewhere between these two extremes and includes the modellers. These are the people who take the equations developed by the theorists and create computer models that help us understand observational data.

“Astronomy is a fantastic field and things are really happening in a very big way.” He asks new astronomers to “bring a fresh perspective, bring energy, and work hard”. He also says that success comes to those who are willing to reflect on their strengths and weaknesses. “Life is a process of continual improvement, continual education, and continual curiosity.”

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
Analysis "No shortage of phenomena to explore," says expert on variable and transient objects https://physicsworld.com/wp-content/uploads/2024/09/5-9-24-Kulkarni_Shri-Faculty.jpg newsletter
Twisted fibres capture more water from fog https://physicsworld.com/a/twisted-fibres-capture-more-water-from-fog/ Wed, 04 Sep 2024 14:00:41 +0000 https://physicsworld.com/?p=116579 New finding could allow more fresh water to be harvested from the air

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Twisted fibres are more efficient at capturing and transporting water from foggy air than straight ones. This finding, from researchers at the University of Oslo, Norway, could make it possible to develop advanced fog nets for harvesting fresh water from the air.

In many parts of the world, fresh water is in limited supply and not readily accessible. Even in the driest deserts, however, the air still contains some humidity, and with the right materials, it is possible to retrieve it. The simplest way of doing this is to use a net to catch water droplets that condense on the material for later release. The most common types of net for this purpose are made from steel extruded into wires; plastic fibres and strips; or woven poly-yarn. All of these have uniform cross-sections and are therefore relatively smooth and straight.

Nature, however, abounds with slender, grooved and bumpy structures that plants and animals have evolved to capture water from ambient air and quickly transport droplets where they need to go. Cactus spines, nepenthes plants, spider spindle silk and Namib desert beetle shells are just a few examples.

From “barrel” to “clamshell”

Inspired by these natural structures, Vanessa Kern and Andreas Carlson of the mechanics section in Oslo’s Department of Mathematics placed water droplets on two vertical threads that they had mechanically twisted together. They then recorded the droplets’ flow paths using high-speed imaging.

By changing the tightness, or wavelength, of the twist, the researchers were able to control when the droplet changed from its originally symmetric “barrel” shape to an asymmetric “clamshell” configuration. This allowed the researchers to speed up or slow down the droplets’ flow. While this is not the first time that scientists have succeeded in changing the shapes of droplets sliding down fibres, most previous work focused on perfectly wetting liquids, rather than partially wetting ones as was the case here.

Once they understood the droplets’ dynamics, Kern and Carlson designed nets that could be pre-programmed with anti-clogging properties. They then analysed the twisted fibres’ ability to collect water from fog flowing through an experimental wind tunnel, plotting the fibres’ water yield as a function of how much they were twisted.

Grooves that work as a water slide

The Oslo team found that the greater the number of twists, the more water the fibres captured. Notably, the increase was greater than would be expected from an increase in surface area alone. The team say this implies that the geometry of the twists is more important than area in increasing fog capture.

“Introducing a twist allowed us to effectively form grooves that work as a water slide as it stabilises a liquid film,” Kern explains. “This alleviates the well-known problem of straight fibres, where droplets would get stuck/pinned.”

The twisted fibres would make good fog nets, adds Carlson. “Fog nets are typically made up of plastic fibres and used to harvest fresh water from fog in arid regions such as in Morocco. Our results indicate that these twisted fibres could indeed be beneficial in terms of increasing the efficiency of such nets compared to straight fibres.”

The researchers are now working on testing their twisted fibres in a wider range of wind and fog conditions. They hope these tests will show which environments the fibres work best in, and where they might be most suitable for water harvesting. “We also want to move towards conditions closer to those found in the field,” they say. “There are still many open questions about the small-scale physics of the flow inside the grooves between these fibres that we want to answer too.”

The study is detailed in PNAS.

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Research update New finding could allow more fresh water to be harvested from the air https://physicsworld.com/wp-content/uploads/2024/09/24-02252-1.jpg
Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us https://physicsworld.com/a/robot-cooked-pizza-delivered-to-your-door-heres-what-zumes-failure-tells-us/ Wed, 04 Sep 2024 10:00:33 +0000 https://physicsworld.com/?p=116354 James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
A red truck and small car behind it, from the Zume Pizza company

“The $500 million robot pizza start-up you never heard of has shut down, report says.”

Click-bait headlines don’t always tell the full story and this one is no exception. It appeared  last year on the Business Insider website and concerned Zume – a Silicon Valley start-up backed by Japanese firm SoftBank, which once bought chip-licensing firm Arm Holdings. Zume proved to be one of the biggest start-up failures in 2023, burning through nearly half a billion dollars of investment (yes, half a billion dollars) before closing down.

Zume was designed to deliver pizzas to customers in vans, with the food prepared by robots and cooked in GPS-equipped automated ovens. The company was founded in 2015 as Zume Pizza, delivering its first pizzas the year after. But according to Business Insider, which retold a story from The Information, Zume struggled with problems like “stopping melted cheese from sliding off its pizzas while they cooked in moving trucks”.

It’s easy to laugh, but the headline from Business Insider belittled the start-up founders and their story

It’s easy to laugh, but the headline from Business Insider belittled the start-up founders and their story. Unless you’ve set up your own firm, you probably won’t understand the passion, dedication and faith needed to found or join a start-up team. Still, from a journalistic point of view, the headline did the trick in that it encouraged me to delve further into the story. Here’s what I think we can learn from the episode.

A new spin on pizza

On the face of it, Zume is a brilliant and compelling idea. You’re taking the two largest costs of the pizza-delivery business – chefs to cook the food and premises to house the kitchen – and removing them from the equation. Instead, you’re replacing them with delivery vehicles that make the food automatically en-route, potentially even using autonomous vehicles that don’t need human drivers either. What could possibly go wrong?

Zume, which quickly raised $6m in Series A investment funding in 2016, delivered its first pizzas in September of that year. The company secured a patent on cooking during delivery, which included algorithms to predict customer choices. It also planned to work with other firms to provide further robot-prepared food, such as salads and desserts.

By 2018 the concept had captured the imagination of major investors such as SoftBank, which saw the potential for Zume to become the “Amazon of pizza”. The scope was huge: the overall global pizza market was worth $197bn in 2021 and is set to grow to $551bn by 2031, according to market research firm Business Research Insights. So it should be possible to grab a piece of the pie with enough funding and focused, disruptive innovation.

But with customers complaining about the robotic pizzas, the company said in 2018 it was moving in new directions. Instead, it now planned to use artificial intelligence (AI) and its automated production technology for automated food trucks and would form a larger umbrella company – Zume, Inc. It also planned to start licensing its automation technology.

In November 2018 the company raised $375m from SoftBank, now making it worth an eye-popping $2.25bn. It then started focusing on automated production and packaging for other food firms, including buying Pivot – a company that made sustainable, plant-based packaging. By 2020 it was concentrating fully on compostable food packaging and then laid off more than 500 staff, including its entire robotics and delivery truck teams.

Sadly, Zume, Inc was unable to sustain enough sales or bring in enough external funding. Investment cash started running precariously low and in June 2023 the firm was eventually shut down, leaving “joke” headlines about cheese sliding off. How very sad for all involved, but this was only a small part of the issues the company faced.

Inside Zume

Many have speculated where it all went wrong for Zume. To me, the problem seemed to be execution and understanding of the market. The food industry is dominated by established brands, big advertising budgets and huge promotions. When faced with these kinds of challenges, any new business must work out how to compete in, break into and disrupt this kind of established market.

Once I looked into what happened at Zume, it wasn’t quite as amazing as I initially thought. To my mind, the logical thing would have been to have all the operations on the truck. But according to a video released by the company in 2016, that’s not what they did. Instead, Zume built an overly complex robot production line in a larger space than a traditional pizza outlet to make the pizzas.

The food was then loaded onto trucks and cooked en-route in a van equipped with 56 automated ovens. Each was timed so that the pizza would be ready shortly before it arrived at the customer’s address. Zume had an app and aimed to cut the time from order to delivery to 22 minutes – which was pretty good. But the app in itself wasn’t a big innovation; after all Domino’s first had one as far back as 2010.

In American start-up culture, failure is not an embarrassment. It’s seen as a learning experience, and looking at the mistakes of history can yield some valuable insights. But then I stumbled upon a really great article by a firm called Legendary Supply Chain that spelled out clearly what happened. Turns out, what really went wrong was Zume’s lack of understanding of the drivers and economics of the pizza-delivery business.

The 3Ps of pizza

Pizzas have a tiny profit margin. But Zume created massive capital costs by developing automation systems, which meant they’d have to sell loads of pizza to make enough return on investment. Worse still, using FedEx-sized trucks to deliver individual pizzas was inherently wasteful and impractical. That’s why you’ll usually see most pizza delivery drivers on bicycles, mopeds or cars, which are a far more cost-effective means of delivery.

You could say that Zume re-invented the wheel by re-creating – at great cost – the automation you find in frozen-pizza factories and applying it to a much smaller scale operation. It also seems that the firm didn’t focus enough on the product or what the customers wanted – and instead seemed to solve problems that didn’t exist. In short, the execution was poor and the $400m raised rather went to managers’ heads.

Countless successful companies prove what’s vital are the “3Ps”: product, price and promotion. People buy pizza on an impulse. For me, whenever the idea of a pizza pops into my head, I want something that’s yummy, saves me from cooking and perhaps reminds me of Italian holidays. According to customer feedback, Zume’s pizza was only “okay”. Apart from the cheese occasionally sliding off, it wasn’t any better or worse than anything else.

As far as price was concerned, Zume’s pizzas ought to have been cheaper to make and deliver than rival firms. However, Zume charged a premium price on account of the food being slightly fresher as it was cooked while being delivered. Customers, unfortunately, didn’t buy into this argument sufficiently. I’m not sure what Zume did to promote their products, but with all that money sloshing around, they certainly had more than enough to create a brand.

Zume’s failure won’t be the last attempt to disrupt or break into the pizza-delivery market – and learning from past mistakes could well help

I’m sure Zume’s failure won’t be the last attempt to disrupt or break into the pizza-delivery market – and learning from past mistakes could well help. In fact, I can see why putting sufficiently low-cost automation on a fleet of small vans – coupled with low-cost, central supply depots – might make the economics more favourable. But anyone wanting to revolutionize pizza delivery will have to map out the costs and economics of the business to get funded and have some good answers to where Zume went wrong.

The odds for start-up success are not good. As I’ve mentioned before, almost 90% of start-ups in the UK survive their first year, but fewer than half make it beyond five years. To get there – whether you’re making pizzas or photodetectors – you’ll need a good plan, a great team, a degree of luck and good timing to compete in the market. But if you do succeed, the rewards are clear.

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
Opinion and reviews James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business https://physicsworld.com/wp-content/uploads/2024/08/2024-09-Transactions-Pizzaslice_feature.jpg newsletter
Quark distribution in light–heavy mesons is mapped using innovative calculations https://physicsworld.com/a/quark-distribution-in-light-heavy-mesons-is-mapped-using-innovative-calculations/ Wed, 04 Sep 2024 07:27:27 +0000 https://physicsworld.com/?p=116554 Form factors can be tested by collider experiments

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.

]]>
The distribution of quarks inside flavour-asymmetric mesons has been mapped by Yin-Zhen Xu of the University of Huelva and Pablo de Olavide University in Spain. These mesons are strongly interacting particles composed of a quark and an antiquark, one heavy and one light.

Xu employed the Dyson–Schwinger/Bethe–Salpeter equation technique to calculate the heavy–light meson electromagnetic form factors, which can be measured in collider experiments. These form factors provide invaluable information about the properties of the strong interactions as described by quantum chromodynamics.

“The electromagnetic form factors, which describe the response of composite particles to electromagnetic probes, provide an important tool for understanding the structure of bound states in quantum chromodynamics,” explains Xu. “In particular, they can be directly related to the charge distribution inside hadrons.”

From numerous experiments, we know that particles that interact via the strong force (such as protons and neutrons) consist of quarks bound together by gluons. This is similar to how nuclei and electrons are bound into atoms through the exchange of photons, as described by quantum electrodynamics. However, doing precise calculations in quantum chromodynamics is nearly impossible, and this makes predicting the internal structure of hadrons extremely challenging.

Approximation techniques

To address this challenge, scientists have developed several approximation techniques. One such method is the lattice approach, which replaces the infinite number of points in real space with a finite grid, making calculations more manageable. Another effective method involves solving the Dyson–Schwinger/Bethe–Salpeter equations, using truncations that neglect certain subtle effects in the strong interactions of quarks with gluons, as well as the virtual quark–antiquark pairs that are constantly appearing and disappearing in the vacuum.

Xu’s new study, which is described in the Journal of High Energy Physics, utilized the Dyson–Schwinger/Bethe–Salpeter approach to investigate the properties of hadrons made of quarks and antiquarks of different types (or flavours) with significant mass differences. For instance, K-mesons are composed of a strange antiquark with a mass of around 100 MeV and an up or down quark with a mass of only a few megaelectronvolts. The substantial difference in quark masses simplifies their interaction, which allows the author to extract more information about the structure of flavour-asymmetric mesons.

Xu began his study by calculating the masses of mesons and compared these results with experimental data. He found that the Dyson–Schwinger/Bethe–Salpeter method produced results comparable to the best previously used methods, validating his approach.

Deducing quark distributions

Xu’s next step was to deduce the distribution of quarks within the mesons. Quantum effects prevent particles from being localized in space, so he calculated the probability of their presence in certain regions, whose size depends on the properties of the quarks and their interactions with surrounding particles.

Xu discovered that the heavier the quark, the more localized it is within the meson, with distribution ranges differing by more than a factor of ten. For instance, in B-mesons, the distribution range for a bottom antiquark is much smaller (0.07 fm) than that for the much lighter up or down quarks (0.80 fm). In contrast, the distribution ranges for the two light quarks inside π-mesons are almost equal.

Using these quark distributions, Xu then computed the electromagnetic form factors, which encode the details of charge and current distribution within the mesons. The values he obtained closely matched the available experimental data.
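As background – this is the textbook relation between a form factor and a charge radius, not a formula quoted from Xu’s paper – the way a form factor “encodes” a charge distribution can be seen from its behaviour at small momentum transfer:

```latex
% Textbook relation (not quoted from Xu's paper): for a charged meson the
% electromagnetic form factor F(Q^2) at small momentum transfer Q^2 expands as
F(Q^2) = 1 - \frac{\langle r^2 \rangle}{6}\,Q^2 + \mathcal{O}(Q^4),
\qquad
\langle r^2 \rangle = -6 \left.\frac{\mathrm{d}F}{\mathrm{d}Q^2}\right|_{Q^2 = 0}
% so the slope of the form factor at Q^2 = 0 fixes the mean-square charge
% radius, i.e. the spatial spread of the quark charge distribution.
```

In this picture, a more compact charge distribution – such as that of the heavy antiquark – corresponds to a form factor that falls off more slowly with increasing momentum transfer.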

In his work, Xu has shown that the Dyson–Schwinger/Bethe–Salpeter technique is particularly well-suited for studying heavy-light mesons, often surpassing even the most sophisticated and resource-intensive methods used previously.

Room for refinement

Although Xu’s results are promising, he admits that there is room for refinement. On the experimental side, measuring some currently unknown form factors could allow comparisons with his computed values to further verify the method’s consistency.

From a theoretical perspective, more details about strong interactions within mesons could be incorporated into the Dyson–Schwinger/Bethe–Salpeter method to enhance computational accuracy. Additionally, other meson parameters can be computed using this approach, allowing more extensive comparisons with experimental data.

“Based on the theoretical framework applied in this work, other properties of heavy–light mesons, such as various decay rates, can be further investigated,” concludes Xu.

The study also provides a powerful tool for exploring the intricate world of strongly interacting subatomic particles, potentially opening new avenues in particle physics research.

The calculations are described in The Journal of High Energy Physics.

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.

]]>
Research update Form factors can be tested by collider experiments https://physicsworld.com/wp-content/uploads/2024/09/3-09-24-quantum-entanglement-web-465535389-iStock_Traffic-Analyzer.jpg newsletter1
Estonia becomes first Baltic state to join CERN https://physicsworld.com/a/estonia-becomes-first-baltic-state-to-join-cern/ Tue, 03 Sep 2024 12:45:08 +0000 https://physicsworld.com/?p=116548 The Baltic nation is now the 24th member state of the Geneva-based particle-physics lab

The post Estonia becomes first Baltic state to join CERN appeared first on Physics World.

]]>
Estonia is the first Baltic state to become a full member of the CERN particle-physics lab near Geneva. The country, which has a population of 1.3 million, formally became the 24th CERN member state on 30 August. Estonia is now expected to pay around €1.5m each year in membership fees.

CERN, which is celebrating its 70th anniversary this year, has member countries – including France, Germany and the UK – that pay costs towards the lab’s programmes and sit on its governing council. Full membership also allows a country’s nationals to become CERN staff and its firms to bid for CERN contracts. The lab also has 10 “associate member” states and four countries or organizations with “observer” status, such as the US.

Accelerating collaborations

A first cooperation agreement between Estonia and CERN was signed in 1996, which was followed by a second agreement in 2010 with the country paying about €300,000 each year to the lab. Estonia formally applied for CERN membership in 2018 and on 1 February 2021 the country became an associate member state “in the pre-stage” to fully joining CERN.

Physicists in Estonia are already part of the CMS collaboration at the lab’s Large Hadron Collider (LHC) and they participate in data analysis and the Worldwide LHC Computing Grid (WLCG), in which a “tier 2” centre is located in Tallinn. Scientists from Estonia also contribute to other CERN experiments including CLOUD, COMPASS, NA66 and TOTEM, as well as work on future collider designs.

Estonia’s president, Alar Karis, who trained as a bioscientist, says he is “delighted” with the country’s full membership. “CERN accelerates more than tiny particles, it also accelerates international scientific collaboration and our economies,” Karis adds. “We have seen this potential during our time as associate member state and we are keen to begin our full contribution.”

CERN director general Fabiola Gianotti says she is “very pleased to welcome Estonia” as a full member. “I am sure the country and its scientific community will benefit from increased opportunities in fundamental research, technology development, and education and training.”

The post Estonia becomes first Baltic state to join CERN appeared first on Physics World.

]]>
News The Baltic nation is now the 24th member state of the Geneva-based particle-physics lab https://physicsworld.com/wp-content/uploads/2024/09/Estonia-flag-1495336833-iStock_Peter-Ekvall.jpg
Akiko Nakayama: the Japanese artist skilled in fluid mechanics https://physicsworld.com/a/akiko-nakayama-the-japanese-artist-skilled-in-fluid-mechanics/ Tue, 03 Sep 2024 10:00:11 +0000 https://physicsworld.com/?p=116458 Sidney Perkowitz explores the science behind the work of Japanese painter Akiko Nakayama

The post Akiko Nakayama: the Japanese artist skilled in fluid mechanics appeared first on Physics World.

]]>
Any artist who paints is intuitively an expert in empirical fluid mechanics, manipulating liquid and pigment for aesthetic effect. The paint is usually brushed onto a surface material, although it can also be splattered onto a horizontal canvas in a technique made famous by Jackson Pollock or even layered on with a palette knife, as in the works of Paul Cezanne or Henri Matisse. But however the paint is delivered, once it dries, the result is always a fixed, static image.

Japanese artist Akiko Nakayama is different. Based in Tokyo, she makes the dynamic real-time flow of paint, ink and other liquids the centre of her work. Using a variety of colours, she encourages the fluids to move and mix, creating gorgeous, intricate patterns that transmute into unexpected forms and shades.

What also sets Nakayama apart is that she doesn’t work in private. Instead, she performs public “Alive painting” sessions, projecting her creations onto large surfaces, to the accompaniment of music. Audiences see the walls of the venue covered with coloured shapes that arise from natural processes modified by her intervention. The forms look abstract, but in their mutations often resemble living creatures in motion.

Inspired by ink

Born in 1988, Nakayama was trained in conventional techniques of Eastern and Western painting, earning degrees in fine art from Tokyo Zokei University in 2012 and 2014. Her interest in dynamic art goes back to a childhood calligraphy class, where she found herself enthralled by the beauty of the ink flowing in the water while washing her brush.

“It was more beautiful than the characters [I had written],” she recalls, finding herself “fascinated by the freedom of the ink”. Later, while learning to draw, she always preferred to capture a “moment of time” in her sketches. Eventually, Nakayama taught herself how to make patterns from moving fluids, motivated by Johann Wolfgang von Goethe’s treatise Theory of Colours (1810).

Best known as a writer, Goethe also took a keen interest in science and his book critiques Isaac Newton’s work on the physical properties of light. Goethe instead offered his own more subjective insights into his experiments with colour and the beauty they produce. Despite its flaws as a physical theory of light, reading the book encouraged Nakayama to develop methods to pour and agitate various paints in Petri dishes, and to project the results in real time using a camera designed for close-up viewing.

Akiko Nakayama stands bottom right of a large screen that displays the artwork she is creating on stage

She started learning about liquids, reading research papers and even began examining the behaviour of water droplets under strobe lights. Nakayama also looked into studies of zero gravity on liquids by JAXA, the Japanese space agency. After finding a 10 ml sample of ferrofluid – a nanoscale ferromagnetic colloidal liquid – in a student science kit, she started using the material in her presentations, manipulating it with a small, permanent magnet.

Nakayama’s art has an unexpected link with space science because ferrofluids were invented in 1963 by NASA engineer Steve Papell, who sought a way to pump liquid rocket fuel in microgravity environments. By putting tiny iron oxide particles into the fuel, he found that the liquid could be drawn into the rocket engine by an electromagnet. Ferrofluids were never used by NASA, but they have many applications in industry, medicine and consumer products.

Secret science of painting

As Nakayama has presented dozens of live performances, exhibitions and commissioned works in Japan and internationally over the last decade, other scientific connections have emerged. She has, for example, mixed acrylic ink with alcohol, dropping the fluid onto a thin layer of acrylic paint to create wonderfully intricate branched, tree-like dendritic forms.

In 2023 her painting caught the attention of materials scientists San To Chan and Eliot Fried at the Okinawa Institute of Science and Technology in Japan. They ended up working with Nakayama to analyse dendritic spreading in terms of the interplay of the different viscosities and surface tensions of the fluids (figure 1).

1 Magic mixtures

Images of 15 ink blots that have spread different amounts

When pure ink is dropped onto an acrylic resin substrate 400 µm thick, it remains fairly static over time (top). But if isopropanol (IPA) is mixed into the ink, the combined droplet spreads out to yield intricate, tree-like dendritic patterns. Shown here are drops with IPA at two different volume concentrations: 16.7% (middle) and 50% (bottom).

Chan and Fried published their findings, concluding that the structures have a fractal dimension of 1.68, which is characteristic of “diffusion-limited aggregation” – a process that involves particles clustering together as they diffuse through a medium (PNAS Nexus 3 59).
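
To give a feel for what diffusion-limited aggregation means, the toy simulation below grows a cluster by releasing random walkers one at a time and freezing each walker the moment it touches the aggregate. It is a generic, minimal sketch of the DLA process described above – not the model Chan and Fried fitted to the ink experiments – and the two-radius estimate of the fractal dimension is crude, approaching the accepted DLA value of roughly 1.7 only for much larger clusters.

```python
import math
import random

def grow_dla(n_particles=400, seed=1):
    """Grow a toy diffusion-limited aggregation (DLA) cluster on a square lattice.

    Purely illustrative of the generic DLA process; not the model used by
    Chan and Fried for the ink experiments.
    """
    random.seed(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    cluster = {(0, 0)}          # seed particle at the origin
    max_r = 0.0
    while len(cluster) < n_particles:
        # release a random walker on a circle just outside the existing cluster
        theta = random.uniform(0.0, 2.0 * math.pi)
        r0 = max_r + 5.0
        x, y = round(r0 * math.cos(theta)), round(r0 * math.sin(theta))
        while True:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
            r = math.hypot(x, y)
            if r > max_r + 20.0:          # wandered too far: abandon this walker
                break
            if any((x + ex, y + ey) in cluster for ex, ey in steps):
                cluster.add((x, y))       # touched the aggregate: stick here
                max_r = max(max_r, r)
                break
    return cluster

# Mass-radius scaling N(r) ~ r^D gives a rough estimate of the fractal dimension D
cluster = grow_dla()
r1, r2 = 8.0, 16.0
n1 = sum(1 for p in cluster if math.hypot(*p) <= r1)
n2 = sum(1 for p in cluster if math.hypot(*p) <= r2)
print("rough fractal dimension:", math.log(n2 / n1) / math.log(r2 / r1))
```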

The two researchers also investigated the liquid parameters so that an experimentalist or artist could tune the arrangement to vary the dendritic results. Nakayama calls this result a “map” that allows her to purposefully create varied artistic patterns rather than “going on an adventure blindly”. Chan and Fried have even drawn up a list of practical instructions so that anyone inclined can make their own dendritic paintings at home.

Another researcher who has also delved into the connection between fluid dynamics and art is Roberto Zenit, a mechanical engineer at Brown University in the US. Zenit has shown that Jackson Pollock created his famous abstract compositions by carefully controlling the motion of viscous filaments (Phys. Rev. Fluids 4 110507). Pollock also avoided hydrodynamic instabilities that would have otherwise made the paint break up before it hit the canvas (PLOS One 14 e0223706).

Deeper meanings

Although Nakayama likes to explore the science behind her artworks, she has not lost sight of the deeper meanings in art. She told me, for example, that the bubbles that sometimes arise as she creates liquid shapes have a connection with the so-called “Vanitas” tradition in art that emerged in western Europe in the 16th and 17th centuries.

Derived from the Latin word for “vanity”, this kind of art was not about having an over-inflated belief in oneself as the word might suggest. Instead, these still-life paintings, largely by Dutch artists, would often have symbols and images that indicate the transience and fragility of life, such as snuffed-out candles with wisps of smoke, or fragile soap bubbles blown from a pipe.

A large screen showing a bubble in a field of blue

The real bubbles in Nakayama’s artworks always stay spherical thanks to their strong surface tension, thereby displaying – in her mind – a human-like mixture of strength and vulnerability. It’s not quite the same as the fragility of the Vanitas paintings, but for Nakayama – who acknowledges that she’s not a scientist – her works are all about creating “a visual conversation between an artist and science”.

Asked about her future directions in art, however, Nakayama’s response makes immediate sense to any scientist. “Finding universal forms of natural phenomena in paintings is a joy and discovery for me,” she says. “I would be happy to continue to learn about the physics and science that make up this world, and to use visual expression to say ‘the world is beautiful’.”

The post Akiko Nakayama: the Japanese artist skilled in fluid mechanics appeared first on Physics World.

]]>
Feature Sidney Perkowitz explores the science behind the work of Japanese painter Akiko Nakayama https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Perkowitz-Nagayama-EternalArt.jpg newsletter
Open problem in quantum entanglement theory solved after nearly 25 years https://physicsworld.com/a/open-problem-in-quantum-entanglement-theory-solved-after-nearly-25-years/ Tue, 03 Sep 2024 08:30:44 +0000 https://physicsworld.com/?p=116537 Non-existence of universal maximally entangled isospectral mixed states has implications for research on quantum technologies

The post Open problem in quantum entanglement theory solved after nearly 25 years appeared first on Physics World.

]]>
A quarter of a century after it was first posed, a fundamental question about the nature of quantum entanglement finally has an answer – and that answer is “no”. In a groundbreaking study, Julio I de Vicente from the Universidad Carlos III de Madrid, Spain showed that so-called maximally entangled mixed states for a fixed spectrum do not always exist, challenging long-standing assumptions in quantum information theory in a way that has broad implications for quantum technologies.

Since the turn of the millennium, the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, Austria, has maintained a conspicuous list of open problems in the quantum world. Number 5 on this list asks: “Is it true that for arbitrary entanglement monotones one gets the same maximally entangled states among all density operators of two qubits with the same spectrum?” In simpler terms, this question is essentially asking whether a quantum system can maintain its maximally entangled state in a realistic scenario, where noise is present.

This question particularly suited de Vicente, who has long been fascinated by foundational issues in quantum theory and is drawn to solving well-defined mathematical problems. Previous research had suggested that such a maximally entangled mixed state might exist for systems of two qubits (quantum bits), thereby maximizing multiple entanglement measures. In a study published in Physical Review Letters, however, de Vicente concludes otherwise, demonstrating that for certain rank-2 mixed states, no state can universally maximize all entanglement measures across all states with the same spectrum.

“I had tried other approaches to this problem that turned out not to work,” de Vicente tells Physics World. “However, once I came up with this idea, it was very quick to see that this gave the solution. I can say that I felt very excited seeing that such a relatively simple argument could be used to answer this question.”

Importance of entanglement

Mathematics aside, what does this result mean for real-world applications and for physics? Well, entanglement is a unique quantum phenomenon with no classical counterpart, and it is essential for various quantum technologies. Since our present experimental reach is limited to a restricted set of quantum operations, entanglement is also a resource, and a maximally entangled state (meaning one that maximizes all measures of entanglement) is an especially valuable resource.

One example of a maximally entangled state is a Bell state – one of four specific states of a two-qubit system in which each qubit is in a superposition of 0 and 1. Bell states are pure states, meaning that they can, in principle, be known with complete precision. This doesn’t necessarily mean they have definite values for properties like energy and momentum, but it distinguishes them from a statistical mixture of different pure states.
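
For reference, the four Bell states of a two-qubit system can be written as follows (standard textbook notation, not taken from de Vicente’s paper):

\[
|\Phi^{\pm}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right),
\qquad
|\Psi^{\pm}\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle \pm |10\rangle\right).
\]

Each is a pure state, yet a measurement on either qubit alone gives a completely random outcome – the hallmark of maximal entanglement.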

Maximally entangled mixed states

The concept of maximally entangled mixed states (MEMS) is a departure from the traditional view of entanglement, which has been primarily associated with pure states. Conceptually, when we talk about a pure state, we imagine a scenario where a device consistently produces the same quantum state through a specific preparation process. However, practical scenarios often involve mixed states due to noise and other factors.

In effect, MEMS are a bridge between theoretical models and practical applications, offering robust entanglement even in less-than-ideal conditions. This makes them particularly valuable for technologies like quantum encryption and quantum computing, where maintaining entanglement is crucial for performance.

What next?

de Vicente’s result relies on an entanglement measure that is constructed ad hoc and has no clear operational meaning. A more relevant version of this result for applications, he says, would be to “identify specific quantum information protocols where the optimal state for a given level of noise is indeed different”.

While de Vicente’s finding addresses an existing question, it also introduces several new ones, such as the conditions needed to simultaneously optimize various entanglement measures within a system. It also raises the possibility of investigating whether de Vicente’s theorems hold under other notions of “the same level of noise”, particularly if these arise in well-defined practical contexts.

The implications of this research extend beyond theoretical physics. By enabling better control and manipulation of quantum states, MEMS could revolutionize how we approach problems in quantum mechanics, from computing to material science. Now that we understand their limitations better, researchers are poised to explore their potential applications, including their role in developing quantum technologies that are robust, scalable, and practical.

The post Open problem in quantum entanglement theory solved after nearly 25 years appeared first on Physics World.

]]>
Research update Non-existence of universal maximally entangled isospectral mixed states has implications for research on quantum technologies https://physicsworld.com/wp-content/uploads/2024/09/entanglement_4132506_iStock_Kngkyle21.jpg newsletter1
Metasurface makes thermal sources emit laser-like light https://physicsworld.com/a/metasurface-makes-thermal-sources-emit-laser-like-light/ Mon, 02 Sep 2024 10:41:54 +0000 https://physicsworld.com/?p=116527 Pillar-studded surface just hundreds of nanometres thick allows researchers to control direction, polarization and phase of thermal radiation

The post Metasurface makes thermal sources emit laser-like light appeared first on Physics World.

]]>
Incandescent light bulbs and other thermal radiation sources can produce coherent, polarized and directed emissions with the help of a structured thin film known as a metasurface. Created by Andrea Alù and colleagues at the City University of New York (CUNY), US, the new metasurface uses a periodic structure with tailored local perturbations to transform ordinary thermal emissions into something more like a laser beam – an achievement heralded as “just the beginning” for thermal radiation control.

Scientists have previously shown that metasurfaces can perform tasks such as wavefront shaping, beam steering, focusing and vortex beam generation that normally require bulky traditional optics. However, these metasurfaces only work with the highly coherent light typically emitted by lasers. “There is a lot of hype around compactifying optical devices using metasurfaces,” says Alù, the founding director of CUNY’s Photonics Initiative. “But people tend to forget that we still need a bulky laser that is exciting them.”

Unlike lasers, most light sources – including LEDs as well as incandescent bulbs and the Sun – produce light that is highly incoherent and unpolarized, with spectra and propagation directions that are hard to control. While it is possible to make thermal emissions coherent, doing so requires special silicon carbide materials, and the emitted light has several shortcomings. Notably, a device designed to emit light to the right will also emit it to the left – a fundamental symmetry known as reciprocity.

Some researchers have argued that reciprocity fundamentally limits how asymmetric the wavefront emitted from such structures can be. However, in 2021 members of Alù’s group showed theoretically that a metasurface could produce coherent thermal emission for any polarization, travelling in any direction, without relying on special materials. “We found that the reciprocity constraint could be overcome with a sufficiently complicated geometry,” Alù says.

Smart workarounds

The team’s design incorporated two basic elements. The first is a periodic array that interacts with the light in a highly non-local way, creating a long-range coupling that forces the random oscillations of thermal emission to become coherent across long time scales and distances. The second element is a set of tailored local perturbations to this periodic structure that make it possible to break the symmetry in emission direction.

The only problem was that this structure proved devilishly difficult to construct, as it would have required aligning two independent nanostructured arrays within a 10 nm tolerance. In the latest work, which is described in Nature Nanotechnology, Alù and colleagues found a way around this by backing one structured film with a thin layer of gold. This metallic backing effectively creates an image of the structure, which breaks the vertical symmetry as needed to realize the effect. “We were surprised this worked,” Alù says.

The final structure was made from silicon and structured as an array of rectangular pillars (for the non-local interactions) interspersed with elliptical pillars (for the asymmetric emission). Using this structure, the team demonstrated coherent directed emission for six different polarizations, at frequencies of their choice. They also used it to send circularly polarized light in arbitrary directions, and to split thermal emissions into orthogonally polarized components travelling in different directions. While this so-called photonic Rashba effect has been demonstrated before in circularly polarized light, the new thermal metasurface produces the same effect for arbitrary polarizations – something not previously thought possible.

According to Alù, the new metasurface offers “interesting opportunities” for lighting, imaging, and thermal emission management and control, as well as thermal camouflaging. George Alexandropoulos, who studies metasurfaces for informatics and telecommunication at the National and Kapodistrian University of Athens, Greece but was not involved in the work, agrees. “Metasurfaces controlling thermal radiation could direct thermal emission to energy-harvesting wireless devices,” he says.

Riccardo Sapienza, a physicist at Imperial College London, UK, who also studies metamaterials and was also not involved in this research, agrees that communication could benefit and suggests that infrared sensing could, too. “This is a very exciting result which brings closer the dream of complete control of thermal radiation,” he says. “I am sure this is just the beginning.”

The post Metasurface makes thermal sources emit laser-like light appeared first on Physics World.

]]>
Research update Pillar-studded surface just hundreds of nanometres thick allows researchers to control direction, polarization and phase of thermal radiation https://physicsworld.com/wp-content/uploads/2024/09/02-09-2024-Thermal-metasurface-artwork.png newsletter1
Researchers cut to the chase on the physics of paper cuts https://physicsworld.com/a/researchers-cut-to-the-chase-on-the-physics-of-paper-cuts/ Sun, 01 Sep 2024 09:00:22 +0000 https://physicsworld.com/?p=116517 A paper cut “sweet spot” just happens to be close to the thickness of paper in print magazines

The post Researchers cut to the chase on the physics of paper cuts appeared first on Physics World.

]]>
If you have ever been on the receiving end of a paper cut, you will know how painful they can be.

Kaare Jensen from the Technical University of Denmark (DTU), however, has found intrigue in this bloody occurrence. “I’m always surprised that thin blades, like lens or filter paper, don’t cut well, which is unexpected because we usually consider thin blades to be efficient,” Jensen told Physics World.

To find out why paper is so successful at cutting skin, Jensen and fellow DTU colleagues carried out over 50 experiments with a range of paper thicknesses to make incisions into a piece of gelatine at various angles.

Through these experiments and modelling, they discovered that paper cuts are a competition between slicing and “buckling”. Thin paper with a thickness of about 30 microns, or 0.03 mm, doesn’t cut so well because it buckles – a mechanical instability that happens when a slender object like paper is compressed. Once this occurs, the paper can no longer transfer force to the tissue, so is unable to cut.

Thick paper, with a thickness greater than around 200 microns, is also ineffective at making an incision. This is because it distributes the load over a greater area, resulting in only small indentations.

The team found, however, a paper-cut “sweet spot” at a thickness of around 65 microns, with the incision made at an angle of about 20 degrees from the surface. This thickness just happens to be close to that of the paper used in print magazines, which goes some way to explaining why such cuts happen so annoyingly often.
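
The researchers’ own analysis is more detailed, but the competition between slicing and buckling can be caricatured with a toy calculation: treat the paper as an Euler column that fails above a critical buckling load, and assume cutting requires the edge pressure to exceed a fixed penetration stress. All parameter values below are rough guesses chosen purely for illustration, and the model is a simplification rather than the DTU team’s.

```python
import math

# All numbers below are rough, illustrative guesses - not values from the DTU study
E = 2e9          # Young's modulus of paper (Pa)
w = 0.01         # width of the paper edge in contact with the skin (m)
L = 0.0028       # free length of paper between the fingers and the skin (m)
sigma_c = 2e5    # stress assumed necessary for the edge to penetrate soft tissue (Pa)
F_max = 0.4      # force assumed to be applied by the hand (N)

def buckling_force(t):
    """Euler buckling load of a clamped-free paper strip of thickness t (m)."""
    second_moment = w * t**3 / 12.0
    return math.pi**2 * E * second_moment / (4.0 * L**2)

def cutting_force(t):
    """Force needed for the edge (contact area ~ w*t) to reach the penetration stress."""
    return sigma_c * w * t

for t_um in (30, 65, 250):
    t = t_um * 1e-6
    f_buckle, f_cut = buckling_force(t), cutting_force(t)
    if f_buckle < f_cut:
        verdict = "buckles before it can cut"
    elif f_cut > F_max:
        verdict = "spreads the load: indents but does not cut"
    else:
        verdict = "cuts"
    print(f"{t_um:>3} um: buckling load {f_buckle:.3f} N, cutting load {f_cut:.2f} N -> {verdict}")
```

With these (deliberately tuned) numbers, very thin paper buckles first, very thick paper merely indents, and a window in between can cut – mirroring the experimentally observed sweet spot.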

Using the results from the work, the researchers created a 3D-printed scalpel that uses scrap paper for the cutting edge. Using this so-called “papermachete” they were able to slice through apple, banana peel, cucumber and even chicken.

Jensen notes that the findings are interesting for two reasons. “First, it’s a new case of soft-on-soft interactions where the deformation of two objects intertwines in a non-trivial way,” he says. “Traditional metal knives are much stiffer than biological tissues, while paper is still stiffer than skin but around 100 times weaker than steel.”

The second is that it is a “great way” to teach students about forces given that the experiments are straightforward to do in the classroom. “Studying the physics of paper cuts has revealed a surprising potential use for paper in the digital age: not as a means of information dissemination and storage, but rather as a tool of destruction,” the researchers write.

The post Researchers cut to the chase on the physics of paper cuts appeared first on Physics World.

]]>
Blog A paper cut “sweet spot” just happens to be close to the thickness of paper in print magazines https://physicsworld.com/wp-content/uploads/2024/08/30-08-24-papermachete2-small.jpg newsletter
LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs https://physicsworld.com/a/lux-zeplin-puts-new-limit-on-dark-matter-mass/ Sat, 31 Aug 2024 13:09:34 +0000 https://physicsworld.com/?p=116521 Announcement makes us pine for the Black Hills

The post LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs appeared first on Physics World.

]]>
This article has been updated to correct a misinterpretation of this null result.  

Things can go a bit off-topic at Physics World and recent news about dark matter got us talking about the beauty of the Black Hills of South Dakota. This region of forest and rugged topography is smack dab in the middle of the Great Plains of North America and is most famous for the giant sculpture of four US presidents at Mount Rushmore.

A colleague from Kansas fondly recalled a family holiday in the Black Hills – and as an avid skier, I was pleased to learn that the region is home to the highest ski lift between the Alps and the Rockies.

The Black Hills also have a special place in the hearts of physicists – especially those who are interested in dark matter and neutrinos. The region is home to the Sanford Underground Research Facility, which is located 1300 m below the hills in a former gold mine. It was there that Ray Davis and colleagues first detected neutrinos from the Sun, for which Davis shared the 2002 Nobel Prize for Physics.

Today, the huge facility is home to nearly 30 experiments that benefit from the mine’s low background radiation. One of the biggest experiments is LUX–ZEPLIN, which is searching for dark-matter particles.

Hypothetical substance

Dark matter is a hypothetical substance that is invoked to explain the dynamics of galaxies, the large-scale structure of the cosmos, and more. While dark matter is believed to account for 85% of mass in the universe, physicists have little understanding of what it is – or indeed if it actually exists.

So far, the best that experiments like LUX–ZEPLIN have done is to tell physicists what dark matter isn’t. Now, the latest result from LUX–ZEPLIN places the best-ever limits on the nature of dark-matter particles called WIMPs.

The measurement involved watching several tonnes of liquid xenon for 280 days, looking for flashes of light that would be created when a WIMP collides with a xenon nucleus. However, no evidence was seen of collisions with WIMPs heavier than 9 GeV/c² – which is about 10 times the mass of the proton.

The team says that the result is “nearly five times better” than previous WIMP searches. “These are new world-leading constraints by a sizable margin on dark matter and WIMPs,” explains Chamkaur Ghag, who speaks for the LUX–ZEPLIN team and is based at University College London.

Digging for treasure

“If you think of the search for dark matter like looking for buried treasure, we’ve dug almost five times deeper than anyone else has in the past,” says Scott Kravitz of the University of Texas at Austin who is the deputy physics coordinator for the experiment.

This will not be the last that we hear from LUX–ZEPLIN, which will collect a total of 1000 days of data before it switches off in 2028. And it’s not only dark matter that the experiment is looking for. Because it is in a low background environment, LUX–ZEPLIN is also being used to search for other rare or hypothetical events such as the radioactive decay of xenon, neutrinoless double beta decay and neutrinos from the beta decay of boron nuclei in the Sun.

LUX–ZEPLIN is not the only experiment at Sanford that is looking for neutrinos. The Deep Underground Neutrino Experiment (DUNE) is currently under construction at the lab and is expected to be completed in 2028. DUNE will detect neutrinos in four huge tanks that will each be filled with 17,000 tonnes of liquid argon. Some neutrinos will be beamed from 1300 km away at Fermilab near Chicago and together the facilities will comprise the Long-Baseline Neutrino Facility.

One aim of the facility is to study the flavour oscillation of neutrinos as they travel over long distances. This could help explain why there is much more matter than antimatter in the universe. By detecting neutrinos from exploding stars, DUNE could also shed light on the nuclear processes that occur during supernovae. And, it might even detect the radioactive decay of the proton, a hypothetical process that could point to physics beyond the Standard Model.

The post LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs appeared first on Physics World.

]]>
Blog Announcement makes us pine for the Black Hills https://physicsworld.com/wp-content/uploads/2024/08/31-8-24-Sanford-vista.jpg newsletter
Gold nanoparticles could improve radiotherapy of pancreatic cancer https://physicsworld.com/a/gold-nanoparticles-could-improve-radiotherapy-of-pancreatic-cancer/ Fri, 30 Aug 2024 09:49:16 +0000 https://physicsworld.com/?p=116502 Irradiating tumours containing gold nanoparticles should enhance radiotherapy effectiveness while minimizing potential side effects

The post Gold nanoparticles could improve radiotherapy of pancreatic cancer appeared first on Physics World.

]]>
Dose distributions for pancreatic radiotherapy

The primary goal of radiotherapy is to effectively destroy the tumour while minimizing side effects to nearby normal tissues. Focusing on the challenging case of pancreatic cancer, a research team headed up at Toronto Metropolitan University in Canada has demonstrated that gold nanoparticles (GNPs) show potential to optimize this fine balance between tumour control probability (TCP) and normal tissue complication probability (NTCP).

GNPs are under scrutiny as candidates for improving the effectiveness of radiation therapy by enhancing dose deposition within the tumour. The dose enhancement observed when irradiating GNP-infused tumour tissue arises mainly from the Auger effect: photon interactions with the gold release showers of short-range secondary (Auger) electrons that deposit their energy locally and damage nearby cancer cells.

“Nanoparticles like GNPs could be delivered to the tumour using targeting agents such as [the cancer drug] cetuximab, which can specifically bind to the epidermal growth factor receptor expressed on pancreatic cancer cells, ensuring a high concentration of GNPs in the tumour site,” says first author Navid Khaledi, now at CancerCare Manitoba.

This increased localized energy deposition should improve tumour control; but it’s also crucial to consider possible toxicity to normal tissues due to the presence of GNPs. To investigate this further, Khaledi and colleagues simulated treatment plans for five pancreatic cancer cases, using CT images from the Cancer Imaging Archive database.

Plan comparison

For each case, the team compared plans generated using a 2.5 MV photon beam in the presence of GNPs with conventional 6 MV plans. “We chose a 2.5 MV beam due to the enhanced photoelectric effect at this energy, which increases the interaction probability between the beam and the GNPs,” Khaledi explains.

The researchers created the treatment plans using the MATLAB-based planning program matRad. They first determined the dose enhancement conferred by 50-nm diameter GNPs by calculating the relative biological effectiveness (RBE, the ratio of dose without to dose with GNPs for equal biological effects) using custom MATLAB codes. The average RBE for the 2.5 MV beam, using α and β radiosensitivity values for pancreatic tumour, was 1.19. They then applied RBE values to each tumour voxel to calculate dose distributions and TCP and NTCP values.

The team considered four treatment scenarios, based on a prescribed dose of 40 Gy in five fractions: 2.5 MV plus GNPs, designed to increase TCP (using the prescribed dose, but delivering an RBE-weighted dose of 40 Gy × 1.19); 2.5 MV plus GNPs, designed to reduce NTCP (lowering the prescribed dose to deliver an RBE-weighted dose of 40 Gy); 6 MV using the prescribed dose; and 6 MV with the prescribed dose increased to 47.6 Gy (40 Gy × 1.19).
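
As a quick check of the arithmetic behind those four scenarios, the snippet below simply applies the reported average RBE of 1.19 to the prescribed 40 Gy. The scenario labels are paraphrased from the study as described above; everything else is bookkeeping.

```python
RBE = 1.19          # average RBE of the 2.5 MV beam with GNPs, as reported
prescribed = 40.0   # prescribed dose (Gy), delivered in five fractions

scenarios = {
    "2.5 MV + GNPs, increased TCP": (prescribed,       RBE),
    "2.5 MV + GNPs, reduced NTCP":  (prescribed / RBE, RBE),
    "6 MV, prescribed dose":        (prescribed,       1.0),
    "6 MV, increased dose":         (prescribed * RBE, 1.0),
}

for name, (physical, rbe) in scenarios.items():
    # physical dose delivered vs the biologically weighted dose it corresponds to
    print(f"{name:30s} physical {physical:5.1f} Gy -> RBE-weighted {physical * rbe:5.1f} Gy")
```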

The analysis showed that the presence of GNPs significantly increased TCP values, from around 59% for the standard 6 MV plans to 93.5% for the 2.5 MV plus GNPs (increased TCP) plans. Importantly, the GNPs helped to maintain low NTCP values of below 1%, minimizing the risk of complications in normal tissues. Using a conventional 6 MV beam with an increased dose also resulted in high TCP values, but at the cost of raising NTCP to 27.8% in some cases.

Minimizing risks

The team next assessed the dose to the duodenum, the main dose-limiting organ for pancreatic radiotherapy. The mean dose to the duodenum was highest for the increased-dose 6 MV photon beam, and lowest for the 2.5 MV plus GNPs plans. Similarly, D2%, the maximum dose received by 2% of the volume, was highest with the increased-dose 6 MV beam, and lowest with 2.5 MV plus GNPs.

It’s equally important to consider dose to the liver and kidney, as these organs may also uptake GNPs. The analysis revealed relatively low doses to the liver and left kidney for all treatment options, with mean dose and D2% generally below clinically significant thresholds. The highest mean doses to the liver and left kidney for 2.5 MV plus GNPs were 3.3 and 7.7 Gy, respectively, compared with 2.3 and 8 Gy for standard 6 MV photons.

The researchers conclude that the use of GNPs in radiation therapy has potential to significantly improve treatment outcomes and benefit cancer patients. Khaledi notes, however, that although GNPs have shown promise in preclinical studies and animal models, they have not yet been tested for radiotherapy enhancement in human subjects.

Next, the team plans to investigate new linac targets that could potentially enable therapeutic applications. “One limitation of the current 2.5 MV beam is its low dose rate (60 MU/min) on TrueBeam linacs, primarily due to the copper target’s heat tolerance,” Khaledi tells Physics World. “Increasing the dose rate could make the beam clinically useful, but it risks melting the copper target. Future work will evaluate the beam spectrum for different target designs and materials.”

The researchers report their findings in Physics in Medicine & Biology.

The post Gold nanoparticles could improve radiotherapy of pancreatic cancer appeared first on Physics World.

]]>
Research update Irradiating tumours containing gold nanoparticles should enhance radiotherapy effectiveness while minimizing potential side effects https://physicsworld.com/wp-content/uploads/2024/08/30-08-24-PMB-GNP-featured.jpg newsletter1
The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? https://physicsworld.com/a/the-wow-signal-did-a-telescope-in-ohio-receive-an-extraterrestrial-communication-in-1977/ Thu, 29 Aug 2024 14:37:42 +0000 https://physicsworld.com/?p=116495 This podcast features an astrobiologist who has identified similar radio signals

The post The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? appeared first on Physics World.

]]>
On 15 August 1977 the Big Ear radio telescope in the US was scanning the skies in a search for signs of intelligent extraterrestrial life. Suddenly, it detected a strong, narrow bandwidth signal that lasted a little longer than one minute – as expected if Big Ear’s field of vision swept across a steady source of radio waves. That source, however, had vanished 24 hours later when the Ohio-based telescope looked at the same patch of sky.

This was the sort of technosignature that searches for extraterrestrial intelligence (SETI) were seeking. Indeed, one scientist wrote the word “Wow!” next to the signal on a paper print-out of the Big Ear data.

Ever since, the origins of the Wow! signal have been debated – and now, a trio of scientists have an astrophysical explanation that does not involve intelligent extraterrestrials. One of them, Abel Méndez, is our guest in this episode of the Physics World Weekly podcast.

Méndez is an astrobiologist at the University of Puerto Rico at Arecibo and he explains how observations made at the Arecibo Telescope have contributed to the trio’s research.

  • Abel Méndez, Kevin Ortiz Ceballos and Jorge I Zuluaga describe their research in a preprint on arXiv.

The post The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? appeared first on Physics World.

]]>
Podcasts This podcast features an astrobiologist who has identified similar radio signals https://physicsworld.com/wp-content/uploads/2024/08/29-8-24-Wow-signal.jpg newsletter
Heavy exotic antinucleus gives up no secrets about antimatter asymmetry https://physicsworld.com/a/heavy-exotic-antinucleus-gives-up-no-secrets-about-antimatter-asymmetry/ Thu, 29 Aug 2024 13:08:31 +0000 https://physicsworld.com/?p=116491 Antihyperhydrogen-4 is observed by the Star Collaboration

The post Heavy exotic antinucleus gives up no secrets about antimatter asymmetry appeared first on Physics World.

]]>
An antihyperhydrogen-4 nucleus – the heaviest antinucleus ever produced – has been observed in heavy ion collisions by the STAR Collaboration at Brookhaven National Laboratory in the US. The antihypernucleus contains a strange quark, making it a heavier cousin of antihydrogen-4. Physicists hope that studying such antimatter particles could shed light on why there is much more matter than antimatter in the visible universe – however in this case, nothing new beyond the Standard Model of particle physics was observed.

In the first millionth of a second after the Big Bang, the universe is thought to have been too hot for quarks to have been bound into hadrons. Instead it comprised a strongly interacting fluid called a quark–gluon plasma. As the universe expanded and cooled, bound baryons and mesons were created.

The Standard Model forbids the creation of matter without the simultaneous creation of antimatter, and yet the universe appears to be made entirely of matter. While antimatter is created by nuclear processes – both naturally and in experiments – it is swiftly annihilated on contact with matter.

The Standard Model also says that matter and antimatter should be identical after charge, parity and time are reversed. Therefore, finding even tiny asymmetries in how matter and antimatter behave could provide important information about physics beyond the Standard Model.

Colliding heavy ions

One way forward is to create quark–gluon plasma in the laboratory and study particle–antiparticle creation. Quark–gluon plasma is made by smashing together heavy ions such as lead or gold. A variety of exotic particles and antiparticles emerge from these collisions. Many of them decay almost immediately, but their decay products can be detected and compared with theoretical predictions.

Quark–gluon plasma can include hypernuclei, which are nuclei containing one or more hyperons. Hyperons are baryons containing one or more strange quarks, making hyperons the heavier cousins of protons and neutrons. These hypernuclei are thought to have been present in the high-energy conditions of the early universe, so physicists are keen to see if they exhibit any matter/antimatter asymmetries.

In 2010, the STAR collaboration unveiled the first evidence of an antihypernucleus, which was created by smashing gold nuclei together at 200 GeV. This was the antihypertriton, which is the antimatter version of an exotic counterpart to tritium in which one of the down quarks in one of the neutrons is replaced by a strange quark.

Now, STAR physicists have created a heavier antihypernucleus. They recorded over 6 billion collisions using pairs of uranium, ruthenium, zirconium and gold ions moving at more than 99.9% of the speed of light. In the resulting quark–gluon plasma, the researchers found evidence of antihyperhydrogen-4 (an antihypertriton with an extra antineutron). Antihyperhydrogen-4 decays almost immediately by emitting a pion, producing antihelium-4 – a nucleus the collaboration first detected in 2011. The researchers therefore knew what to look for among the debris of their collisions.

Sifting through the collisions

Sifting through the collision data, the researchers found 22 events that appeared to be antihyperhydrogen-4 decays. After subtracting the expected background, they were left with approximately 16 events, which was statistically significant enough to claim that they had observed antihyperhydrogen-4.

The researchers also observed evidence of the decays of hyperhydrogen-4, antihypertriton and hypertriton. In all cases, the results were consistent with the predictions of charge–parity–time (CPT) symmetry. This is a central tenet of modern physics that says that if the charge and internal quantum numbers of a particle are reversed, the spatial co-ordinates are reversed and the direction of time is reversed, the outcome of an experiment will be identical.

STAR member Hao Qiu of the Institute of Modern Physics at the Chinese Academy of Sciences says that, in his view, the most important feature of the work is the observation of the hyperhydrogen-4. “In terms of the CPT test, it’s just that we’re able to do it…The uncertainty is not very small compared with some other tests.”

Qiu says that he, personally, hopes the latest research may provide some insight into violation of charge–parity symmetry (i.e. without flipping the direction of time). This has already been shown to occur in some systems. “Ultimately, though, we’re experimentalists – we look at all approaches as hard as we can,” he says; “but if we see CPT symmetry breaking we have to throw out an awful lot of current physics.”

“I really do think it’s an incredibly impressive bit of experimental science,” says theoretical nuclear physicist Thomas Cohen of the University of Maryland, College Park; “The idea that they make thousands of particles each collision, find one of these in only a tiny fraction of these events, and yet they’re able to identify this in all this really complicated background – truly amazing!”

He notes, however, that “this is not the place to look for CPT violation…Making precision measurements on the positron mass versus the electron mass or that of the proton versus the antiproton is a much more promising direction simply because we have so many more of them that we can actually do precision measurements.”    

The research is described in Nature.

The post Heavy exotic antinucleus gives up no secrets about antimatter asymmetry appeared first on Physics World.

]]>
Research update Antihyperhydrogen-4 is observed by the Star Collaboration https://physicsworld.com/wp-content/uploads/2024/08/29-8-24-antihypernucleus.jpg newsletter1
Metamaterial gives induction heating a boost for industrial processing https://physicsworld.com/a/metamaterial-gives-induction-heating-a-boost-for-industrial-processing/ Wed, 28 Aug 2024 15:05:27 +0000 https://physicsworld.com/?p=116469 Technology could help heavy industry transition from fossil fuels

The post Metamaterial gives induction heating a boost for industrial processing appeared first on Physics World.

]]>
A thermochemical reactor powered entirely by electricity has been unveiled by Jonathan Fan and colleagues at Stanford University. The experimental reactor was used to convert carbon dioxide into carbon monoxide with close to 90% efficiency. This makes it a promising development in the campaign to reduce carbon dioxide emissions from industrial processes that usually rely on fossil fuels.

Industrial processes are responsible for a huge proportion of carbon emissions worldwide – roughly a third of carbon emissions in the US, for example. In part, this is because many industrial processes require huge amounts of heat, which can only be delivered by burning fossil fuels. To address this problem, a growing number of studies are exploring how combustion could be replaced with electrical sources of heat.

“There are a number of ways to use electricity to generate heat, such as through microwaves or plasma,” Fan explains. “In our research, we focus on induction heating, owing to its potential for supporting volumetric heating at high power levels, its ability to scale to large power levels and reactor volumes, and its strong safety record.”

Induction heating uses alternating magnetic fields to induce electric currents in a conductive material, generating heat via the electrical resistance of the material. It is used in a wide range of applications from domestic cooking to melting scrap metal. However, it has been difficult to use induction heating for complex industrial applications.
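
One way to see why both the drive frequency and the material’s resistivity matter is the electromagnetic skin depth, δ = √(ρ/(πfμ)), which sets how deeply the induced currents penetrate a conductor. The sketch below uses illustrative material parameters only (not values from the Stanford study): in a good metal at kilohertz frequencies the currents hug a millimetre-scale skin, whereas in a poorly conducting ceramic driven at megahertz frequencies they penetrate centimetres, which helps the induced currents – and hence the heating – reach the whole volume of a ceramic structure.

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Electromagnetic skin depth in metres: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * frequency * mu_r * MU_0))

# Illustrative values only, not the Stanford reactor's parameters:
# a metal (~1e-7 ohm*m) at 10 kHz versus a moderately conductive ceramic (~1e-2 ohm*m) at 5 MHz
print(f"metal, 10 kHz : {skin_depth(1e-7, 10e3) * 1e3:.2f} mm")
print(f"ceramic, 5 MHz: {skin_depth(1e-2, 5e6) * 1e3:.1f} mm")
```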

In its study, Fan’s team focused on using inductive heating in thermochemical reactors, where gases are transformed into valuable products through reactions with catalysts.

Onerous requirements

The heating requirements for these reactors are especially onerous, as Fan explains. “They need to produce heat in a 3D space; they need to feature exceptionally high heat transfer rates from the heat-absorbing material to the catalyst; and the energy efficiency of the process needs to be nearly 100%.”

To satisfy these requirements, the Stanford researchers created a new design for internal reactor structures called baffles. Conventional baffles are used to enhance heat transfer and mixing within a reactor, improving its reaction rates and yields.

In their design, Fan’s team reimagined these structures as integral components of the heating process itself. Their new baffles comprised a 3D lattice made from a conductive ceramic, which can be heated via magnetic induction at megahertz frequencies.

“The lattice structure can be modelled as a medium whose electrical conductivity depends on both the material composition of the ceramic and the geometry of the lattice,” Fan explains. “Therefore, it can be conceptualized as a metamaterial, whose physical properties can be tailored via their geometric structuring.”

Encouraging heat transfer

This innovative design addressed three key requirements of a thermochemical reactor. First, by occupying the entire reactor volume, it ensures uniform 3D heating. Second, the metamaterial’s large surface area encourages heat transfer between the lattice and the catalyst. Finally, the combination of the high induction frequency and low electrical conductivity in the lattice delivers high energy efficiency.

To demonstrate these advantages, Fan says, “we tailored the metamaterial reactor for the ‘reverse water gas shift’ reaction, which converts carbon dioxide into carbon monoxide – a useful chemical for the synthesis of sustainable fuels”.

To boost the efficiency of the conversion, the team used a carbonate-based catalyst to minimize unwanted side reactions. A silicon carbide foam lattice baffle and a novel megahertz-frequency power amplifier were also used.

As Fan explains, initial experiments with the reactor yielded very promising results. “These demonstrations indicate that our reactor operates with electricity to internal heat conversion efficiencies of nearly 90%,” he says.

The team hopes that its design offers a promising step towards electrically powered thermochemical reactors that are suited for a wide range of useful chemical processes.

“Our concept could not only decarbonize the powering of chemical reactors but also make them smaller and simpler,” Fan says. “We have also found that as our reactor concept is scaled up, its energy efficiency increases. These implications are important, as economics and ease of implementation will dictate how quickly decarbonized reactor technologies could translate to real-world practice.”

The research is described in Joule.

The post Metamaterial gives induction heating a boost for industrial processing appeared first on Physics World.

]]>
Research update Technology could help heavy industry transition from fossil fuels https://physicsworld.com/wp-content/uploads/2024/08/28-08-24-metamaterial-reactor.jpg newsletter1
‘Kink states’ regulate the flow of electrons in graphene https://physicsworld.com/a/kink-states-regulate-the-flow-of-electrons-in-graphene/ Wed, 28 Aug 2024 12:00:11 +0000 https://physicsworld.com/?p=116456 New valleytronics-based switch could have applications in quantum networks

The post ‘Kink states’ regulate the flow of electrons in graphene appeared first on Physics World.

]]>
A new type of switch sends electrons propagating in opposite directions along the same paths – without ever colliding with each other. The switch works by controlling the presence of so-called topological kink states in a material known as Bernal bilayer graphene, and its developers at Penn State University in the US say that it could lead to better ways of transmitting quantum information.

Bernal bilayer graphene consists of two atomically-thin sheets of carbon stacked on top of each other and shifted slightly. This arrangement gives rise to several unusual electronic behaviours. One such behaviour, known as the quantum valley Hall effect, gets its name from the dips or “valleys” that appear in graphs of an electron’s energy relative to its momentum. Because graphene’s conduction and valence bands meet at discrete points (known as Dirac points), it has two such valleys. In the quantum valley Hall effect, the electrons in these different valleys flow in opposite directions. Hence, by manipulating the population of the valleys, researchers can alter the flow of electrons through the material.
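
For context, the standard textbook dispersions near each Dirac point (K and K′) are

\[
E_{\text{monolayer}}(\mathbf{k}) \approx \pm \hbar v_{\mathrm{F}}\,|\mathbf{k}|,
\qquad
E_{\text{bilayer}}(\mathbf{k}) \approx \pm\sqrt{\left(\frac{\hbar^{2}k^{2}}{2m^{*}}\right)^{2} + \Delta^{2}},
\]

where k is measured from the Dirac point, v_F ≈ 10⁶ m/s is the Fermi velocity, m* is the bilayer’s effective mass and 2Δ is the gap that a perpendicular electric field (applied here with gates) opens in Bernal bilayer graphene. These are generic two-band approximations rather than results from the Penn State paper; the topological kink states discussed below run along lines where the sign of Δ reverses.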

This process of controlling the flow of electrons via their valley degree of freedom is termed “valleytronics” by analogy with spintronics, which uses the internal degree of freedom of electron spin to store and manipulate bits of information. For valleytronics to be effective, however, the materials the electrons flow through need to be of very high quality. This is because any atomic defects can produce intervalley backscattering, which causes electrons travelling in opposite directions to collide with each other.

A graphite/hBN global gate

Researchers led by Penn State physicist Jun Zhu have now succeeded in producing a device that is pristine enough to support such behaviour. They did this by incorporating a stack made from graphite and a two-dimensional material called hexagonal boron nitride (hBN) into their design. This stack, which acts as a global “gate” that allows electrons to flow through the device, is free of impurities, and team member Ke Huang explains that it was key to the team’s technical advance.

The principle behind the improvement is that while graphite is an excellent electrical conductor, hBN is an insulator. By combining the two materials, Zhu, Huang and colleagues created a structure known as a topological insulator – a material that conducts electricity very well along its edges or surfaces while acting as an insulator in its bulk. Within the edge states of such a topological insulator, electrons can only travel along one pathway. This means that, unlike in a normal conductor, they do not experience backscatter. This remarkable behaviour allows topological insulators to carry electrical current with near-zero dissipation.

In the present work, which is described in Science, the researchers confined electrons to special, topologically protected electrically conducting pathways known as kink states that formed by electrically gating the stack. By controlling the presence or absence of these states, they showed that they could regulate the flow of electrons in the system.

A quantized resistance value

“The amazing thing about our devices is that we can make electrons moving in opposite directions not collide with one another even though they share the same pathways,” Huang says. “This corresponds to the observation of a quantized resistance value, which is key to the potential application of the kink states as quantum wires to transmit quantum information.”

Importantly, this quantization of the kink states persists even when the researchers increased the temperature of the system from near absolute zero to 50 K. Zhu describes this as surprising because quantum states are fragile, and often only exist at temperatures of a few Kelvin. Operation at elevated temperatures will, of course, be important for real-world applications, she adds.

The new switch is the latest addition to a group of kink state-based quantum electronic devices the team has already built. These include valves, waveguides and beamsplitters. While the researchers admit that they have a long way to go before they can assemble these components into a fully functioning quantum interconnect system, they say their current set-up is potentially scalable and can already be programmed to direct current flow. They are now planning to study how electrons behave like coherent waves when travelling along the kink state pathways. “Maintaining quantum coherence is a key requirement for any quantum interconnect,” Zhu tells Physics World.

The post ‘Kink states’ regulate the flow of electrons in graphene appeared first on Physics World.

]]>
Research update New valleytronics-based switch could have applications in quantum networks https://physicsworld.com/wp-content/uploads/2024/08/graphene-web-206705884_Shutterstock_Inozemtsev-Konstantin.jpg newsletter1
A breezy tour of what gaseous materials do for us https://physicsworld.com/a/a-breezy-tour-of-what-gaseous-materials-do-for-us/ Wed, 28 Aug 2024 10:00:50 +0000 https://physicsworld.com/?p=116336 Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik

The post A breezy tour of what gaseous materials do for us appeared first on Physics World.

]]>
A row of gas lamps outside the Louvre in Paris

The first person to use gas for illumination was a French engineer by the name of Philippe Lebon. In 1801 his revolutionary system of methane pipes and jets lit up the Hôtel de Seignelay so brilliantly that ordinary Parisians paid three francs apiece just to marvel at it. Overnight guests may have been less enthusiastic. Although methane itself is colourless and odourless, Lebon’s process for extracting it left the gas heavily contaminated with hydrogen sulphide, which – as Mark Miodownik cheerfully reminds us in his latest book – is a chemical that “smells of farts”.

The often odorous and frequently dangerous world of gases is a fascinating subject for a popular-science book. It’s also a logical one for Miodownik, a materials researcher at University College London, UK, whose previous books were about solids and liquids. The first, Stuff Matters, was a huge critical and commercial success, winning the 2014 Royal Society Winton Prize for science books (and Physics World’s own Book of the Year award) on its way to becoming a New York Times bestseller. The second, Liquid, drew more muted praise, with some critics objecting to a narrative gimmick that shoehorned liquid-related facts into the story of a hypothetical transatlantic flight.

Miodownik writes about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations

Miodownik’s third book It’s a Gas avoids this artificial structure and is all the better for it. It also adopts a very loose definition of “gas”, which leaves Miodownik free to write about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations. The result is a lively, free-associating mixture of personal, scientific and historical anecdotes very reminiscent of Stuff Matters, though inevitably one that feels less exceptional than it did the first time around.

The chapter on breath shows how this mixture works. It begins with a story about the young Miodownik watching a brass band march past. Next, we get an explanation of how air travels through brass instruments. By the end of the chapter, Miodownik has moved on, via Air Jordan sneakers and much else, to pneumatic bicycle tyres and their surprising impact on English genetic diversity.

Though the connection might seem fanciful at first, it seems that after John Dunlop patented his air-filled rubber bicycle tyre in 1888, many people (especially women) were suddenly able to traverse bumpy roads cheaply, comfortably and without assistance. As their horizons expanded, their inclination to marry someone from the same parish plummeted: between 1887 and the early years of the 20th century, marriages of this nature dropped from 77% to 41% of the total.

Miodownik is not the first to make the link between bicycle tyres and longer-distance courtships. (He credits the geneticist Steve Jones for the insight, building on work by the 20th-century geographer P J Parry.) However, his decision to include the tale is a deft one, as it illustrates just how important gases and their associated technologies have been to human history.

Anaesthetics are another good example. Though medical professionals were scandalously slow to accept nitrous oxide, ether and chloroform, these beneficial gases eventually revolutionized surgery, saving millions of patients from the agony of their predecessors. Interestingly, criminals proved far less hide-bound than doctors, swiftly adopting chloroform as a way of subduing victims – though the ever-responsible Miodownik notes that this tactic seldom works as quickly as it does in the movies, and errors in dosage can be fatal.

Not every gas-related invention had such far-reaching effects. Inflatable mattresses never really caught on; as Miodownik observes, “beds were for sleeping and sex, and neither was enhanced by being unexpectedly launched into the air every time your partner made a move”.

The history of balloons is similarly chequered. Around the same time as Lebon was filling the Hôtel de Seignelay with aromas, an early balloonist, Sophie Blanchard, was appointed Napoleon’s “aeronaut of the official festivals”. Though Blanchard went on to hold a similar post under the restored King Louis XVIII, Miodownik notes that her favourite party trick – launching fireworks from a balloon filled with highly flammable and escape-prone hydrogen – eventually caught up with her. In 1819, when Blanchard was just 41, her firework-festooned craft crashed into the roof of a house and she fell to her death.

Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do

The lessons of this calamity were not learned. More than a century later, 35 passengers and crew on the hydrogen-filled Hindenburg airship (which included a smoking area among its many luxuries) met a similarly fiery end.

Occasional tragedies aside, Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do. He often opens chapters with stories from his actual childhood, and while a few of these (like the brass band) are merely cute, others are genuinely jaw-dropping. Some readers may recall that Miodownik began Stuff Matters by describing the time he got stabbed on the London Underground; while there is nothing quite so dramatic in It’s a Gas (and no spoilers in this review), he clearly had an eventful youth.

At times, it becomes almost a game to guess which gas these opening anecdotes will lead to. Though some readers may find the connections a little tenuous, Miodownik is a good enough writer to make his leaps of logic seem effortless even when they are noticeable. The result is a book as delightfully light as its subject matter, and a worthy conclusion to Miodownik’s informal phases-of-matter trilogy – although if he wants to write about plasmas next, I certainly won’t stop him.

  • 2024 Viking 304pp £22.00hb

The post A breezy tour of what gaseous materials do for us appeared first on Physics World.

]]>
Opinion and reviews Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Harris_gaslamp_feature.jpg newsletter
Free-space optical communications with FPGA-based instrumentation https://physicsworld.com/a/free-space-optical-communications-with-fpga-based-instrumentation/ Wed, 28 Aug 2024 09:03:13 +0000 https://physicsworld.com/?p=116418 Join the audience for a live webinar on 2 October 2024 sponsored by Liquid Instruments

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>

As the world becomes more connected by global communications networks, the field of free-space optical communications has grown as an alternative to traditional data transmission via radio frequencies. While optical communications setups deliver scalability and security advantages along with a smaller infrastructure footprint, they also bring distinct challenges, including attenuation, interference, and beam divergence.

During this presentation, Liquid Instruments will give an overview of the FPGA-based Moku platform, a reconfigurable suite of test and measurement instruments that provide a flexible and efficient approach to optical communications development. You’ll learn how to use the Moku Lock-in Amplifier and Time & Frequency Analyzer for both coherent and direct detection of optical signals, as well as how to frequency-stabilize lasers with the Laser Lock Box.

You’ll also see how to deploy these instruments simultaneously in Multi-instrument Mode for maximum versatility, and watch a live demo covering digital and analog modulation methods such as phase-shift keying (PSK) and pulse-position modulation (PPM).
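
For readers unfamiliar with those two schemes, the short Python sketch below (an illustration written for this article, not part of the Moku software; the function names and parameter values are invented) shows how a bit stream maps onto binary phase-shift-keyed samples and onto two-slot pulse-position-modulated frames.

    import numpy as np

    def bpsk_samples(bits, samples_per_symbol=8):
        # BPSK: each bit sets the carrier phase to 0 (bit 0) or pi (bit 1)
        t = np.arange(samples_per_symbol)
        return np.concatenate([np.cos(2 * np.pi * t / samples_per_symbol + np.pi * b) for b in bits])

    def ppm_frames(bits, slots_per_symbol=2):
        # 2-PPM: the bit value selects which time slot in each frame carries the optical pulse
        frames = np.zeros((len(bits), slots_per_symbol))
        frames[np.arange(len(bits)), list(bits)] = 1.0
        return frames.ravel()

    bits = [1, 0, 1, 1, 0]
    print(bpsk_samples(bits)[:8])   # first modulated symbol
    print(ppm_frames(bits))         # pulse position encodes each bit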

A Q&A session will follow the demonstration.

Jason Ball is an engineer at Liquid Instruments, where he focuses on applications in quantum physics, particularly quantum optics, sensing, and computing. He holds a PhD in physics from the Okinawa Institute of Science and Technology and has a comprehensive background in both research and industry, with hands-on experience in quantum computing, spin resonance, microwave/RF experimental techniques, and low-temperature systems.

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 2 October 2024 sponsored by Liquid Instruments https://physicsworld.com/wp-content/uploads/2024/08/2024-10-02-webinar-image.jpg
Management insights catalyse scientific success https://physicsworld.com/a/management-insights-catalyse-scientific-success/ Wed, 28 Aug 2024 09:00:38 +0000 https://physicsworld.com/?p=114112 Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Most scientific learning is focused on gaining knowledge, both to understand fundamental concepts and to master the intricacies of experimental tools and techniques. But even the most qualified scientists and engineers need other skills to build a successful career, whether they choose to continue in academia, pursue different pathways in the industrial sector, or exploit their technical prowess to create a new commercial enterprise.

“Scientists and engineers can really benefit from devoting just a small amount of time, in the broad scope of their overall professional development, to understanding and implementing some of the ideas from management science,” says Peter Hirst, who originally trained as a physicist at the University of St Andrews in the UK and now leads the executive education programme at MIT’s Sloan School of Management. “Whether you’re running a lab with just a few post-docs, or you have a leadership role in a large organization, a few simple tools can help to drive innovation and creativity while also making your team more effective and efficient.”

MIT Sloan Executive Education, part of the MIT Sloan School of Management, the business school of the Massachusetts Institute of Technology in Cambridge, US, offers more than 100 short courses and programmes covering all aspects of business innovation, personal skills development, and organizational management, many of which can be accessed in different online formats. Delivered by expert faculty who can share their experiences and insights from their own research work, they are designed to introduce frameworks and tools that enable participants to apply key concepts from management science to real-world situations.

Research groups are really a type of enterprise, with people working together to produce clearly defined outputs

Peter Hirst, MIT Sloan School of Management

One obvious example is the process of transforming a novel lab-based technology into a compelling commercial proposition. “Lots of scientists develop intellectual property during their research work, but may not be aware of the opportunities for commercialization,” says Hirst. “Even here at MIT, which is known for its culture of innovation, many researchers don’t realize that educational support is available to help them to understand what’s needed to transfer a new technology into a viable product, or even to become more aware of what might be possible.”

For academic researchers who want to remain focused on the science, Hirst believes that management tools originally developed in the business sector can offer valuable support to help build more effective teams and nurture the talents of diverse individuals. “Research groups are really a type of enterprise, with people working together to produce clearly defined outputs,” he says. “When I was working as a scientist, I really didn’t think about the human system that was doing that work, but that’s a really important dimension that can contribute to the success or failure of the whole enterprise.”

Modern science also depends on forging successful collaborations between research groups, or between academia and industry, while researchers are under mounting pressure to demonstrate the impact of their work – whether for scientific progress or commercial benefit. “Even if you’re working in academia, it’s really important to understand the contribution that your work is making to the whole value chain,” Hirst comments. “It provides context that helps to guide the work, but it’s also vital for sustainably securing the resources that are needed to pursue the science.”

The training offered by MIT Sloan takes different formats, including short courses and longer programmes that take a deeper dive into key topics. In each case, however, the faculty designs tasks, simulations and projects that allow participants to gain a deeper understanding of key concepts and how they might be exploited in their own workplace. “People believe by seeing, but they learn by doing,” says Hirst. “Our guiding philosophy is that the learning is always more effective if it can be done in the context of real work, real problems, and real challenges.”

Many of the courses are taught on the MIT campus, offering the opportunity for delegates to discuss key ideas, work together on training tasks, and network with people who have different backgrounds and experience. For those unable to attend in person, the same ethos extends to the two types of online training available through the executive education programme. One stream, developed in response to the Covid pandemic, offers live tutoring through the Zoom platform, while the other provides access to pre-recorded digital programmes that participants complete within a set time window. Some of these self-paced courses adopt a sprint format inspired by the concepts of agile product development, enabling participants to break down a complex challenge or opportunity into a series of smaller questions that can be tackled to reach a more effective solution.

“It’s not just sitting and watching, people really have the opportunity to work with the material and apply what they are learning,” explains Hirst. “In each case we have worked hard with the faculty to figure out how to achieve the same outcomes through a different type of experience, and it’s been great to see how compelling that can be.”

Evidence that the approach is working can be found in the retention rate for the self-paced courses, with more than 90% of participants completing all the modules and assignments. The Zoom-based programmes also remain popular amid the more general post-pandemic return to in-person training, providing greater flexibility for learners in different parts of the world. “We have tried to find the sweet spot between effectiveness and accessibility, and many people who can’t come to campus have told us they find these courses valuable and impactful,” says Hirst. “We have put the choice in the hands of the learners.”

Plenty of scientists and engineers have already taken the opportunity to develop their management capabilities through the courses offered by MIT Sloan, particularly those who have been thrown into leadership positions within a rapidly growing organization. “Perhaps because we’re at MIT, we are already seeing scientists and engineers who recognize the value of engaging with ideas and tools that some people might dismiss as corporate nonsense,” says Hirst. “Generally speaking, they have really great experiences and discover new approaches that they can use in their labs and businesses to improve their own work and that of their teams and organizations.”

For those who may not yet be ready to make the leap into developing their personal management style, Hirst advocates courses that analyse the dynamics of an organization – whether it’s a start-up company, a manufacturing business or a research collaboration. The central idea here is to apply concepts from systems engineering to organizations, looking at how work gets done by a human system, in order to improve overall productivity and performance.

One case study that Hirst cites from the biomedical sector is the Broad Institute, a research organization with links to MIT and Harvard that has developed a platform for generating human genomic information. “Originally they were taking months to extract the genomic data from a sample, but they have reduced that to a week by implementing some fairly simple ideas to manage their operational processes,” he says. “It’s a great example of a scientific organization that has used systems-based thinking to transform their business.”

Others may benefit from courses that focus on technology development and product strategy, or an entrepreneurship development programme that immerses participants in the process of creating a successful business from a novel idea or technology. “That programme can be transformational for many people,” Hirst observes. “Most people who come into it with a background in science and engineering are focused on demonstrating the technical superiority of their solution, but one of the big lessons is the importance of understanding the needs of the customer and the value they would derive from implementing the technology.”

For those who are keen to develop their skills in one particular area, MIT Sloan also offers a series of Executive Certificates that enable learners to choose four complementary courses focusing on topics such as strategy and innovation, or technology and operations. Once all four courses in the track have been completed – which can be achieved in just a few weeks as well as over several months or years – participants are awarded an Executive Certificate to demonstrate the commitment they have made to their own personal development.

More information can be found in a digital brochure that provides details of all the courses available through MIT Sloan, while the website for the executive education programme provides an easy way to search for relevant courses and programmes. Hirst also recommends reading the feedback and reviews from previous participants, which appear alongside each course description on the website. “Prospective learners find it really useful to see how people in similar situations, or with similar needs, have described their experience.”

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Analysis Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams https://physicsworld.com/wp-content/uploads/2024/04/Business-woman-sitting-front-laptop-1382663132-shutterstock-SFIO-CRACHO-web.jpg newsletter
Sunflowers ‘dance’ together to share sunlight https://physicsworld.com/a/sunflowers-dance-together-to-share-sunlight/ Tue, 27 Aug 2024 14:34:41 +0000 https://physicsworld.com/?p=116449 Zigzag patterns created by circular motion of growing stems

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Yasmine Meroz

Sunflowers in a field can co-ordinate the circular motions of their growing stems to minimize the amount of shade each plant experiences – a study done in the US and Israel has revealed. By doing a combination of experiments and simulations, a team led by Yasmine Meroz at Tel Aviv University discovered that seemingly random movements within groups of plants can lead to self-organizing patterns that optimize growing conditions.

Unlike animal motion, plant motion is usually related to growth – which is an irreversible process that defines a plant’s morphology. One movement frequently observed in plants is called circumnutation, which describes repeating, circular motions at the tips of growing plant stems.

“Charles Darwin and his son, Francis, already identified circumnutations in their book, The Power of Movement in Plants, in 1880,” Meroz explains. “While they documented these movements in a number of species, it was not clear whether these have a function. It is only in recent years that some research has started to identify possible roles of circumnutations, such as the ability of roots to circumvent obstacles.”

Understanding self-organization

Circumnutation was not the initial focus of the team’s study. Instead, they sought a deeper understanding of self-organization. This is a process whereby a system that starts out in a disorderly state can gain order through local interactions between its individual components.

In nature, self-organization has been widely studied in groups of animals, including fish, birds, and insects. The coordinated movements of many individuals help animals source food, evade predators, and conserve energy.

But in 2017 a fascinating example of self-organization in plants was discovered by a team of researchers in Argentina. While observing a field of sunflowers growing in dense rows, the team found that the plants’ stems self-organized into zigzag patterns as they grew. This arrangement minimized the shade the sunflowers cast on one another, ensuring each plant received the maximum possible amount of sunlight.

Meroz’s team has now studied this phenomenon in a controlled laboratory environment. “Unlike previous work, we tracked the movement of sunflower crowns during the whole experiment,” Meroz says. “This is when we found that sunflowers move a lot via circumnutations, and we asked ourselves whether these movements might play a role in the self-organization process.”

To inform the analysis, Meroz’s team considered two key ingredients of self-organization. The first involved local interactions between individual plants – in this case, their ability to adapt their growth to avoid shading each other.

The second ingredient was the random, noisy motions that allow self-organized systems to explore a variety of possible states. This randomness enables plants to adapt to short-term environmental changes while maintaining stability in their growth patterns.

Tweaking noise

For their sunflowers, the researchers predicted that these random motions could be provided by the circumnutations first described by Charles and Francis Darwin. To investigate this idea, they ran simulations of groups of sunflowers based closely on the movements they had observed in the lab. In these simulations, they tweaked the amount of noise generated by circumnutation with a level of control that is not yet possible in real-world experiments.
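
To make those two ingredients concrete, here is a minimal, illustrative simulation (a sketch written for this article, not the team’s code, with all parameter values invented): each plant’s crown can lean sideways, circumnutation is modelled as random kicks, and a shade-avoidance rule pushes neighbouring crowns apart. With a moderate noise level the row typically settles into the alternating, zigzag-like arrangement described above.

    import numpy as np

    rng = np.random.default_rng(0)
    n_plants, n_steps = 12, 5000
    spacing = 1.0       # fixed distance between stems along the row
    noise = 0.02        # circumnutation modelled as random sideways kicks
    repulsion = 0.02    # shade-avoidance push away from a neighbouring crown
    stiffness = 0.005   # weak tendency of each stem to return upright

    y = rng.normal(0.0, 0.01, n_plants)   # sideways lean of each crown, perpendicular to the row

    for _ in range(n_steps):
        push = -stiffness * y
        for i in range(n_plants):
            for j in (i - 1, i + 1):
                if 0 <= j < n_plants:
                    dy = y[i] - y[j]
                    dist = np.hypot(spacing, dy)
                    # lean away from the neighbouring crown; the push is stronger when crowns are closer
                    push[i] += repulsion * np.sign(dy) * np.exp(-dist / spacing)
        y += push + rng.normal(0.0, noise, n_plants)

    print(np.sign(y))   # a mostly alternating +1/-1 pattern indicates a zigzag row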

“By comparing what we saw in the group experiments with our simulation data, we figured out the best balance of these factors,” explains Meroz’s colleague, Orit Peleg at the University of Colorado Boulder. “We also confirmed that real plants balance these factors in a way that leads to near-optimal minimization of shading.”

As expected, the results confirmed that the random movements of individual sunflowers play a vital role in minimizing the amount of shading experienced by each plant.

Peleg believes that their discovery has fascinating implications for our understanding of how plants behave. “It’s a bit surprising because we don’t usually think of random movement as having a purpose,” she says. “Yet, it’s vital for minimizing shading. This finding prompts us to view plants as active matter, with unique constraints imposed by their anchoring and growth-movement coupling.”

The research is described in Physical Review X.

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Research update Zigzag patterns created by circular motion of growing stems https://physicsworld.com/wp-content/uploads/2024/08/27-8-24-sunflower-1143037052-Shutterstock_Mykhailo-Baidala.jpg newsletter1
The most precise timekeeping device ever built https://physicsworld.com/a/the-most-precise-timekeeping-device-ever-built/ Tue, 27 Aug 2024 12:00:18 +0000 https://physicsworld.com/?p=116432 Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
If you want to make a clock, all you need is an oscillation – preferably one that is stable in frequency and precisely determined. Many systems will fit the bill, from the Earth’s rotation to pendulums and crystal oscillators. But if you want the world’s most precise clock, you’ll need to go to the US state of Colorado, where researchers from JILA and the University of Colorado, Boulder have measured the frequency of an optical lattice clock (OLC) with a record-low systematic uncertainty of 8.1 × 10⁻¹⁹ – equivalent to a fraction of a second throughout the age of the universe.
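
A quick back-of-the-envelope check (using rounded constants assumed for this article, not figures from the JILA team) shows what a fractional uncertainty of 8.1 × 10⁻¹⁹ means in terms of accumulated time:

    # Timing error implied by a fractional frequency uncertainty of 8.1e-19
    # accumulated over the age of the universe (rounded constants, illustrative only)
    frac_uncertainty = 8.1e-19
    seconds_per_year = 365.25 * 24 * 3600           # ~3.16e7 s
    age_of_universe = 13.8e9 * seconds_per_year     # ~4.4e17 s
    print(frac_uncertainty * age_of_universe, "s")  # ~0.35 s, i.e. a fraction of a second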

OLCs are atomic clocks that mark the passage of time using an electron that oscillates between two energy levels (the ground state ¹S₀ and clock state ³P₀) in an atom such as strontium. The high frequency and narrow linewidth of this atomic transition make these clocks orders of magnitude more precise than the atomic clocks used to redefine the second in 1968, which were based on a microwave transition in caesium atoms.

The high precision of OLCs gives them the potential to unlock technologies that can be used to sense quantities such as distances, the Earth’s gravitational field and even atomic properties such as the fine structure constant at extremely small scales. To achieve this precision, however, they must be isolated from external effects that can cause them to “tick” irregularly. This is why the atoms in an OLC are trapped in a lattice formed by laser beams and confined within a vacuum chamber.

An OLC that is isolated entirely from its environment would oscillate at the constant, natural frequency of the atomic transition, with an uncertainty of 0 Hz/Hz. In other words, its frequency would not change. However, in the real world, temperature, magnetic and electric fields, and even the collisional motion of the atoms in the lattice all influence the clock’s oscillations. These parameters therefore need to be very well controlled for the clock to operate at maximum precision.

Controlling blackbody radiation

According to Alexander Aeppli, a PhD student at JILA who was involved in setting the new record, the most detrimental environmental effect on their OLC is blackbody radiation (BBR). All thermal objects – light bulbs, human bodies, the vacuum chamber the atoms are trapped in – emit such radiation, and the electric field of this radiation couples to the atom’s energy levels. This causes a systematic shift that translates to an uncertainty in the clock’s frequency.

To minimize the effects of BBR, Aeppli and colleagues enclosed their entire system, including the vacuum chamber and optics for creating the clock, within a temperature-controlled box equipped with numerous temperature sensors. By running temperature-stabilized liquid around different parts of their experimental apparatus, they stabilized the air temperature and controlled the vacuum system temperature.

This didn’t completely solve the problem, though. The BBR shift is the sum of a static component that scales with the fourth power of temperature and a dynamic component that scales with higher powers. Even after limiting the lab’s temperature fluctuations to a few millikelvin per day, the team still needed to carry out a systematic evaluation of the shift due to the dynamic component.
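
Because the static part of the shift scales as T⁴, its relative uncertainty is four times the relative temperature uncertainty. The sketch below propagates a few-millikelvin temperature uncertainty through that scaling; the fractional BBR shift of order 5 × 10⁻¹⁵ is a typical literature value for strontium near room temperature, assumed here rather than taken from the paper.

    # Static BBR shift scales as T^4, so d(shift)/shift = 4 * dT / T
    T = 300.0               # operating temperature in kelvin
    dT = 0.003              # assumed temperature uncertainty of a few millikelvin
    frac_bbr_shift = 5e-15  # representative fractional BBR shift for Sr near room temperature
    frac_clock_uncertainty = 4 * (dT / T) * frac_bbr_shift
    print(f"{frac_clock_uncertainty:.1e}")  # ~2e-19: why millikelvin-level control matters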

For this, the JILA-Boulder researchers turned to a 2013 study in which physicists in the US and Russia found a correlation between the uncertainty of the BBR shift and the lifetime of an electron occupying a higher-energy state (³D₁) in strontium atoms. By measuring the lifetime of this ³D₁ state, the team was able to calculate an uncertainty of 7.3 × 10⁻¹⁹ in the BBR shift.

To fully understand the atoms’ response to BBR, Aeppli explains that they also needed to measure the strength of transitions from the clock states. “The dominant transition that is perturbed by BBR radiation is at a relatively long wavelength,” he says. “This wavelength is longer than the spacing between the atoms, meaning that atoms can behave collectively, modifying the physics of this interaction. It took us quite some time to characterize this effect and involved almost a year of measurements to reduce its uncertainty.”

Photo of the vacuum chamber bathed in purple-blue light

Other environmental effects

BBR wasn’t the only environmental effect that needed systematic study. The in-vacuum mirrors used to create the lattice tend to accumulate electric charges, and the resulting stray electric fields produce a systematic DC Stark shift that changes the clock transition frequency. By shielding the mirrors with a copper structure, the researchers reduced these DC Stark shifts to below the 1 × 10⁻¹⁹ uncertainty level.

OLCs are also sensitive to magnetic fields. This is due to the Zeeman effect, which shifts the energy levels of an atom by different amounts in the presence of such fields. The researchers chose the least magnetically sensitive sub-states to operate their clock, but that still leaves a weaker second-order Zeeman shift for them to calibrate. In the latest work, they reached an uncertainty in this second-order Zeeman shift of 0.1 × 10⁻¹⁸, which is a factor of two smaller than previous measurements.

Even the lattice beams themselves cause an unwanted shift in the atoms’ transition frequency. This is known as the light or AC Stark shift, and it is due to the power of the laser beam. The researchers minimized this shift by ramping down the beam power just before starting the clock, but even at such low trapping powers, atoms in the different lattice sites can still interact, and atoms at the same site can collide. These events lead to a tunnelling and a density shift, respectively. While both are rather weak, the team nevertheless investigated their effect on the clock’s uncertainty and constrained them to below the 10⁻¹⁹ level.

How low can you go?

In early 2013, JILA scientists reported a then-record-low systematic uncertainty in their strontium OLC of 6.4 × 10⁻¹⁸. A year later, they managed to reduce this uncertainty by a factor of three, to 2.1 × 10⁻¹⁸. Ten years on, however, progress seems to have slowed: the latest uncertainty record improves on this value by a mere factor of two. Is there an intrinsic lower bound?

“The largest source of systematic uncertainty continues to be the BBR shift since it goes as temperature to the fourth power,” Aeppli says. “Even a small reduction in temperature can significantly reduce the shift uncertainty.”

To go below the 1 × 10⁻¹⁹ level, he explains that it would be advantageous to cool the system to cryogenic temperatures. Indeed, many OLC research groups are using this approach for their next-generation systems. Ultimately, though, while progress on optical clocks might not be quite as fast as it was 20 years ago, Aeppli says there is no obvious “floor”, no fundamental limit to the systematic uncertainty of optical lattice clocks. “There are plenty of clever people working on pushing uncertainty as low as possible,” he says.

The JILA-Boulder team reports its work in Physical Review Letters.

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
Analysis Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it https://physicsworld.com/wp-content/uploads/2024/08/27-08-2024-Precise-optical-clock_Main-image.jpg newsletter
Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist https://physicsworld.com/a/abdus-salam-honouring-the-first-muslim-nobel-prize-winning-scientist/ Tue, 27 Aug 2024 10:00:13 +0000 https://physicsworld.com/?p=116317 Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>

A child prodigy born in a humble village in British India on 29 January 1926, Abdus Salam became one of the world’s greatest theorists who tackled some of the most fundamental questions in physics. He shared the 1979 Nobel Prize for Physics with Sheldon Glashow and Steven Weinberg for unifying the weak and electromagnetic interactions. In doing so, Salam became the first Muslim scholar to win a science-related Nobel prize – and is so far the only Pakistani to achieve that feat.

After moving to the UK in 1946 just before the partition of India, Salam gained a double-first in mathematics and physics from the University of Cambridge and later did a PhD there in quantum electrodynamics. Following a couple of years back home in Pakistan, Salam returned to Cambridge, before spending the bulk of his career at Imperial College, London. He died aged 70 on 21 November 1996, his later life cruelly ravaged by a neurodegenerative disease.

Yet to many people, Salam’s life and contributions to science are not so well known despite his founding of the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, exactly 60 years ago. Upon joining Imperial, he also became the first academic from Asia to hold a full professorship at a UK university. Keen to put Salam in the spotlight ahead of the centenary of his birth are Claudia de Rham, a theoretical physicist at Imperial, and quantum-optics researcher Ian Walmsley, who is currently provost of the college.

De Rham and Walmsley recently appeared on the Physics World Weekly podcast. An edited version of our conversation appears below.

How would you summarize Abdus Salam’s contributions to science?

CdR: Salam was one of the founders of modern physics. He pioneered the study of symmetries and unification, which helped contribute to the formulation of the Standard Model of particle physics. In 1967 he incorporated the Higgs mechanism – co-discovered by his Imperial colleague Tom Kibble – into electroweak theory, which unifies the electromagnetic and weak forces. It changed the way we see the world by underlining the importance of symmetry and by showing how some forces – which may appear different – are actually linked.

This breakthrough led him to win the 1979 Nobel Prize for Physics with Steven Weinberg and Sheldon Glashow, making him the first – in fact, so far, the only – Nobel laureate from Pakistan. Salam was also the first person from the Islamic world to win a Nobel prize in science and the most recent person from Imperial College to do so, which makes us very proud of him.

How did his connection to Imperial College come about?

CdR: After studying at Cambridge, he went back to Pakistan but realized that the scientific opportunities there were limited. So he returned to Cambridge for a while, before being appointed a professor of applied mathematics at Imperial in 1957. That made him the first Asian academic to hold a professorship at any UK university. He then moved to the physics department at Imperial and stayed at the college for almost 40 years – for the rest of his life.

Large photo of Abdus Salam at the entrance to the main library at Imperial College

For Salam, Imperial was his scientific home. He founded the theoretical physics group here, doing the work on quantum electrodynamics and quantum field theory that led to his Nobel prize. But he also did foundational work on renormalization, grand unification, supersymmetry and so on, making Imperial one of the world’s leading centres for fundamental physics research. Many of his students, like Michael Duff and Ray Rivers, also had an incredible impact in physics, paving the way for how we do quantum field theory today.

What was Salam like as a person?

IW: I had the privilege of meeting Salam when I was an undergraduate here in Imperial’s physics department in 1977. In the initial gathering of new students, he gave a short talk on his work and that of the theoretical physics group and the wider department. I didn’t understand much of what he said, but Salam’s presence was really important for motivating young people to think about – and take on – the hard problems and to get a sense of the kind of problems he was tackling. His enthusiasm was really fantastic for a young student like myself.

When he won the Nobel prize in 1979, I was by then a second-year student and there were a lot of big celebrations and parties in the department. There were a number of other luminaries at Imperial like Kibble, who’d made lots of important contributions. In fact, I think Salam’s group was probably the leading theoretical particle group in the UK and among the best in the world. He set it up and it was fantastic for the department to have someone of his calibre: it was a real boost.

How would you describe Salam’s approach to science?

CdR: Salam thought about science on many different levels. There wasn’t just the unification within science itself, but he saw science as a unifying force. As he showed when he set up the theoretical physics group at Imperial and, later, the ICTP in Trieste, he saw science as something that could bring people from all over the world together.

We’re used to that kind of approach today. But at the time, driving collaboration across the world was revolutionary. Salam wasn’t just an incredible scientist, but an incredible human being. He was eager to champion diversity – recognizing that it’s the best thing not just for science but for humanity too. Salam was ahead of his time in realizing the unifying power of science and being able to foster it throughout the world.

What impact has the ICTP had over the last 60 years?

CdR: The goal of the ICTP has been to combat the isolation and lack of resources that people in some parts of the world, especially the global south, were facing. It’s had a huge impact over the last 60 years and has now grown into a network of five institutions spread over four continents, all of which are devoted to advancing international collaboration and bringing scientific expertise to the non-western world. It hosts around 6000 scientists every year, about 50% of whom are from the global south.

How well known do you think Salam is around the world?

IW: Is he well known in the physics community globally? Absolutely. I also think he is well regarded and known across the Muslim community. But is he well known to the general public as one of the UK’s greatest adopted scientists? Probably not. And I think that’s a shame because his skills as a pedagogue and his concern for people as a whole – and for science as a motivating force – are really important messages and things he really championed.

What activities has Imperial got planned for the centenary of Salam’s birth?

CdR: We want to use the centenary not only to promote and celebrate excellence in fundamental science but also to engage with people from the global south. In fact, we already had a 98th birthday celebration on campus earlier this year, where we renamed the Imperial Central Library, which is now called the Abdus Salam Library. Then there were public talks by various physicists, including the ICTP director Atish Dabholkar and Tasneem Husain, who is Pakistan’s first female string theorist.

5 people stood in front of the Abdus Salam Library

We also held an exhibition here on campus about many aspects of Salam’s life for school children all around London to come and visit. It’s now moved to a permanent virtual home online. And we held an essay contest for school children from Pakistan to see how Salam has inspired them, selecting a few to go online. We also had a special documentary filmed about Salam, called “A unifying force”.

What impact do you think those events have had?

IW: It was really great to name a building after him, especially as it’s the library where students congregate all the time. There’s a giant display on the wall outside that describes him and has a great picture of Salam. You can see it even without entering the library, which is great because you often have families taking their children and showing them the picture and reading the narrative. It’ll spread his fame a bit more, which is really important and really lovely.

CdR: One thing that was clear in the build-up to the event in January was just how much his life story resonates with people at absolutely every level. No matter your background or whether you’re a scientist or not, I think Salam’s life awakens the scientist in all of us – he connects with people. But as the centenary of his birth draws closer, we want to build on those initiatives. Fundamental, curiosity-driven research is a way to make connections with the global south so we’re very much looking forward to an even bigger celebration for his 100th birthday in 2026.

  • A full version of this interview can be heard on the 8 August 2024 episode of the Physics World Weekly podcast.

Abdus Salam: driven to success

Abdus Salam

Abdus Salam, like all geniuses, was not a straightforward character. That much is made clear in the 2018 documentary movie Salam: the First ****** Nobel Laureate directed by Anand Kamalakar and produced by Zakir Thaver and Omar Vandal. Containing interviews with Salam’s friends, family members and former colleagues, Salam is variously described as being “charismatic”, “humane”, “difficult”, “impatient”, “sensitive”, “gorgeous”, “bright”, “dismissive” and “charming”.

The film also wonders why, despite being the first Nobel-prize winner from Pakistan, Salam is relatively poorly known and unrecognized in his homeland. The movie argues that this was down to his religious beliefs. Most Pakistanis are Sunnis but Salam was an Ahmadi, part of a minority Islamic movement. Opposition in Pakistan to the Ahmadis even led to its parliament declaring them non-Muslims in 1974, forbidden from professing their creed in public or even worshipping in their own mosques.

Those edicts, which led to Salam’s religious beliefs being re-awakened, also saw him effectively being ignored by Pakistan (hence the title of the movie). However, Salam was throughout his life keen to support scientists from less wealthy nations, such as his own, which is why he founded the International Centre for Theoretical Physics (ICTP) in Trieste in 1964.

Celebrating its 60th anniversary this year, the ICTP now has 45 permanent research staff and brings together more than 6000 leading and early-career scientists from over 150 nations to attend workshops, conferences and scientific meetings. It also has international outposts in Brazil, China, Mexico and Rwanda, as well as eight “affiliated centres” – institutes or university departments with which the ICTP has formal collaborations.

Matin Durrani

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>
Interview Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Durrani-Abdus-Salam-featured.jpg
3D printing creates strong, stretchy hydrogels that stick to tissue https://physicsworld.com/a/3d-printing-creates-strong-stretchy-hydrogels-that-stick-to-tissue/ Mon, 26 Aug 2024 10:00:41 +0000 https://physicsworld.com/?p=116440 A new 3D printing method fabricates entangled hydrogels for medical applications

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
A new method for 3D printing, described in Science, makes inroads into hydrogel-based adhesives for use in medicine.

3D printers, which deposit individual layers of a variety of materials, enable researchers to create complex shapes and structures. Medical applications often require strong and stretchable biomaterials that also stick to moving tissues, such as the beating human heart or tough cartilage covering the surfaces of bones at a joint.

Many researchers are pursuing 3D printed tissues, organs and implants created using biomaterials called hydrogels, which are made from networks of crosslinked polymer chains. While significant progress has been made in the field of fabricated hydrogels, traditional 3D printed hydrogels may break when stretched or crack under pressure. Others are too stiff to sculpt around deformable tissues.

Researchers at the University of Colorado Boulder, in collaboration with the University of Pennsylvania and the National Institutes of Standards and Technology (NIST), realized that they could incorporate intertwined chains of molecules to make 3D printed hydrogels stronger and more elastic – and possibly even allow them to stick to wet tissue. The method, known as CLEAR, sets an object’s shape using spatial light illumination (photopolymerization) while a complementary redox reaction (dark polymerization) gradually yields a high concentration of entangled polymer chains.

To their knowledge, the researchers say, this is the first time that light and dark polymerization have been combined simultaneously to enhance the properties of biomaterials fabricated using digital light processing methods. No special equipment is needed – CLEAR relies on conventional fabrication methods, with some tweaks in processing.

“This was developed by a graduate student in my group, Abhishek Dhand, and research associate Matt Davidson, who were looking at the literature on entangled polymer networks. In most of these cases, the entangled networks that form hydrogels with high levels of certain material properties…are made with very slow reactions,” explains Jason Burdick from CU-Boulder’s BioFrontiers Institute. “This is not compatible with [digital light processing], where each layer is reacted through short periods of light. The combination of the traditional [digital light processing] with light and the slow redox dark polymerization overcomes this.”

Experiments confirmed that hydrogels produced with CLEAR were fourfold to sevenfold tougher than hydrogels produced with conventional digital light processing methods for 3D printing. The CLEAR-fabricated hydrogels also conformed and stuck to animal tissues and organs.

“We illustrated in the paper the application of hydrogels printed with CLEAR as tissue adhesives, as others had previously defined material toughness as an important material property in adhesives. Through CLEAR, we can then process these adhesives into any structures, such as porous lattices or introduce spatial adhesion that may be of interest for biomedical applications,” Burdick says. “What is also interesting is that CLEAR can be used with other types of materials, such as elastomers, and we believe that it can be used across broad manufacturing methods.”

CLEAR could also have environmentally friendly implications for manufacturing and research, the researchers suggest, by eliminating the need for additional light or heat energy to harden parts. The researchers have filed for a provisional patent and will be conducting additional studies to better understand how tissues react to the printed hydrogels.

“Our work so far was mainly proof-of-concept of the method and showing a range of applications,” says Burdick. “The next step is to identify those applications where CLEAR can make an impact and then further explore those topics, whether this is specific to biomedicine or more broadly beyond this.”

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
Research update A new 3D printing method fabricates entangled hydrogels for medical applications https://physicsworld.com/wp-content/uploads/2024/08/26-08-24-3D-Printer-Matt-Davidson.jpg newsletter1
Drowsiness-detecting earbuds could help drivers stay safe at the wheel https://physicsworld.com/a/drowsiness-detecting-earbuds-could-help-drivers-stay-safe-at-the-wheel/ Thu, 22 Aug 2024 15:00:41 +0000 https://physicsworld.com/?p=116407 In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Drowsiness plays a major role in traffic crashes, injuries and deaths, and is considered the most critical hazard in construction and mining. A wearable device that can monitor fatigue could help protect drivers, pilots and machine operators from the life-threatening dangers of fatigue.

With this aim, researchers at UC Berkeley are developing techniques to detect signs of drowsiness in the brain, using a pair of prototype earbuds to perform electroencephalography (EEG) and other physiological measurements. Describing the device in Nature Communications, the team reports successful tests on volunteers.

“Wireless earbuds are something we already wear all the time,” says senior author Rikky Muller in a press statement. “That’s what makes ear EEG such a compelling approach to wearables. It doesn’t require anything extra. I was inspired when I bought my first pair of Apple’s AirPods in 2017. I immediately thought, ‘What an amazing platform for neural recording’.”

Improved design

EEG uses multiple electrodes placed on the scalp to non-invasively monitor the brain’s electrical activity – such as the alpha waves that increase when a person is relaxed or sleepy. Researchers have also demonstrated that multi-channel EEG signals can be recorded from inside the ear canal, using in-ear sensors and electrodes.

Existing in-ear devices, however, mostly use wet electrodes (which necessitate skin-preparation and hydrogel on the electrodes), contain bulky electronics and require customized earpieces for each user. Instead, Muller and colleagues aimed to create an in-ear EEG with long-lifespan dry electrodes, wireless electronics and a generic earpiece design.

In-ear EEG device

The researchers developed a fabrication process based on 3D printing of a polymer earpiece body and electrodes. They then plated the electrodes with copper, nickel and gold, creating electrodes that remain stable over months of use. To ensure comfort for all users, they designed small, medium and large earpieces (with slightly different electrode sizes to maximize electrode surface area).

The final medium-sized earpiece contains four 60 mm² in-ear electrodes, which apply outward pressure to lower the electrode–skin impedance and improve mechanical stability, plus two 3 cm² out-ear electrodes. Signals from the earpiece are read out and transmitted to a base station by a low-power wireless neural recording platform (the WANDmini) affixed to a headband.

Drowsiness study

To assess the earbuds’ performance, the team recorded 35 h of electrophysiological data from nine volunteers. Subjects wore two earpieces and did not prepare their skin beforehand or apply hydrogel to the electrodes. As well as EEG, the device measured signals such as heart beats (using electrocardiography) and eye movements (via electrooculography), collectively known as ExG.

To induce drowsiness, subjects played a repetitive reaction time game for 40–50 min. During this task, they rated their drowsiness every 5 min on the Karolinska Sleepiness Scale (KSS). The measured ExG data, reaction times and KSS ratings were used to generate labels for classifier models. Data were labelled as “drowsy” if the user reported a KSS score of 5 or higher and their reaction time had more than doubled since the first 5 min.

To create the alert/drowsy classifier, the researchers extracted relevant temporal and spectral features in standard EEG frequency bands (delta, theta, alpha, beta and gamma). They used these data to train three low-complexity machine learning models: logistic regression, support vector machines (SVM) and random forest. They note that spectral features associated with eye movement, relaxation and drowsiness were the most important for model training.
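
As a rough outline of that pipeline (a minimal sketch using synthetic data and off-the-shelf scipy/scikit-learn tools, not the Berkeley group’s code; the sampling rate and band edges are assumptions), band powers in the canonical EEG ranges can feed a simple alert/drowsy classifier:

    import numpy as np
    from scipy.signal import welch
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    FS = 250  # assumed sampling rate in Hz
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}

    def band_powers(epoch):
        # Average spectral power in each canonical EEG band for one recorded epoch
        freqs, psd = welch(epoch, fs=FS, nperseg=2 * FS)
        return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

    # Stand-in data: 200 five-second epochs labelled 0 (alert) or 1 (drowsy)
    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(200, 5 * FS))
    labels = rng.integers(0, 2, size=200)

    X = np.array([band_powers(e) for e in epochs])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 on random data; real ExG features do far better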

All three classifier models achieved high accuracy, with comparable performance to state-of-the-art wet electrode systems. The best-performing model (utilizing a SVM classifier) achieved an average accuracy of 93.2% when evaluating users it had seen before and 93.3% with never-before-seen users. The logistic regression model, meanwhile, is more computationally efficient and requires significantly less memory.

The researchers conclude that the results show promise for developing next-generation wearables that can monitor brain activity in work environments and everyday scenarios. Next, they will integrate the classifiers on-chip to enable real-time brain-state classification. They also intend to miniaturize the hardware to eliminate the need for the WANDmini.

“We plan to incorporate all of the electronics into the earbud itself,” Muller tells Physics World. “We are working on earpiece integration, and new applications, including the use of earbuds during sleep.”

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Research update In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue https://physicsworld.com/wp-content/uploads/2024/08/22-08-24-ear-EEG-fig1.jpg
Physics for a better future: mammoth book looks at science and society https://physicsworld.com/a/physics-for-a-better-future-mammoth-book-looks-at-science-and-society/ Thu, 22 Aug 2024 12:24:13 +0000 https://physicsworld.com/?p=116412 Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast explores how physics can be used as a force for good – helping society address important challenges such as climate change, sustainable development, and improving health.

Our guest is the Swiss physicist Christophe Rossel, who is a former president of the European Physical Society (EPS) and an emeritus scientist at IBM Research in Zurich.

Rossel is a co-editor and co-author of the book EPS Grand Challenges, which looks at how science and physics can help drive positive change in society and raise standards of living worldwide as we approach the middle of the century. The huge tome weighs in at 829 pages, was written by 115 physicists and honed by 13 co-editors.

Rossel talks to Physics World’s Matin Durrani about the intersection of science and society and what physicists can do to make the world a better place.

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
Podcasts Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges https://physicsworld.com/wp-content/uploads/2024/08/21-8-24-Christophe-Rossel-list.jpg newsletter
Quantum sensor detects magnetic and electric fields from a single atom https://physicsworld.com/a/quantum-sensor-detects-magnetic-and-electric-fields-from-a-single-atom/ Thu, 22 Aug 2024 09:30:12 +0000 https://physicsworld.com/?p=116396 New device is like an MRI machine for quantum materials, say physicists

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Researchers in Germany and Korea have fabricated a quantum sensor that can detect the electric and magnetic fields created by individual atoms – something that scientists have long dreamed of doing. The device consists of an organic semiconducting molecule attached to the metallic tip of a scanning tunnelling microscope, and its developers say that it could have applications in biology as well as physics. Some possibilities include sensing the presence of spin-labelled biomolecules and detecting the magnetic states of complex molecules on a surface.

Today’s most sensitive magnetic field detectors exploit quantum effects to map the presence of extremely weak fields. Among the most promising of these new-generation quantum sensors are nitrogen vacancy (NV) centres in diamond. These structures can be fabricated inside a nanopillar on the tip of an atomic force microscope (AFM), and their spatial resolution is an impressively small 10–100 nm. However, this is still a factor of 100 to 1000 larger than the diameter of an atom.

A spatial resolution of 0.1 nm

The new sensor developed by Andreas Heinrich and colleagues at the Forschungszentrum Jülich and Korea’s IBS Center for Quantum Nanoscience (QNS) can also be placed on a microscope tip – in this case, a scanning tunnelling microscope (STM). The difference is the spatial resolution of this atomic-scale device is just 0.1 nm, making it 100 to 1000 times more sensitive than devices based on NV centres.

The team made the sensor by attaching a molecule with an unpaired electron – a molecular spin – to the apex of an STM’s metallic tip. “Typically, the lifetime of a spin in direct contact with a metal is very short and cannot be controlled,” explains team member Taner Esat, who was previously at QNS and is now at Jülich. “In our approach, we brought a planar molecule known as 3,4,9,10-perylenetetracarboxylic-dianhydride (or PTCDA for short) into a special configuration on the tip using precise atomic-scale manipulation, thus decoupling the molecular spin.”

Determining the magnetic field of a single atom

In this configuration, Esat explains, the molecule is a spin-½ system, and in the presence of a magnetic field it behaves like a two-level quantum system. This behaviour is due to the Zeeman effect, which splits the molecule’s ground state into spin-up and spin-down states with an energy difference that depends on the strength of the magnetic field. Using electron spin resonance in the STM, the researchers were able to detect this energy difference with a resolution of around 100 neV. “This allowed us to determine the magnetic field of a single atom (which finds itself only a few atomic distances away from the sensor) that caused the change in spin states,” Esat tells Physics World.
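
For a sense of scale, the textbook spin-½ Zeeman relation ΔE = gμ_B B (with g ≈ 2 assumed; apart from the quoted 100 neV resolution, these numbers are not from the paper) converts that energy resolution into an approximate magnetic-field resolution:

    # Spin-1/2 Zeeman splitting: dE = g * mu_B * B, so the field resolution is dE / (g * mu_B)
    MU_B = 5.788e-5         # Bohr magneton in eV per tesla
    G_FACTOR = 2.0          # free-electron g-factor, assumed for the molecular spin
    dE = 100e-9             # ~100 neV energy resolution quoted in the article
    print(dE / (G_FACTOR * MU_B), "T")  # ~8.6e-4 T, i.e. of order a millitesla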

The team demonstrated the feasibility of its technique by measuring the magnetic and electric dipole fields from a single iron atom and a silver dimer on a gold substrate with greater than 0.1 nm resolution.

The next step, says Esat, is to increase the new device’s magnetic field sensitivity by implementing more advanced sensing protocols based on pulsed electron spin resonance schemes and by finding molecules with longer spin decoherence times. “We hope to increase the sensitivity by a factor of about 1000, which would allow us to detect nuclear spins at the atomic scale,” he says.

A holy grail for quantum sensing

The new atomic-scale quantum magnetic field sensor should also make it possible to resolve spins in certain emerging two-dimensional quantum materials. These materials are predicted to have many complex magnetic orders, but they cannot be measured with existing instruments, Heinrich and his QNS colleague Yujeong Bae note. Another possibility would be to use the sensor to study so-called encapsulated spin systems such as endohedral-fullerenes, which comprise a magnetic core surrounded by an inert carbon cage.

“The holy grail of quantum sensing is to detect individual nuclear spins in complex molecules on surfaces,” Heinrich concludes. “Being able to do so would make for a magnetic resonance imaging (MRI) technique with atomic-scale spatial resolution.”

The researchers detail their sensor in Nature Nanotechnology. They have also prepared a video to illustrate the working principle of the device and how they fabricated it.

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Research update New device is like an MRI machine for quantum materials, say physicists https://physicsworld.com/wp-content/uploads/2024/08/Low-Res_2024_06_19_Esat_005.jpg
Software expertise powers up quantum computing https://physicsworld.com/a/software-expertise-powers-up-quantum-computing/ Wed, 21 Aug 2024 14:37:58 +0000 https://physicsworld.com/?p=116387 Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Making a success of any new venture can be a major challenge, but it always helps to have powerful partnerships. In the case of the Quantum Software Lab (QSL), established in April 2023 as part of the University of Edinburgh’s School of Informatics, its position within one of the world’s leading research centres for computer science offers direct access to expertise spanning everything from artificial intelligence through to high-performance computing. But the QSL also has a strategic alliance with the UK’s National Quantum Computing Centre (NQCC), providing a gateway to emerging hardware platforms and opening up new opportunities to work with end users on industry-relevant problems.

Bringing those worlds together is Elham Kashefi, who is both the director of the QSL and Chief Scientist of the NQCC. In her dual role, Kashefi is able to connect and engage with the global research community, while also exploiting her insights and ideas to shape the technology programme at the national lab. “Elham Kashefi is the most vibrant and exuberant character, and she has all the right attitudes to bring diverse people together to tackle the big challenges we are facing in quantum computing,” says Sir Peter Knight, the architect behind the UK’s National Quantum Technologies Programme. “Elham has the ability to apply insights from her background in computer science in a way that helps physicists like me to make the hardware work more effectively.”

The QSL’s connection to the NQCC imbues its activities with a strong focus on innovation, centring its development programme around the objective of demonstrating quantum utility – in other words, delivering reliable and accurate quantum solutions that offer a genuine improvement over classical computing. “Our partnership with the QSL is all about driving user adoption,” says NQCC director Michael Cuthbert. “The NQCC can provide a front door to the end-user community and raise awareness of the potential of quantum computing, while our colleagues in Edinburgh bring the academic expertise and rigour to translate the mathematics of quantum theory into use cases and applications that benefit all parts of our society and the economy.”

Since its launch, the QSL has become the largest research group for quantum software and algorithm development in the UK, with more than 50 researchers and PhD students. This core team is also supported by a number of affiliate members from across the University of Edinburgh, notably the EPCC supercomputing centre, as well as from the Sorbonne University in France, where Kashefi also has a research role.

Within this extended network Kashefi and her faculty team have been working to establish a research culture that is based on collective success rather than individual endeavour. “There is so much discovery and innovation happening right now, and we set ourselves the goal of bringing disparate pieces together to establish a coherent programme,” she explains. “What has made me very happy is that we are now focusing on what we can achieve by combining our knowledge and expertise, rather than what we can do on our own.”

Within the Lab’s core programme, the Quantum Advantage Pathfinder, the primary goal is to work with end users in industry and the public sector to identify key computational roadblocks and translate them into research problems that can be addressed with quantum techniques. Once an algorithm has been devised and implemented, a crucial step of the process is to benchmark the solution to assess what sort of benefit it might offer over a conventional supercomputer.

“We are all academic researchers, but within the QSL we are nurturing a start-up culture where we want to understand and address the needs of the ecosystem,” says Kashefi. “For each project we are following the full pathway from the initial pain point identified by our industry partners through to a commercial application where we can show that quantum computing has delivered a genuine advantage.”

In just one example, application engineers from the NQCC and software developers from the QSL have been working with the high-street bank HSBC to explore the benefits of quantum computing for tackling the growing problem of financial fraud. HSBC already exploits classical machine learning to detect anomalous transactions that could indicate criminal behaviour, and the project team – which also includes hardware provider Rigetti – has been investigating whether quantum machine learning could deliver an advantage that would reduce risk and enable the bank to improve its anti-fraud services.

Quantum Software Lab

Alongside these problem-focused projects, the discovery-led nature of the academic environment also provides the QSL with the freedom to reverse the pipeline: to develop optimal approaches for a class of quantum algorithms or protocols that could be relevant for many different application areas. One project, for example, is investigating how hybrid quantum/classical algorithms could be exploited to solve big data problems using a small-scale quantum computer, while another is developing a unified benchmarking approach that could be applied across different hardware architectures.

For the NQCC, meanwhile, Cuthbert believes that the insights gained from this more universal approach will be crucial for planning future activities at the national lab. “Theoretical advances that are focused on the practical utilization of quantum computing will inform our technology programme and help us to build an effective quantum ecosystem,” he says. “It is vitally important that we understand how different elements of theory are developing, and what new techniques and discoveries are emerging in classical computing.”

Indeed, the importance of theory and informatics for accelerating the development of useful quantum computing is underlined by the QSL’s leading role in two of the new quantum hubs that were launched by the UK government at the end of July. For the hub focused on quantum computing, which is based at the University of Oxford, QSL researchers will take the lead on developing software tools – such as quantum error correction, distributed quantum computing and hybrid quantum/classical algorithms – that will help to extract more power from emerging quantum hardware. The QSL team will also investigate novel protocols for secure multi-party computing through its partnership with the Integrated Quantum Networks hub, which is being led by Heriot-Watt University.

Sir Peter Knight

At the same time, the QSL’s direct link to the NQCC will help to ensure that these software tools advance in tandem with the rapidly evolving capabilities of the quantum processors. “You need a marriage between the hardware and software to drive progress and work out where the roadblocks are,” comments Sir Peter. “Continuous feedback between algorithm development, the design of the quantum computing stack, and the physical constraints of the hardware creates a virtuous circle that produces better results within a shorter timeframe.”

An integral part of that accelerated co-development is the NQCC’s development of hardware platforms based on superconducting qubits, trapped ions and neutral atoms, while the national lab is also set to host seven quantum testbeds that are now being installed by commercial hardware developers. Once the testbeds are up and running in March 2025, there will be a two-year evaluation phase in which QSL researchers and the UK’s wider quantum community will be able to work with the NQCC and the hardware companies to understand the unique capabilities of each technology platform, and to investigate which qubit modalities are most suited to solving particular types of problems.

One key focus for this collaborative work will be developing and testing novel schemes for error correction, since it is becoming clear that quantum machines with even modest numbers of qubits can address complex problems if the noise levels can be reduced. Researchers at the QSL are now working to translate recent theoretical advances into software that can run on real computer architectures, with the testbeds providing a unique opportunity to investigate which error-correction codes can deliver the optimal results for each qubit modality.

Supporting these future endeavours will be a new Centre for Doctoral Training (CDT) for Quantum Informatics, led by the University of Edinburgh in collaboration with the University of Oxford, University College London, the University of Strathclyde and Heriot-Watt University.

“As part of their training, each cohort will spend two weeks at the NQCC, enabling the students to learn key technical skills as well as gaining an understanding of wider issues, such as the importance of responsible and ethical quantum computing,” says CDT director Chris Heunen, a senior member of the QSL team. “During their placement the students will also work with the NQCC’s applications engineers to solve a specific industry problem, exposing them to real-world use cases as well as the hardware resources installed at the national lab.”

With the CDT set to train around 80 PhD students over the next eight years, Kashefi believes that it will play a vital role in ensuring the long-term sustainability of the QSL’s programme and the wider quantum ecosystem. “We need to train a new generation of quantum innovators,” she says. “Our CDT will provide a unique programme for enabling young people to learn how to use a quantum computer, which will help us in our goal to deliver innovative solutions that derive real value from quantum technologies.”

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Analysis Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems https://physicsworld.com/wp-content/uploads/2024/08/web-QSL-launch-2023.jpg newsletter
Vacuum-sealed tubes could form the backbone of a long-distance quantum network https://physicsworld.com/a/vacuum-sealed-tubes-could-form-the-backbone-of-a-long-distance-quantum-network/ Wed, 21 Aug 2024 14:00:15 +0000 https://physicsworld.com/?p=116394 Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
A network of vacuum-sealed tubes inspired by the “arms” of the LIGO gravitational wave detector could provide the foundations for a future quantum Internet. The proposed design, which its US-based developers describe as both “revolutionary” and feasible, could support communication rates as high as 10¹³ quantum bits (qubits) per second. This would exceed currently available quantum channels based on satellites or optical fibres by at least four orders of magnitude, though members of the team note that implementing the design will be challenging.

Quantum computers outperform their classical counterparts at certain problems. Realizing their full potential, however, will require connecting multiple quantum machines via a network that can transmit quantum information over long distances, just as the Internet does with classical information.

One way of creating such a network would be to use existing technologies such as fibre-optic cables or satellites. Both technologies transmit classical information using photons, and in principle they can transmit quantum information using photonic qubits, too. The problem is that they are inherently “lossy”, with photons being absorbed by the fibre or (to a lesser degree) by the Earth’s atmosphere on their way to and from the vacuum of space. This loss of information is particularly challenging for quantum networks, as qubits cannot be “copied” in the same way that classical bits can.
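
To see why this loss is so punishing over continental distances, consider a simple illustrative calculation (the 0.2 dB/km attenuation is a typical telecom-fibre figure we have assumed; it is not quoted in the study):

```python
# Illustrative only: photon survival probability in standard telecom fibre,
# assuming 0.2 dB/km attenuation (a typical value, not from the paper).
attenuation_db_per_km = 0.2

for distance_km in (100, 500, 1000):
    loss_db = attenuation_db_per_km * distance_km
    survival = 10 ** (-loss_db / 10)   # probability a single photon gets through
    print(f"{distance_km:>5} km: {loss_db:6.1f} dB loss, survival ≈ {survival:.1e}")
# At 1000 km the survival probability is ~1e-20 - and because qubits cannot be
# cloned, the lost photons cannot simply be re-amplified along the way.
```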

Inspired by LIGO

The proposal put forward by Liang Jiang and colleagues at the University of Chicago’s Pritzker School of Molecular Engineering, Stanford University and the California Institute of Technology aims to solve this problem by combining the advantages of satellite- and fibre-based communications. “In a vacuum, you can send a lot of information without attenuation,” explains team member Yesun Huang, the lead author of a Physical Review Letters paper on the proposal. “But being able to do that on the ground would be ideal.”

The new design for a long-distance quantum network involves connecting quantum channels made from vacuum-sealed tubes fitted with a series of lenses. These vacuum beam guides (VBGs), as they are known, measure around 20 cm in diameter, and Huang says they could span thousands of kilometres while supporting the transmission of 10 trillion qubits per second. “Photons carrying quantum information could travel through these tubes with the lenses placed every few kilometres in the tubes to ensure they do not spread out too much and stay focused,” he explains.
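
The few-kilometre lens spacing is consistent with simple Gaussian-beam diffraction. The estimate below is our own sketch, with an assumed telecom wavelength and a 5 cm beam waist (neither figure is taken from the paper); it computes the Rayleigh range, the distance over which an unfocused beam roughly doubles in cross-section and would begin to outgrow a 20 cm tube:

```python
import math

# Assumed numbers for illustration; not taken from the paper.
wavelength = 1.55e-6   # m, telecom wavelength
waist = 0.05           # m, beam waist radius, comfortably inside a 20 cm tube

# Rayleigh range of a Gaussian beam: z_R = pi * w0^2 / lambda
rayleigh_range = math.pi * waist**2 / wavelength
print(f"Rayleigh range ≈ {rayleigh_range / 1e3:.1f} km")   # ~5 km
# Refocusing every few kilometres keeps the beam well inside the tube,
# consistent with the lens spacing described above.
```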

Infographic showing a map of the US with "backbone" vacuum quantum channels connecting several major cities, supplemented with shorter fibre-based communication channels reaching smaller hubs. A smaller diagram shows the positioning of lenses along the vacuum channel between quantum nodes.

The new design is inspired by the system that the Laser Interferometer Gravitational-Wave Observatory (LIGO) experiment employs to detect gravitational waves. In LIGO, twin laser beams travel down two tubes – the “arms” of the interferometer – that are arranged in an L-shape and kept under ultrahigh vacuum. Mirrors precisely positioned at the ends of each arm reflect the laser light back down the tubes and onto a detector. When a gravitational wave passes through this set-up, it distorts the distance travelled by each laser beam by a tiny but detectable amount.

Engineering challenges, but a big payoff

While LIGO’s arms measure 4 km in length, the tubes in Jiang and colleagues’ experiments could be much smaller. They would also need only a moderate vacuum of 10⁻⁴ atmospheres of pressure as opposed to LIGO’s 10⁻¹¹ atm. Even so, the researchers acknowledge that implementing their technology will not be simple, with several civil engineering issues still to be addressed.

For the moment, the team is focusing on small-scale experiments to characterize the VBGs’ performance. But members are thinking big. “Our hope is to realize these channels over a continental scale,” Huang tells Physics World.

The benefits of doing so would be significant, he argues. “As well as benefiting secure quantum communication (quantum key distribution protocols, for example), the new VBG channels might also be employed in other quantum applications,” he says. As examples, he cites ultra-long-baseline optical telescopes, quantum networks of clocks, quantum data centres and delegated quantum computing.

Jiang adds that with the entanglement created from VBG channels, the researchers also hope to improve the performance of coordinating decisions between remote parties using so-called quantum telepathy – a phenomenon whereby two non-communicating parties can exhibit correlated behaviours that would be impossible to achieve using classical methods.

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
Research update Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet https://physicsworld.com/wp-content/uploads/2024/08/21-08-2024-Liang-Jiang.jpg newsletter1
Solar-driven atmospheric water extractor provides continuous freshwater output https://physicsworld.com/a/solar-driven-atmospheric-water-extractor-provides-continuous-freshwater-output/ Wed, 21 Aug 2024 10:15:52 +0000 https://physicsworld.com/?p=116399 Standalone device harvests water out of air without requiring maintenance, solely using sunlight

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Freshwater scarcity affects 2.2 billion people around the world, especially in arid and remote regions. More work needs to be done to develop new technologies that can provide freshwater in regions where there is a lack of suitable water for drinking and irrigation. Harvesting moisture from the air is one approach that has been trialled over the years with varying degrees of success.

“Water scarcity is one of the major challenges faced by the globe, which is particularly important in Middle East regions. Depending on the local conditions, one needs to identify all possible water sources to get fresh water for our daily use,” explains Qiaoqiang Gan, from King Abdullah University of Science and Technology (KAUST).

Gan and his team have recently developed a solar-driven atmospheric water extraction (SAWE) device that can continuously harvest moisture from the air to supply clean water to people in humid climates.

New development in an existing area

Technologies for harvesting water from the air have been around for many years, but SAWEs have faced various obstacles – one of the main ones being the slow kinetics of the sorbent materials. In SAWEs, the sorbent material first captures moisture from the air. Once saturated, the system is sealed and exposed to sunlight to extract the water.

The slow kinetics means that only one cycle is possible per day with most devices, so they have traditionally worked using a two-stage approach – moisture capture at night and desorption via sunlight during the day. Many systems have low outputs, and require manual switching between cycles, so they cannot provide continuous water harvesting.

This could be about to change, because the system developed by Gan and colleagues can produce water continuously. “We can use the extracted water from the air for irrigation with no need for tap water. This is an attractive technology for regions with humid air but no access to fresh water,” says Gan.

Continuous water production

The SAWE developed at KAUST passively alternates between the two stages and can cycle continuously without human intervention. This was made possible by the inclusion of mass transport bridges (MTBs) that provide a connection between the water capture and water generation mechanisms.

The MTBs comprise vertical microchannels filled with a salt solution to absorb water from the atmosphere. Once saturated, the water-rich salt solution is pulled up via capillary action into an enclosed high-temperature chamber. Here, a solar absorber generates concentrated vapour, which then condenses on the chamber wall, producing freshwater. The concentrated salt solution then diffuses back down the channel to collect more water.

Under 1-sun illumination at 90% relative humidity, a prototype SAWE system with an evaporation area of 3 × 3 cm consistently produced fresh water at a rate of 0.65 L/m²/h. The researchers found that the system could also function in more arid environments with relative humidity as low as 40% and that – in regions with abundant solar irradiance and high humidity – it had a maximum water production potential of 4.6 L/m² per day.
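
To put those rates in context, a quick scaling estimate (our own; the daily water demand of a seedling is an assumed figure) converts the quoted peak output into the collector area needed per plant:

```python
# Illustrative scaling only; the 2 L/day seedling demand is an assumption.
peak_daily_output = 4.6      # L/m^2 per day, quoted maximum potential
daily_need_litres = 2.0      # assumed daily water need of one seedling

area_per_seedling = daily_need_litres / peak_daily_output
print(f"Collector area per seedling at peak output: {area_per_seedling:.2f} m^2")  # ~0.43 m^2
```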

Scaling up in Saudi Arabia

Following the initial tests, the researchers built a scaled-up system (with an evaporation area of 13.5 × 24 cm) in Thuwal, Saudi Arabia, that was just as affordable and simple to produce as the small-scale prototype. They tested the system over 35 days across two seasons.

“Saudi Arabia launched an aggressive initiative known as Saudi Green Initiative, aiming to plant 10 billion trees in the country. The key challenge is to get fresh water for irrigation,” Gan explains. “Our technology provided a potential solution to address the water needs in suitable regions like the core area near the Red Sea and Arabic Bay, where they have humid air but no sufficient fresh water.”

The tests in Saudi Arabia showed that the scaled-up system could produce 2–3 L/m² of freshwater per day during summer and 1–2.8 L/m² per day during the autumn. The water harvested was also used for off-grid irrigation of Chinese cabbage plants in the local harvesting area, showing its potential for use in remote areas that lack access to large-scale water sources.

Looking ahead, Gan tells Physics World that “we are developing prototypes for the atmospheric water extraction module to irrigate plants and trees, as the water productivity can meet the water needs of many plants in their seeding stage”.

The research is described in Nature Communications.

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Research update Standalone device harvests water out of air without requiring maintenance, solely using sunlight https://physicsworld.com/wp-content/uploads/2024/08/21-08-24-solar-powered-water-extractor.jpg newsletter1
Half-life measurement of samarium-146 could help reveal secrets of the early solar system https://physicsworld.com/a/half-life-measurement-of-samarium-146-could-help-reveal-secrets-of-the-early-solar-system/ Tue, 20 Aug 2024 15:51:10 +0000 https://physicsworld.com/?p=116378 Isotope is extracted from an accelerator target

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
The radioactive half-life of samarium-146 has been measured to the highest accuracy and precision so far. Researchers at the Paul Scherrer Institute (PSI) in Switzerland and the Australian National University in Canberra made their measurement using waste from the PSI’s neutron source and the result should help scientists gain a better understanding of the history of the solar system.

With a half-life of 92 million years, samarium-146 is ideally suited for dating events that occurred early in the history of the solar system. These include volcanic activity on the Moon, the formation of meteorites, and the differentiation of Earth’s interior into distinct layers.

Samarium-146 in the early solar system was probably produced in a nearby supernova as the solar system was forming about 4.5 billion years ago. Thanks to the isotope’s relatively long half-life, it would have been incorporated into nascent planets and asteroids. The isotope then slowly vanished from the solar system. It is now so rare that it is considered an extinct isotope, whose previous existence is inferred from the presence of the neodymium isotope to which it decays.

There is another isotope, samarium-147, with a half-life that is 1000 times longer than samarium-146. While the two isotopes have identical chemical properties, samarium-147 currently accounts for about 15% of samarium on Earth. Together, these two isotopes can be used for dating rocks, but only if their half-lives are known to sufficiently high accuracy.

Huge range

Unfortunately, the half-life of samarium-146 has proven notoriously difficult to measure. Over the past few decades, numerous studies have placed its value somewhere between 60 and 100 million years, but its exact value within this range has remained uncertain. The main reason for this uncertainty is that the isotope does not occur naturally on Earth and instead is made in tiny quantities in nuclear physics experiments.

In previous studies, the isotope was created by irradiating other samarium isotopes with protons or neutrons. However, this approach has drawbacks. “The main disadvantages are the cost and time required for dedicated irradiation and the fact that the desired isotope is made of the same element as the target material itself,” explains Rugard Dressler at PSI’s Laboratory for Radiochemistry. “This rules out the possibility of separating samarium-146 by chemical means alone.”

To overcome these limitations, a team led by Dorothea Schumann at PSI looked to the Swiss Spallation Neutron Source (SINQ) as a source of the isotope. SINQ creates neutrons by smashing protons into solid targets, which are damaged in the process. To better understand how this damage occurs, a range of different target materials have been irradiated at SINQ. These included tantalum, which Schumann identified as the most promising material from which to extract samarium-146; the team obtained a quantity of the isotope in solution using a sequence of highly selective radiochemical separation and purification steps.

“Only in this way it was possible to obtain a sufficient amount of samarium-146 for the precise determination of its half-life – a possibility that is not available anywhere else around the world,” explains PSI’s Zeynep Talip.

Then they used some of the solution to create a thin layer of samarium oxide on a graphite substrate. Using mass spectrometers at PSI and in Australia to study their original solution, the team determined that there were 6.28 × 10¹³ samarium-146 nuclei in their sample.

Alpha particles

The sample was placed at a well-defined distance from a carefully calibrated alpha radiation detector. By measuring the energy of emitted alpha particles, the team confirmed that the particles were produced by the decay of samarium-146. Over the course of three months, they measured the isotope’s decay rate and found it to be just under 54 decays per hour.

From this, they calculated the samarium-146 half-life to be 92 million years, with an uncertainty of just 2.6 million years.
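
The quoted figures are self-consistent: combining the decay law A = λN with the measured activity and the number of nuclei in the sample recovers the 92-million-year half-life, as this short check (using only the numbers quoted above) shows:

```python
import math

# Consistency check using the figures quoted above.
N = 6.28e13          # number of samarium-146 nuclei in the sample
A = 54 / 3600        # measured decay rate: ~54 decays per hour, in Bq (s^-1)

# Decay law: A = lambda * N, and t_half = ln(2) / lambda = ln(2) * N / A
t_half_seconds = math.log(2) * N / A
t_half_years = t_half_seconds / (365.25 * 24 * 3600)
print(f"Half-life ≈ {t_half_years / 1e6:.0f} million years")   # ≈ 92 million years
```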

“The half-life derived in our study shows that the results from the last century are compatible with our value within their uncertainties,” Dressler notes. “Furthermore, we were able to reduce the uncertainty considerably.”

This result marks an important breakthrough in an experimental challenge that has persisted for decades, and could soon provide a new window into the distant past. “A more precise determination of the half-life of samarium-146 will pave the way for a more detailed and accurate chronology of processes in our solar system and geological events on Earth,” says Dressler.

The research is described in Scientific Reports.

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
Research update Isotope is extracted from an accelerator target https://physicsworld.com/wp-content/uploads/2024/08/20-8-24-Samarium-half-life.jpg
Enabling battery quality at scale https://physicsworld.com/a/enabling-battery-quality-at-scale/ Tue, 20 Aug 2024 13:59:59 +0000 https://physicsworld.com/?p=115702 Join the audience for a live webinar on 18 September 2024 sponsored by BioLogic, in partnership with The Electrochemical Society

The post Enabling battery quality at scale appeared first on Physics World.

]]>

Battery quality lies at the heart of major issues relating to battery safety, reliability, and manufacturability. This talk reviews the challenges and opportunities in enabling battery quality at scale. First, it describes the interplay between various battery failure modes and their numerous root causes. It then discusses which failure modes are best detected by electrochemistry, and which are not. Finally, it reviews how improved inspection – specifically, high-throughput computed tomography (CT) – can play a role in solving the battery quality challenge.

An interactive Q&A session follows the presentation.

Peter Attia is co-founder and chief technical officer of Glimpse. Previously, he worked as an engineering lead on some of Tesla’s toughest battery failure modes and managed a team focused on battery data analysis. Peter holds a PhD from Stanford, where he developed seminal machine learning methods for battery lifetime prediction and optimization. He has received honours such as Forbes 30u30 but has not written a bestselling book on aging.

The Electrochemical Society

 

The post Enabling battery quality at scale appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 18 September 2024 sponsored by BioLogic, in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/07/ECS_image_2024_09_18.jpg
AI-assisted photonic detector identifies fake semiconductor chips https://physicsworld.com/a/ai-assisted-photonic-detector-identifies-fake-semiconductor-chips/ Tue, 20 Aug 2024 12:35:14 +0000 https://physicsworld.com/?p=116361 New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Diagram of the RAPTOR detection system

The semiconductor industry is an economic powerhouse, but it is not without its challenges. As well as shortages of new semiconductor chips, it increasingly faces an oversupply of counterfeit ones. The spread of these imitations poses real dangers for the many sectors that rely on computer chips, including aviation, finance, communications, artificial intelligence and quantum technologies.

Researchers at Purdue University in the US have now combined artificial intelligence (AI) and photonics technology to develop a robust new method for detecting counterfeit chips. The new method could reduce the risks of unwanted surveillance, chip failure and theft within the $500 bn global semiconductor industry by reining in the market for fake chips, which is estimated at $75 bn.

The main way of detecting counterfeit semiconductor chips relies on “baking” security tags into chips or their packaging. These tags use technologies such as physical unclonable functions made from, for example, arrays of metallic nanomaterials. These structures can be engineered to scatter light strongly in specific patterns that can be detected and used as a “fingerprint” for the tagged chip.

The problem is that these security structures are not tamper-proof. They can degrade naturally – for example, if temperatures get too high. If they are printed on packaging, they can also be rubbed off, either accidentally or intentionally.

Embedded gold nanoparticles

The Purdue researchers developed an alternative optical anti-counterfeiting technique for semiconductor devices based on identifying modifications in the patterns of light scattered off nanoparticle arrays embedded in chips or chip packaging. Their approach, which they call residual attention-based processing of tampering response (RAPTOR), relies on analysing the light scattered before and after an array has degraded naturally or been tampered with.

To make the technique work, a team led by electrical and computer engineer Alexander Kildishev embedded gold nanoparticles in the packaging of a packet of semiconductor chips. The team then took several dark-field microscope images of random places on the packaging to record the nanoparticle scattering patterns. This made it possible to produce high-contrast images even though the samples being imaged are transparent to light and provide little to no light absorption contrast. The team then stored these measurements for later authentication.

“If someone then tries to swap the chip, they not only have to embed the gold nanoparticles, but they also have to place them all in the original locations,” Kildishev explains.

The role of artificial intelligence

To guard against false positives caused by natural abrasions disrupting the nanoparticles, or a malicious actor getting close to replacing the nanoparticles in the right way, the team trained an AI model to distinguish between natural degradation and malicious tampering. This was the biggest challenge, Kildishev tells Physics World. “It [the model] also had to identify possible adversarial nanoparticle filling to cover up a tampering attempt,” he says.

Writing in Advanced Photonics, the Purdue researchers show that RAPTOR outperforms current state-of-the-art counterfeit detection methods (known as the Hausdorff, Procrustes and average Hausdorff metrics) by 40.6%, 37.3%, and 6.4% respectively. The analysis process takes just 27 ms, and it can verify a pattern’s authenticity in 80 ms with nearly 98% accuracy.
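
For readers unfamiliar with the baseline metrics, the symmetric Hausdorff distance between two sets of nanoparticle positions can be computed in a few lines. The sketch below is purely illustrative: the point sets are randomly generated and the paper’s detection pipeline is not reproduced here.

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

rng = np.random.default_rng(0)
before = rng.random((50, 2))                          # nanoparticle pattern at enrolment
after = before + rng.normal(0, 0.01, before.shape)    # slightly perturbed pattern later
print(f"Hausdorff distance: {hausdorff(before, after):.3f}")
```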

“We took on this study because we saw a need to improve chip authentication methods and we leveraged our expertise in AI and nanotechnology to do just this,” Kildishev says.

The Purdue researchers hope that other research groups will pick up on the possibilities of combining AI and photonics for the semiconductor industry. This would help advance deep-learning-based anti-counterfeiting methods, they say.

Looking forward, Kildishev and colleagues plan to improve their nanoparticle embedding process and streamline the authentication steps further. “We want to quickly convert our approach into an industry solution,” Kildishev says.

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Research update New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers https://physicsworld.com/wp-content/uploads/2024/08/circuit-board-20436508-iStock_Henrik5000.jpg
Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power https://physicsworld.com/a/fast-monte-carlo-dose-calculation-with-precomputed-electron-tracks-and-gpu-power/ Tue, 20 Aug 2024 08:59:15 +0000 https://physicsworld.com/?p=116282 Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>

In this webinar, we will explore innovative advancements in Monte Carlo based dose calculations that are poised to impact radiation oncology quality assurance. This expert session will focus on new developments in 3D dose calculation engines and improved dosimetry capabilities.

Designed for medical physics and dosimetrist experts, the discussion will outline the latest developments and emphasize how these can improve dose calculation accuracy, treatment verification processes, and clinical workflows in general. Join us in understanding better how fast Monte Carlo can contribute to advancing quality assurance in radiation therapy.

An interactive Q&A session follows the presentation.

Veng Jean Heng, PhD, is a medical physics resident at Stanford University. He received both an MSc and a PhD from McGill University. During his MSc, he performed Monte Carlo beam and dose-to-outcome modelling for CyberKnife patients. His PhD was on the clinical implementation of a mixed photon-electron beam radiation therapy technique. His current research interests revolve around the development of dose calculation and optimization methods.

 

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

 

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2024/08/20240924_LAP-image-1.jpg
Could targeted alpha therapy help treat Alzheimer’s disease? https://physicsworld.com/a/could-targeted-alpha-therapy-help-treat-alzheimers-disease/ Tue, 20 Aug 2024 08:30:00 +0000 https://physicsworld.com/?p=116342 Researchers demonstrate that targeted alpha-particle treatments can reduce the level of amyloid-beta plaque in mouse brain tissues

The post Could targeted alpha therapy help treat Alzheimer’s disease? appeared first on Physics World.

]]>
Alzheimer’s disease is a neurodegenerative disorder with limited treatment options. The causes of Alzheimer’s are complex and not entirely understood. It is commonly thought, however, that the build-up of amyloid-beta plaques and tangles of tau proteins in the brain leads to nerve cell death and dementia. A team at the University of Utah is investigating a new way to use radiation to reduce such deposits and potentially alleviate Alzheimer’s symptoms.

Developing a therapy for Alzheimer’s disease is a key goal for many researchers. One recent study, for example, showed evidence that reducing amyloid-beta plaques with a newly approved antibody-based drug improved cognition in patients with early-stage Alzheimer’s. In parallel, scientists are studying non-pharmacological approaches such as whole-brain, low-dose ionizing radiation, which has been shown to break up plaques in mice and has exhibited a positive cognitive effect in preliminary clinical studies.

While promising, whole-brain irradiation unavoidably delivers radiation dose to healthy tissues. Instead, the University of Utah team is exploring the potential of targeted alpha therapy (TAT) to reduce amyloid plaque concentrations while minimizing damage to healthy tissue and the associated side effects.

“Our goal was to build on these studies and, as opposed to irradiating the whole brain, target the plaques specifically,” explains lead author Tara Mastren. “TAT could have potential benefits compared to the current antibody treatment, as much smaller doses are required to achieve an effect. Currently, it is hard to say if it will be better as this is new territory and studies need to be done to prove that.”

Targeted irradiation

TAT works by delivering an alpha particle-emitting radionuclide directly to a target, where it releases energy into its immediate surroundings. As alpha particles only travel a few micrometres in tissue, they deliver a highly localized dose. The approach has already proved effective for treating metastatic cancers, and the Utah team postulated that it could also be used to break bonds within amyloid-beta aggregates and facilitate plaque clearance.

To perform TAT, Mastren and colleagues synthesized a compound called BPy (a benzofuran pyridyl derivative) that targets amyloid-beta plaques. They linked BPy to the radionuclide bismuth-213 (213Bi), which has a short half-life of 46 min and decays by emitting a single alpha particle, thereby creating [213Bi]-BiBPy.

To examine whether TAT could reduce amyloid-beta concentrations, the researchers incubated [213Bi]-BiBPy with homogenates created from the brain tissue of mice genetically modified to develop amyloid plaques. After 24 h, they measured the concentration of amyloid-beta in the samples using Western blot and enzyme-linked immunosorbent assays.

Both analysis methods revealed a significant, dose-dependent reduction in amyloid-beta following incubation with [213Bi]-BiBPy, with plaque reduced to below the detection limits. Incubating the brain homogenate with free 213Bi also reduced levels of amyloid-beta, but to a significantly lesser extent. Other proteins in the homogenate were not affected, suggesting a lack of off-target damage.

The team found that a dose of 0.01488 MBq per picogram of amyloid beta was required to reduce amyloid by 50% in vitro. Mastren notes that this finding must now be investigated in vivo, as biological processes in a living brain differ from those in postmortem tissue. “However, this value gives a starting point for our in vivo studies,” she adds.
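
For a sense of scale, the quoted dose can be converted into a total number of alpha decays. The arithmetic below is our own illustration; it assumes the full activity decays within the sample and uses only the figures given above:

```python
import math

# Illustrative arithmetic only; assumes every Bi-213 nucleus decays in place.
activity_per_pg = 0.01488e6    # Bq per picogram of amyloid-beta (quoted dose)
t_half = 46 * 60               # Bi-213 half-life in seconds
mean_lifetime = t_half / math.log(2)

total_decays = activity_per_pg * mean_lifetime
print(f"Alpha decays per picogram ≈ {total_decays:.1e}")   # ~5.9e7
```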

To confirm the targeted binding of [213Bi]-BiBPy, the researchers also examined 10 µm-thick brain tissue sections from the mice. They stained the sections with a fluorescent BPy probe (fluorescein-functionalized) and with thioflavin-S, an amyloid stain. Thioflavin-S revealed a dense presence of plaques, particularly in the cortex. The fluorescent BPy probe also stained plaques in the cortex, but less intensely and with more off-site binding. This finding highlights the need to investigate alternative targeting vectors to reduce white-matter binding.

The researchers conclude that TAT can significantly reduce amyloid-beta aggregates in vitro, paving the way for studies in live animals and eventually in humans. As such, they plan to start in vivo testing of TAT later this year.

“Initially, we will be looking at the biodistribution, ability to cross the blood–brain barrier, immune response to treatment and effects on plaque concentrations,” says Mastren. “If successful, we hope to follow up with testing cognitive response to treatment.”

The research is described in the Journal of Nuclear Medicine.

The post Could targeted alpha therapy help treat Alzheimer’s disease? appeared first on Physics World.

]]>
Research update Researchers demonstrate that targeted alpha-particle treatments can reduce the level of amyloid-beta plaque in mouse brain tissues https://physicsworld.com/wp-content/uploads/2024/08/20-08-24-Mastren-lab.jpg newsletter1
Multiple molecular hexaquarks are predicted by theoretical study https://physicsworld.com/a/multiple-molecular-hexaquarks-are-predicted-by-theoretical-study/ Mon, 19 Aug 2024 15:13:07 +0000 https://physicsworld.com/?p=116341 Exotic hadrons comprising six quarks could be observed in future experiments

The post Multiple molecular hexaquarks are predicted by theoretical study appeared first on Physics World.

]]>
Two types of hexaquark

Multiple hexaquarks – strongly interacting hadronic particles comprising six quarks – are likely to exist, according to a new theoretical study by four physicists in China and Germany. The hypothetical particles they considered contained strange (s) and charm (c) quarks. These are both heavy quarks whose presence usually makes hadrons very short-lived and difficult to study experimentally. However, evidence from accelerator experiments has already hinted at the existence of such hexaquarks, leading the team to believe that future experiments at facilities like the Large Hadron Collider (LHC) could validate their predictions.

“Hexaquarks are a type of exotic hadron, distinct from the more familiar baryons (which contain three quarks, like protons and neutrons) and mesons (which contain a quark-antiquark pair),” explains Bo Wang of Hebei University who collaborated on the research. “In general, there are two types of hexaquark states: one where six quarks are confined within a compact hadron, and another that consists of a molecular-like structure formed by two baryons [see figure]. Our [research] focuses on the latter type.”

In molecule-like hexaquarks, the constituent baryons are expected to be not as tightly bound by the strong interaction as the quarks within each baryon. This makes these hexaquarks particularly interesting for studying new aspects of the strong interaction that binds quarks together. This could help physicists better understand quantum chromodynamics – which is the theory that describes the strong interaction and is enormously challenging to implement in calculations.

In their new work, Wang and colleagues employed a combination of techniques used in previous hadron studies, incorporating specific parameters related to the strong interaction that were determined from earlier research on other exotic hadrons, such as tetraquarks (four-quark states) and pentaquarks (five-quark states).

Bag of quarks

“It can be easily inferred within our model that if the molecular tetraquarks and pentaquarks exist, then the molecular hexaquarks must also exist,” said Wang. “Experimental searches for these hexaquark states will help reveal whether nature prefers to construct higher-level structural units, namely hadronic molecular states, or whether it merely favours putting the quarks into a bag, meaning compact multiquark states.”

By applying a range of sophisticated techniques, the scientists were able to calculate the hexaquarks’ masses and lifetimes, which are among the most important parameters of elementary particles and play a primary role in their identification in experiments.

“We have developed a method that combines effective field theory and the quark model to describe the residual strong interactions between quarks,” explained Wang. “The parameters are determined using well-measured states, such as the Pc pentaquarks and the tetraquarks X(3872) and Zc(3900). Finally, the mass spectrum of the molecular-like hexaquark states is determined by solving the Lippmann-Schwinger equation.”
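
For reference, the Lippmann–Schwinger equation in its standard textbook T-matrix form is shown below (this is the generic scattering-theory expression, not the specific partial-wave form used in the paper); molecular bound states such as the predicted hexaquarks appear as poles of T(E) below the two-baryon threshold:

```latex
T(E) = V + V \, G_0(E) \, T(E),
\qquad
G_0(E) = \frac{1}{E - H_0 + i\epsilon}
```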

Using this approach, Wang and his colleagues explored various potential hexaquark configurations, all of which included not only the lighter up (u) and down (d) quarks found in protons and neutrons, but also the much heavier strange (s) and charm (c) quarks. Their theoretical models encompassed hexaquarks made from two identical baryons, a baryon paired with its antiparticle, and combinations of two different baryons.

Wealth of testable predictions

While only non-molecular hexaquark candidates have been observed experimentally so far, Wang and colleagues offer a wealth of testable predictions about the subtle properties of strong interactions, making these findings particularly significant.

“Several candidates for molecular-type tetraquarks and pentaquarks have been observed experimentally, notably by collaborations such as the LHCb, BESIII, and Belle, but no candidates for molecular-type hexaquark states containing heavy quarks have yet been found,” said Wang.

The researchers were also able to compare their findings with results from other methods, such as lattice quantum chromodynamics, where space is represented by a finite grid, enabling detailed calculations. In all cases where comparisons were possible, the results were consistent, lending further credibility to the team’s conclusions. However, only experimental evidence can provide definitive proof, and the researchers are optimistic that such confirmation is not far off.

“As future collider experiments are upgraded and established, they will undoubtedly generate a wealth of data for the study of hadronic physics,” concluded Wang. “Research into hadronic molecular states is currently one of the most vibrant areas of inquiry and is expected to remain so for the foreseeable future.”

“Our aspiration is to develop a theoretical framework that comprehensively describes the residual strong interactions, unifying the nuclear forces and interactions between heavy-flavour hadrons under a consistent model and set of parameters. This holds profound significance for our understanding of strong interactions, making the investigation of various properties of hadronic molecular states an excellent entry point for this endeavour.”

The research is described in Physical Review D.

The post Multiple molecular hexaquarks are predicted by theoretical study appeared first on Physics World.

]]>
Research update Exotic hadrons comprising six quarks could be observed in future experiments https://physicsworld.com/wp-content/uploads/2024/08/19-4-24-hexaquarks-list.jpg newsletter1
Quantum dot liquid scintillator could revolutionize neutrino detection https://physicsworld.com/a/quantum-dot-liquid-scintillator-could-revolutionize-neutrino-detection/ Mon, 19 Aug 2024 12:00:21 +0000 https://physicsworld.com/?p=116324 A new type of water-based scintillator made from quantum dots could make neutrino detectors safer and cheaper

The post Quantum dot liquid scintillator could revolutionize neutrino detection appeared first on Physics World.

]]>
Neutrino detectors contain up to tens of thousands of tonnes of liquid scintillator that emits a flash of light whenever it interacts with a neutrino. Such scintillators are typically organic compounds dissolved in organic solvents, so are toxic and highly flammable. By contrast, the water-based quantum dot liquid scintillator developed by a team headed up at King’s College London (KCL) in the UK, is non-toxic and non-flammable – making it less hazardous to work with, as well as more environmentally friendly.

Quantum dots (QDs) are tiny semiconductor crystals that confine electrons and behave like artificial atoms when absorbing and emitting light. The new scintillator contains commercially available 6.4 nm-diameter QDs – optimized to emit the blue light wavelengths preferentially detected by particle physics photon sensors – which the researchers dissolved in the organic solvent toluene before mixing with water and a stabilizing agent of oleic acid molecules.

This mixture was then “agitated to create an emulsion, similar to shaking a bottle of salad dressing to mix oil and vinegar,” explains Aliaksandra Rakovich, who co-led the research along with Teppei Katori. Finally, after settling, the water and oil phases separated and the water phase – now containing the QDs – was further diluted with water to reach the correct concentration for detecting particles such as neutrinos.

As detailed in their recent Journal of Instrumentation paper, the researchers measured the light emitted from a small sample of their liquid scintillator while cosmic rays (atmospheric muons) passed through it. This revealed a high scintillation yield, comparable to that from existing scintillators. The absorbance and emission spectra also remained stable over two years: an essential quality for neutrino experiments, which typically take several years to acquire data.

“The potential for our new scintillator is huge because quantum dots can have so many different types of core and different sizes, so you can choose all kinds of absorption and emission spectra,” says Katori, whose current work includes helping to design the Japan-based international Hyper-Kamiokande neutrino experiment due to start operating in 2027.

Katori hopes that within 5–10 years the new scintillator could not only replace those used in large-scale detectors for dark matter, neutrons or neutrinos, but could also form the basis for desktop-sized generic radiation sensors. It could also help monitor the neutrino spectrum close to the reactor core in nuclear power facilities: this spectrum alters if plutonium is being illegally extracted.

Next, the researchers aim to “develop methods for large-scale synthesis of QDs directly in water”, says Rakovich, adding that this will include removing cadmium and other toxic elements to “reduce the ecological footprint even further”. They also intend to carry out quantitative testing and optimization of stability, safety and performance in increasingly larger samples of their scintillator while under neutrino bombardment over long time scales.

Alex Himmel, a scientist at Fermilab in the USA, who was not involved in the research study, says that he finds this new scintillator promising. “For some time there has been substantial interest in making water-based liquid scintillators which have advantages in terms of safety and cost,” explains Himmel, who is co-spokesperson for Fermilab’s NOvA neutrino experiment, which currently uses an organic liquid scintillator.

“Safety is always a top concern when building particle physics experiments, both for the obvious reason that we don’t want anyone to get hurt, and because potentially dangerous materials typically require costly safety measures,” says Himmel. “If the materials themselves are less hazardous, it makes the experiments easier and cheaper to build and operate.”

Himmel says that the KCL researchers “estimate 4000 photons per MeV from their test sample”, noting that “our experiment operates today at similar light yields”. But he cautions that for this new liquid scintillator to be adopted by end-users it must “be produced cost-effectively at large scales and show a light yield that is stable over time”.

The post Quantum dot liquid scintillator could revolutionize neutrino detection appeared first on Physics World.

]]>
Research update A new type of water-based scintillator made from quantum dots could make neutrino detectors safer and cheaper https://physicsworld.com/wp-content/uploads/2024/08/19-08-24-KCL-QD-scintillator-experiment.jpg newsletter1
How ‘pop Newton’ can help inspire the next generation https://physicsworld.com/a/how-pop-newton-can-help-inspire-the-next-generation/ Mon, 19 Aug 2024 10:00:37 +0000 https://physicsworld.com/?p=115817 Richard Easther and Frank Wang argue that a "Newton first" approach can be better for undergraduates than focusing solely on "modern physics"

The post How ‘pop Newton’ can help inspire the next generation appeared first on Physics World.

]]>
The top two physics books on Goodreads are Stephen Hawking’s A Brief History of Time and Brian Greene’s The Elegant Universe. Both tomes focus on the quest for a “theory of everything” – physics so advanced it is not yet discovered. Much of the “shop window” of popular physics in bookshops is filled with ideas whose bewildering complexity underwrites their allure – strings, extra dimensions or multiverse cosmology. This is in contrast to music or art, where the classics tend to be more popular than avant-garde compositions.

For teachers and communicators of physics, it is easy to key into this fascination for novel ideas. After all, students are often more attracted to quantum mechanics than thermodynamics. Yet while relativity and quantum mechanics are classed as “modern physics” they are anything but. The decadus mirabilis in which quantum mechanics bloomed is a century old. The work of Erwin Schrödinger and Werner Heisenberg is now closer to Faraday’s discovery of electromagnetic induction than to the present day.

In the classroom, physics often gives far more space than other sciences to centuries-old ideas. As physicists, we know that new developments largely embrace and extend existing ideas. Indeed, about a third of most undergraduate first-year physics textbooks – statics, dynamics, circular motion and waves – are firmly “Newtonian”. But with a focus on new ideas, it can be a struggle to maintain students’ interest for the full depth of physics’ repertoire.

This need to make physics more appealing for newcomers was on our minds when we recently refreshed the University of Auckland’s physics curriculum. Beyond revamping the delivery of core material, we also challenged ourselves to create a “pop Newton” course that presents physics as a coherent whole and is open to any undergraduate, not just physics students.

The course treads similar ground to the well-known book – and widely taught course – Physics For Poets by Robert March, which is a breezy survey from Newton through to the Standard Model of particle physics, using only simple algebra. However, simply passing high-school algebra does not guarantee the fluency needed to draw insight from algebraic arguments.

Instead, we decided to work with metaphor and visualization, as happens already, for example, when describing black-hole mergers in an introductory astronomy course. There are many strategies to explain the nature of space–time without resorting to tensor calculus. Yet the simplicity of Newtonian mechanics seems to have prevented the development of similar explanatory tools when it comes to more everyday physics.

This was not so much physics for poets as it was physics via poetry.

From Newton to the LHC

Our approach began with two-body interactions on toy air-hockey tables. Students videoed collisions between plastic pucks and replicated them in a pre-programmed Javascript simulator that runs in a web browser. We explained that the simulator implemented Newton’s laws: nothing changes how it is moving unless it is pushed; the more you push the bigger the change, but the bigger the object the smaller the change; and if you push on something, it pushes back.

This exercise opened the door to discussions on a huge range of phenomena without using algebra, much less calculus. For example, does neutron decay make sense if it leaves only an electron and a proton? (Answer: it doesn’t.) Students could put many particles into the simulator and watch as their speeds take on the Maxwell–Boltzmann distribution, showing the genesis of statistical mechanics. Mix one big particle and many little particles and then hide the little particles on screen and Brownian motion appears. This allowed us to replicate the arguments that led to the explanation of atoms.
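
The course’s simulator is a JavaScript browser app; the Python fragment below is our own minimal sketch of the physics it implements – a two-puck elastic collision in which Newton’s third law appears as equal and opposite impulses:

```python
import numpy as np

def elastic_collision(m1, v1, m2, v2, normal):
    """Elastic collision of two pucks; `normal` is a unit vector from puck 1 to puck 2."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    # Relative velocity along the line of centres (positive = approaching)
    v_rel = np.dot(np.asarray(v1) - np.asarray(v2), normal)
    if v_rel <= 0:
        return v1, v2                      # pucks are separating: no collision
    # Equal and opposite impulse (Newton's third law) for a perfectly elastic bounce
    j = 2 * v_rel / (1 / m1 + 1 / m2)
    v1_new = np.asarray(v1) - (j / m1) * normal
    v2_new = np.asarray(v2) + (j / m2) * normal
    return v1_new, v2_new

v1, v2 = elastic_collision(1.0, [1.0, 0.0], 2.0, [0.0, 0.0], normal=[1.0, 0.0])
print(v1, v2)   # momentum and kinetic energy are both conserved
```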

Over a few weeks, we drew a conceptual line from the simplest two-body collisions through to CERN’s Large Hadron Collider. The emergent properties of many-body systems then led to a discussion of reductionist explanations of complex phenomena. We looked at materials science (including quantum mechanics), climate dynamics and infectious-disease transmission. We drew on the expertise of Auckland physicists who work on climate and who made key contributions to New Zealand’s COVID-modelling efforts. In that way, we also showcased the range of problems addressed via physics and its methods. Ironically, post-pandemic staffing constraints have made it difficult to replicate this pedagogical experiment. However, even as a one-off, it showed the clear value in an approach that foregrounds the coherence and historical sweep of physics.

Deep progress in physics is measured on a clock that ticks in centuries

We also wanted to avoid giving the impression that physics is “solved” at a fundamental level. Firstly, its applications continue to reshape the world. Quantum technologies are cutting edge, even though quantum mechanics existed alongside the Model T Ford and we can now illustrate Newton’s laws of motion with spacecraft as well as cannonballs. But it is true that a large majority of physicists are applying physics to new problems, rather than seeking “new physics”.

Deep progress in physics is measured on a clock that ticks in centuries. We believe we must highlight the long narrative arc of the field, which is a profound story of its own. While 95% of the universe is currently unknown to physics, the story comes full circle when we recall that the dark material in the universe is revealed in part via apparent inconsistencies with Newtonian mechanics on galactic scales.

And one last conclusion: a lab session with a roomful of students playing air hockey is noisy fun.

The post How ‘pop Newton’ can help inspire the next generation appeared first on Physics World.

Opinion and reviews Richard Easther and Frank Wang argue that a "Newton first" approach can be better for undergraduates than focusing solely on "modern physics" https://physicsworld.com/wp-content/uploads/2024/08/24-08-Forum-Pop-Newton-Air-hockey-187193696-shutterstock_Fer-Gregory.jpg newsletter
Physicists reveal the role of ‘magic’ in quantum computational power https://physicsworld.com/a/physicists-reveal-the-role-of-magic-in-quantum-computational-power/ Mon, 19 Aug 2024 09:19:17 +0000 https://physicsworld.com/?p=116250 Entanglement and magic interact in ways that impact quantum algorithms and physical systems

Cartoon showing a landscape divided between entanglement and magic. The entanglement part of the landscape is green, with gently rolling terrain, and a computer hovering above it with a green tick mark. The magic part is filled with spiky black mountains and fiery red pits, and the computer hovering above it is in flames

Entanglement is a fundamental concept in quantum information theory and is often regarded as a key indicator of a system’s “quantumness”. However, the relationship between entanglement and quantum computational power is not straightforward. In a study posted on the arXiv preprint server, physicists in Germany, Italy and the US shed light on this complex relationship by exploring the role of a property known as “magic” in entanglement theory. The study’s results have broad implications for various fields, including quantum error correction, many-body physics and quantum chaos.

Traditionally, the more entangled your quantum bits (qubits) are, the more you can do with your quantum computer. However, this belief – that higher entanglement in a quantum state is associated with greater computational advantage – is challenged by the fact that certain highly entangled states can be efficiently simulated on classical computers and do not offer the same computational power as other quantum states. These states are often generated by classically simulable circuits known as Clifford circuits.

To address this discrepancy, researchers introduced the concept of “magic”. Magic quantifies the non-Clifford resources necessary to prepare a quantum state and thus serves as a more nuanced measure of a state’s quantum computational power.

Studying entanglement and magic

In the new study, Andi Gu, a PhD student at Harvard University, together with postdoctoral researchers Salvatore F E Oliviero of Scuola Normale Superiore and CNR in Pisa and Lorenzo Leone of the Dahlem Center for Complex Quantum Systems in Berlin, approach the study of entanglement and magic by examining operational tasks such as entanglement estimation, distillation and dilution.

The first of these tasks quantifies the degree of entanglement in a quantum system. The goal of entanglement distillation, meanwhile, is to use LOCC (local operations and classical communication) to transform a quantum state into as many Bell pairs as possible. Entanglement dilution, as its name suggests, is the converse of this: it aims to convert copies of the Bell state into less entangled states using LOCC with high fidelity.

Gu and colleagues find a computational phase separation between quantum states, dividing them into two distinct regimes: the entanglement-dominated (ED) and magic-dominated (MD) phases. In the former, entanglement significantly surpasses magic, and quantum states allow for efficient quantum algorithms to perform various entanglement-related tasks. For instance, entanglement entropy can be estimated with negligible error, and efficient protocols exist for entanglement manipulation (that is, distillation and dilution). The research team also propose efficient ways to detect entanglement in noisy ED states, showing their surprising resilience compared to traditional states.

In contrast, states in the MD phase have a higher degree of magic relative to entanglement. This makes entanglement-related tasks computationally intractable, highlighting the significant computational overhead introduced by magic and requiring more advanced approaches. “We can always handle entanglement tasks efficiently for ED states, but for MD states, it’s a mixed bag – while there could be something that works, sometimes nothing works at all,” Gu, Leone and Oliviero tell Physics World.

Practical implications

As for the significance of this separation, the trio say that in quantum error correction, understanding the interplay between entanglement and magic can improve the design of error-correcting codes that protect quantum information from decoherence (a loss of quantumness) and other errors. For instance, topological error-correcting codes that rely on the robustness of entanglement, such as those in three-dimensional topological models, benefit from the insights provided by the ED-MD phase distinction.

The team’s proposed framework also offers theoretical explanations for numerical observations in hybrid quantum circuits (random circuits interspersed with measurements), where transitions between phases are observed. These findings improve our understanding of the dynamics of entanglement in many-body systems and demonstrate that entanglement of states within the ED phase is robust under noise.

The trio say that next steps for this research could take several directions. “First, we aim to explore whether ED states, characterized by efficient entanglement manipulation even with many non-Clifford gates, can be efficiently classically simulated, or if other quantum tasks can be performed efficiently for these states,” they say. Another avenue would be to extend the framework to continuous variable systems, such as bosons and fermions.

The post Physicists reveal the role of ‘magic’ in quantum computational power appeared first on Physics World.

Research update Entanglement and magic interact in ways that impact quantum algorithms and physical systems https://physicsworld.com/wp-content/uploads/2024/08/15-08-2024-LISTING-Entanglement-vs-magic.png newsletter1
Heisenberg gets ‘let off the hook’ in new historical drama based on the Farm Hall transcripts https://physicsworld.com/a/heisenberg-gets-let-off-the-hook-in-new-historical-drama-based-on-the-farm-hall-transcripts/ Fri, 16 Aug 2024 09:56:58 +0000 https://physicsworld.com/?p=116246 Philip Ball reviews Farm Hall by Katherine Moar at the Theatre Royal Haymarket, London, which runs until 31 August 2024

As the Second World War reached its endgame in Europe in 1945, Allied forces advancing towards Berlin raced to round up German scientists who’d worked on the Nazis’ “Uranium Project” to harness nuclear fission. This effort, code-named the Alsos mission, picked up the likes of Max von Laue, Otto Hahn (who’d led the experiments to discover fission in 1938), Carl von Weizsäcker, and the head of the uranium work, Werner Heisenberg.

The Allied military were eager to prevent those eminent scientists from falling into Russian hands. But the Americans leading Alsos had little idea of what to do with these researchers, despite the mission having Dutch physicist Samuel Goudsmit as its scientific leader. The British forces, however, offered to take them off their hands, flying the scientists to England where they were interned in a country house in Cambridgeshire called Farm Hall.

Held for six months from July 1945, the scientists were well provided for and free to talk among themselves. Unbeknownst to them, however, British intelligence had bugged the house to assess if these men could be trusted to co-operate in the post-war reconstruction of Germany. Heisenberg, as arrogant and superior as ever, dismissed the idea of any such eavesdropping. “I don’t think they know the real Gestapo methods”, he said. “They’re a bit old-fashioned in that respect.”

We know precisely what the interned scientists discussed because full transcriptions of their conversations have been available for more than three decades, first appearing in the book Operation Epsilon (IOP Publishing 1993). It’s an episode that cries out for dramatization. You have the scientists’ anxieties about what they faced next, the unfolding of bitter rivalries and blame games, and the denouement of the Hiroshima and Nagasaki bombs, news of which was met with horror and disbelief. What’s more, Farm Hall was already virtually a theatrical stage set.

Farm Hall

No wonder, then, that the production of Farm Hall at the Theatre Royal Haymarket in London, written by playwright and historian Katherine Moar, has several precedents. The events were first dramatized by David Sington in BBC TV’s Horizon programme in 1992 and later formed the subject of a 2010 BBC radio play. There’s also been Operation Epsilon – a 2013 play by US playwright Alan Brody that ran at the Southwark Playhouse in London only last autumn.

Moar’s own play premiered last year at London’s Jermyn Street Theatre before touring and now returning to the grander Haymarket. The issues were also searchingly explored in Michael Frayn’s Copenhagen (1998), which depicts the meeting of Heisenberg with Niels Bohr in Nazi-occupied Denmark in 1941. Those issues include the culpability of the scientists in building an atomic bomb for Hitler and the wider moral tensions between science, governance and warfare.

The Farm Hall transcripts are something of a straitjacket for the dramatist, since in effect the script is already written

The Farm Hall transcripts are a remarkable resource for historians trying to deduce the real intentions and achievements of the German physicists who worked on the Uranium Project. But they are something of a straitjacket for the dramatist, since in effect the script is already written. While Moar’s own dialogue is sprightly, she is thus not really able to shed new light on the events.

The roles given to the German scientists do not differ much from those of previous dramatizations. Heisenberg loftily considers himself the intellectual leader and the future hope for German science. Von Laue (who did not work on uranium) is scornful of the others’ attempts to justify their support for a depraved regime. Hahn feels personally responsible for the horrors of Hiroshima.

Kurt Diebner, who led a rival uranium research team and was a Nazi party member, clashes with Heisenberg, while the younger Erich Bagge frets about having also joined the party for the sake of career advancement. Von Weizsäcker is the jovial socialite in Moar’s version, working with his hero Heisenberg to construct an extenuating story for posterity.

The key question is why the Germans, with so much expertise in nuclear science, failed to get close to making a bomb, or even a self-sustaining nuclear pile (like the one built by Enrico Fermi at Chicago in 1942). Here historians are divided. Heisenberg, as Moar acknowledges, sought to find a story that absolved the Germans of moral failure while also denying that they got the physics wrong.

At first he refused to believe the announcement of the American bomb, on the grounds that they could not possibly have succeeded where he had failed. However, he later spun a story in which the physicists had cleverly persuaded the Nazis to support the scientific work without overpromising about delivery. Later, Heisenberg even implied that he and others had deliberately falsified the maths to sabotage the bomb project.

The latter idea was popularized in the journalist Thomas Powers’ 1993 book Heisenberg’s War: the Secret History of the German Bomb, which influenced Frayn’s play but for which there is no firm documentary evidence. In fact, the US historian Mark Walker has called that version of events “tragically absurd”. To my mind, Moar also lets Heisenberg off the hook too easily.

In a slightly arch play on the uncertainty principle – a motif that Frayn also used – she has Heisenberg deliver a final soliloquy in which he answers the question “Did you try to build a bomb?” with: “On some days yes. On others, no.” I find it more probable that the German scientists lacked the conviction that they could achieve their goal soon enough to make a difference to the war. Not having argued the case strongly, they were simply not given the resources to make much progress.

What matters more in retrospect is that so few of the scientists, including especially Heisenberg and Weizsäcker but also Hahn, took responsibility for what they had done under the Third Reich. Perhaps the only researcher who did, ironically, was Lise Meitner, who famously interpreted Hahn’s results as nuclear fission after she had fled Berlin in 1938 because of her Jewish heritage.

“You did not want to see it”, she later wrote to Hahn. “It was too inconvenient.”

The post Heisenberg gets ‘let off the hook’ in new historical drama based on the Farm Hall transcripts appeared first on Physics World.

Opinion and reviews Philip Ball reviews Farm Hall by Katherine Moar at the Theatre Royal Haymarket, London, which runs until 31 August 2024 https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Ball-Fram-Hall-photo-cast.jpg newsletter
Cryo-electron tomography reveals structure of Alzheimer’s plaques and tangles in the brain https://physicsworld.com/a/cryo-electron-tomography-reveals-structure-of-alzheimers-plaques-and-tangles-in-the-brain/ Fri, 16 Aug 2024 08:30:00 +0000 https://physicsworld.com/?p=116276 Researchers determine 3D architecture of the amyloid-beta and tau proteins that aggregate in the brain in Alzheimer’s disease

Imaging Alzheimer’s disease in the brain

Alzheimer’s disease is characterized by the abnormal formation of amyloid-beta peptide plaques and tau tangles in the brain. Although initially identified in 1907, the molecular structures and arrangements of these protein aggregates remain unclear. Now, a research team headed up at the University of Leeds has determined the 3D architecture of these molecules within a human brain for the first time, reporting the findings in Nature.

The researchers used cryo-electron tomography (cryo-ET) techniques to create 3D maps of tissues in a postmortem Alzheimer’s disease donor brain. They revealed the molecular structure of tau in brain tissue and the arrangement of amyloids, and identified new structures entangled within these pathologies.

“[These] detailed 3D images of brain tissue…for the first time bring clarity to the in situ organization of amyloid-beta and tau filament,” states Sjors Scheres, of the MRC Laboratory of Molecular Biology, in an accompanying commentary article.

Scheres explains that the research team had to overcome several major hurdles, including slicing thin enough brain tissue samples for electrons to pass through, freezing hydrated samples fast enough to prevent crystallization that can interfere with the cryo-ET imaging, and identifying relevant areas containing amyloid-beta and tau tangles to image.

For the study, lead author René Frank and colleagues examined freeze-thawed postmortem brain samples of the mid-temporal gyrus from an Alzheimer’s disease donor and a healthy donor. To identify areas of amyloid-beta and tau, they thawed the samples, sliced the brain tissues into 100–200 μm slices and added methoxy-X04 (a fluorescent dye that binds amyloid), before rapidly refreezing the samples.

The researchers then performed cryo-ET on 70-nm-thick tissue cryo-sections from a dye-labelled amyloid-beta plaque and a location enriched in tau tangles and threads. Using cryo-fluorescence microscopy to guide the cryo-ET, they acquired images from different angles and used these to computationally reconstruct a tomographic volume. They collected 42 tomograms in and around regions of amyloid-beta, 25 tomograms in regions containing tau tangles, plus 64 tomograms from the healthy brain tissue as controls.

To obtain higher-resolution structural information, the researchers picked subvolumes containing filaments for alignment and averaging. Subtomogram averaging of 136 tau filaments from a single tomographic volume generated the in situ structure of tau with 8.7 Å resolution.

The researchers report that the amyloid-beta plaques had a lattice-like architecture of amyloid fibrils interspersed with non-amyloid constituents, including extracellular vesicles, fragments of lipid membranes and unidentifiable cuboidal particles. Because these non-amyloid constituents were not present in healthy brain samples, they suggest that they are also a component of Alzheimer’s pathology, and may be related to amyloid-beta biogenesis or a cellular response to amyloid.

The amyloid-beta plaques also contained branched amyloid fibrils and protofilament-like rods. The team speculates that these branched fibrils and rods may contribute to the high local concentration of amyloid-beta that characterizes plaques.

Frank and colleagues also identified tau clusters within cells and in extracellular locations. The tau filaments were unbranched and arranged in parallel clusters. They observed both paired helical filaments and straight filaments, which did not mix randomly with each other, but tended to be close to filaments of the same type, often arranged with the same polarity. The researchers suggest that the non-random arrangement may be caused by interactions between filaments or growth in parallel from neighbouring focal points.

The collaboration – also including researchers at Amsterdam UMC, the University of Cambridge and Zeiss Microscopy – represents new efforts by structural biologists to study proteins directly within cells and tissues, to determine how proteins work together and affect one another, particularly in human cells and tissues affected by disease.

“The approaches for obtaining 3D molecular architectures and structures of human tissues with cryo-CLEM [cryo-correlated light and EM]-guided cryo-ET in Alzheimer’s disease sets the ground for interrogating other common dementias and movement disorders,” says Frank. “These include frontotemporal dementia, amyotrophic lateral sclerosis (motor neuron disease) and Parkinson’s disease.”

The post Cryo-electron tomography reveals structure of Alzheimer’s plaques and tangles in the brain appeared first on Physics World.

Research update Researchers determine 3D architecture of the amyloid-beta and tau proteins that aggregate in the brain in Alzheimer’s disease https://physicsworld.com/wp-content/uploads/2024/08/16-08-24-Alzheimer-imaging-featured.jpg
Quantum sensors monitor brain development in children https://physicsworld.com/a/quantum-sensors-monitor-brain-development-in-children/ Thu, 15 Aug 2024 15:19:52 +0000 https://physicsworld.com/?p=116291 This podcast explores how quantum technologies are revolutionizing medicine

Margot Taylor – director of functional neuroimaging at Toronto’s Hospital for Sick Children – is our first guest in this podcast. She explains how she uses optically-pumped magnetometers (OPMs) to do magnetoencephalography (MEG) studies of brain development in children.

An OPM uses quantum spins within an atomic gas to detect the tiny magnetic fields produced by the brain. Unlike other sensors used for MEG, which must be kept at cryogenic temperatures, OPMs can be deployed at room temperature in a simple helmet that puts the sensors very close to the scalp.

The OPM-MEG helmets are made by Cerca Magnetics, and the UK-based company’s managing director, David Woolger, joins the conversation to explain how the technology works. He also talks about the success the company has enjoyed since its inception in 2020.

Our final guest in this podcast is Stuart Nicol, who is chief investment officer at Quantum Exponential – a UK-based company that invests in quantum start-ups. He gives his perspective on the medical sector and talks about a company called Siloton that is making a crucial eye-imaging technology more accessible.

The post Quantum sensors monitor brain development in children appeared first on Physics World.

Podcasts This podcast explores how quantum technologies are revolutionizing medicine https://physicsworld.com/wp-content/uploads/2024/08/15-8-24-Cerca-helmet-list.jpg newsletter
Fermilab is ‘doomed’ without management overhaul claims whistleblower report https://physicsworld.com/a/fermilab-is-doomed-without-management-overhaul-claims-whistleblower-report/ Thu, 15 Aug 2024 11:00:18 +0000 https://physicsworld.com/?p=116236 A group of anonymous whistleblowers say that Fermilab is in 'crisis' and needs a management shake-up

A group of self-styled “whistleblowers” at Fermilab, the US’s premier particle-physics facility, is claiming that the lab is in “crisis” and that “without a complete [management] shake-up” it is “doomed”. Published in the form of a 113-page “white paper” on the arXiv pre-print server, the criticism comes as the US Department of Energy (DOE), which funds Fermilab, is preparing to announce a new contractor to manage the day-to-day running of the lab.

The paper has been written by disgruntled staff members and visiting experimentalists, who in December 2023 set up a think tank to help Fermilab overcome what they called its “mission and physics impasses”. The authors, who are anonymous, say they have based their report on interviews and surveys of employees at the lab. It has, however, been formally signed by Giorgio Bellettini, who worked at Fermilab in the 1980s and 2010s, and neutrino physicist William Barletta from the Massachusetts Institute of Technology.

A Fermilab spokesperson told Physics World that the lab’s leadership is taking “seriously” the issues raised in the report and the current dissatisfaction among some staff. “They are assessing the situation and working to improve staff satisfaction,” the spokesperson says, adding that current director Lia Merminga conducted a staff climate survey when she took up office in 2022. That resulted in “some of the most pressing issues” being addressed and led to a “culture of excellence initiative” being established that will begin in full next year. Its goal is a “measurable improvement” in staff satisfaction within a year.

Limited operations

With more than 2000 staff, Fermilab has been managed since 2007 by Fermi Research Alliance (FRA) – a group that combines the University of Chicago and the Universities Research Association (URA). Serving the DOE’s Office of Science, the group has a remit to guide the scientific direction of the lab. With the Tevatron proton–antiproton collider having been decommissioned in the 2010s, Fermilab is now repositioning itself as a leader in neutrino science.

The lab’s accelerator complex is currently undergoing a major upgrade for the $1.5bn Long-Baseline Neutrino Facility, which will study the properties of neutrinos in unprecedented detail and examine the differences in behaviour between neutrinos and antineutrinos. It will do so by sending neutrinos towards the Deep Underground Neutrino Experiment (DUNE) in a former gold mine in South Dakota some 1300 km away.

Hopefully, the [report] will raise an aggressive discussion within DOE and the lab management leading to substantial improvements in how the lab programme is presently conceived and performed

Giorgio Bellettini

Despite progress on this front, the lab has recently faced a number of challenges. In a 2021 assessment, the DOE gave Fermilab an overall mark of “B”, which fell below the required “B+”. Meanwhile DUNE gained only a “C”, mainly owing to delays and cost overruns. Complaints also emerged in 2022 over Fermilab continuing to restrict access to its campus for scientists and members of the public, despite COVID-19, which had prompted the original restrictions, having become less of a concern.

The [whistleblower] document asserts various challenges at Fermilab, some of which are inaccurate, and others of which [the Fermi Research Alliance] has been working hard to address for some time

Lia Merminga

Then in mid-June, Fermilab’s leadership told an all-hands meeting that the lab would close a significant part of its operations between 26 August and 8 September to reduce a budgetary shortfall. During that time staff would have to take their holidays. Following protests over the decision and “through the active engagement of DOE and FRA”, Fermilab later announced that, rather than closing, it would instead undergo “a limited operations period” for maintenance and repairs during the week of 26 August.

“The majority of Fermilab staff will be on leave and the lab will be closed to the public”, bosses declared.

“Too many deficiencies”

In the new whistleblower report, the group claims there are “too many deficiencies in the culture and behavioural areas” at Fermilab. The authors point, for example, to the lab’s dismissal in 2023 of an early-career researcher who had alleged sexual assault in 2018, as well as to several alleged cover-ups by management of dangerous behaviour. The report also highlights a case of guns being brought onto Fermilab’s campus in 2023; a male employee’s attack on a female colleague using an industrial vehicle in 2022; and retaliation against an employee who had predicted and warned management about the failures of beryllium windows.

The report also accuses FRA of lacking state-of-the-art processes for business, finances and procurement. This “management ineffectiveness”, the whistleblowers charge, has caused a series of “self-inflicted problems”, including important scientific and administrative leadership positions left unfilled; “serious” budget overruns and delays in key experiments; and several administrative obstacles that slow down or even stop experiments’ scientific productivity. The consequence, the report concludes, is “budget insolvency, with the lab being very much in the red”.

The whistleblowers also say they recently carried out a survey of Fermilab staff, which supposedly found that “a large fraction” are “unhappy” with management and are “desperately looking for change”. The survey, the authors claim, also revealed “poor communication between management and employees, and a decline of trust in management”.

“After so many years at Fermilab, I have developed a deep sentimental involvement with the laboratory, and I sense a diffused lack of confidence in our future,” writes Bellettini in a foreword to the report. “The data of the past 15 years show that responding to demands for a change by delaying any incisive action is not productive. It is not leading to a rousing vision for [high energy physics] in the United States.”

Calling for change

Although Merminga was unavailable for an interview with Physics World for this story, in a message to Fermilab’s employees on 29 July, which has been seen by Physics World, she stated that “the [whistleblower] document asserts various challenges at Fermilab, some of which are inaccurate, and others of which FRA has been working hard to address for some time”. Merminga added that she plans to discuss the issues with staff and then “communicate some of the progress we are making.”

The Fermilab spokesperson also states that access to the Fermilab site for both staff and members of the public “has improved significantly over the last year with updated and streamlined processes” in a bid to improve confidence and trust in the lab.

The issues at Fermilab are, however, also hindering the DOE, which earlier this year called for bids on the contract to operate the lab. The University of Chicago and the URA have submitted a contract bid together with other partners. Associated Universities, Inc., which runs the US-based 100 m-diameter Green Bank Telescope and the Atacama Large Millimeter/submillimeter Array in Chile, has also thrown its hat in the ring. The DOE says it will announce the winner of the contract by 30 September.

The whistleblowers, however, are calling for more than just a change of contractor. They say management should replace Merminga given that she has “[failed] to respond effectively to setbacks [and has] only made things worse”. “A new management team, one hopes, would be motivated to solve problems and would enjoy a ‘honeymoon period’, enabling them to make positive changes more easily,” the report states.

Bellettini, meanwhile, told Physics World that he has not received a response to the report from Fermilab’s management. “Hopefully, the [report] will raise an aggressive discussion within DOE and the lab management leading to substantial improvements in how the lab programme is presently conceived and performed,” he says. “The time to act is now.”

  • This article was amended on 15 August 2024 to clarify the role of the FRA’s constituent organizations in bidding for the next contract, and to correct the date that Lia Merminga became Fermilab’s director. It was also amended on 19 August 2024 to make clear the report included the views of visiting experimentalists at Fermilab.

The post Fermilab is ‘doomed’ without management overhaul claims whistleblower report appeared first on Physics World.

News A group of anonymous whistleblowers say that Fermilab is in 'crisis' and needs a management shake-up https://physicsworld.com/wp-content/uploads/2024/08/Fermilab-scaled.jpg
Superconductivity appears in nickelate crystals under pressure https://physicsworld.com/a/superconductivity-appears-in-nickelate-crystals-under-pressure/ Thu, 15 Aug 2024 08:30:48 +0000 https://physicsworld.com/?p=116249 Could nickel-oxide-based compounds be a new class of high-temperature superconductors?

Diagram showing that as pressure increases, spin-charge order is suppressed and bulk superconductivity emerges in La4Ni3O10−δ

Researchers from Fudan University in Shanghai, China, report that they have discovered high-temperature superconductivity in trilayer single crystals of nickel-oxide materials under high pressure. These materials appear to superconduct in a different way than the better-known copper-oxide superconductors, and the researchers say they could become a new platform for studying high-temperature superconductivity.

Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. The first superconductor to be discovered was solid mercury in 1911, but its transition temperature is only a few degrees above absolute zero, meaning that expensive liquid helium coolant is required to keep it in the superconducting phase. Several other “conventional” superconductors, as they are known, were discovered shortly afterwards, all with similarly low values of Tc.

In the late 1980s, however, physicists discovered a new class of “high-temperature” superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” superconductors are not metals. Instead, they are insulators containing copper oxides (cuprates). Their existence suggests that superconductivity could persist at even higher temperatures, and perhaps even at room temperature – with huge implications for technologies ranging from electricity transmission lines to magnetic resonance imaging.

Nickel oxides could be good high-temperature superconductors

More recently, researchers identified nickel oxide materials – nickelates – as additional high-temperature superconductors. In 2019, a team at Stanford University in the US observed superconductivity in materials containing an effectively infinite number of periodically repeating planes of nickel and oxygen atoms. Then, in 2023, a team led by Meng Wang of China’s Sun Yat-Sen University detected signs of superconductivity in bilayer lanthanum nickel oxide (La3Ni2O7) at 80 K under a pressure of 14 gigapascals.

In the latest work, researchers led by Jun Zhao say that they have found evidence for superconductivity in a nickelate with the chemical formula La4Ni3O10−δ (where δ can range from 0 to 0.04). Zhao and colleagues obtained this result by placing crystals of the material into a diamond anvil cell, which is a device that can generate extreme pressures of more than 400 GPa (about four million atmospheres) as it squeezes the sample between the flattened tips of two tiny, gem-grade diamond crystals.

Evidence of superconductivity

In a paper published in Nature, the researchers report two pieces of evidence for superconductivity in their sample. The first is zero electrical resistance – that is, a complete disappearance of electrical resistance at a Tc of around 30 K and a pressure of 69 GPa. The second is the Meissner effect, which is the expulsion of a magnetic field.

“Through direct current susceptibility measurements, we detected a significant diamagnetic response, indicating that the material expels magnetic fields,” Zhao tells Physics World. “These measurements also enabled us to determine the superconducting volume fraction (that is, how much of the material is superconducting and whether superconductivity prevails throughout the material or just a small area). We found that it exceeds 80%, which confirms the bulk nature of superconductivity in this compound.”

The behaviour of this nickelate compound differs from that of the cuprate superconductors. For cuprates, Tc depends on the number of copper oxide layers in the material and reaches a maximum for structures comprising three layers. For nickelates, however, Tc appears to decrease as more NiO2 layers are added. This suggests that their superconductivity stems from a different mechanism – perhaps even one that conforms to the standard theory of superconductivity, known as BCS theory after the initials of its discoverers.

According to this theory, mercury and most metallic elements superconduct below their Tc because their fermionic electrons pair up to create bosons called Cooper pairs. This pairing occurs due to interactions between the electrons and phonons, which are quasiparticles arising from vibrations of the material’s crystal lattice. However, this theory usually falls short for high-temperature superconductors, so it is intriguing that it might explain some aspects of nickelate behaviour, Zhao says.
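
For reference, the textbook weak-coupling BCS result ties the transition temperature to the phonon energy scale and the strength of the electron–phonon attraction; the symbols below are the standard ones from that theory, not quantities quoted by the Fudan team.

```latex
% Weak-coupling BCS estimate of the transition temperature (textbook form)
\[
  k_{\mathrm{B}} T_{\mathrm{c}} \;\approx\; 1.13\,\hbar\omega_{\mathrm{D}}\,
  \exp\!\left(-\frac{1}{N(0)\,V}\right)
\]
```

Here ħωD is the typical phonon (Debye) energy, N(0) is the electronic density of states at the Fermi level and V is the effective pairing interaction; the exponential sensitivity to the pairing strength is one reason conventional transition temperatures tend to be low.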

“That the layer-dependent Tc in nickelates is distinct from that observed in cuprates suggests unique interlayer coupling and charge transfer mechanism specific to the former,” says Zhao. “Such a unique trilayer structure provides a good platform to understand the role of this coupling in electron pairing and could allow us to better understand the mechanisms behind superconductivity in general and lead to the development of new superconducting materials and applications.”

A promising class of superconducting materials?

Weiwei Xie, a chemist at Michigan State University, US, who was not involved in this work, says that La4Ni3O10−δ might indeed be a conventional superconductor and that the new study could help to establish nickel oxides as a promising class of superconducting materials. However, she notes that several recent papers claiming to have observed high-temperature superconductivity in a different group of materials – hydrides – were later retracted because their findings could not be reproduced by independent research groups. “These papers are never far from our minds,” she tells Physics World.

In a News and Views article published in Nature, however, Xie strikes a hopeful note. “The (new) report has set the stage for a potentially fruitful path of research that could lead to an end to the controversy surrounding unreliable measurements,” she writes.

For their part, the Fudan University researchers say they now aim to identify other differences between the superconducting mechanisms in the nickelates and cuprates. “We will also be continuing to search for more superconducting nickelates,” Zhao reveals.

The post Superconductivity appears in nickelate crystals under pressure appeared first on Physics World.

Research update Could nickel-oxide-based compounds be a new class of high-temperature superconductors? https://physicsworld.com/wp-content/uploads/2024/08/superconductivity_315452609_Shutterstock_SIM-VA1.jpg
NIST publishes first set of ‘finalized’ post-quantum encryption standards https://physicsworld.com/a/nist-publishes-first-set-of-finalized-post-quantum-encryption-standards/ Thu, 15 Aug 2024 07:56:46 +0000 https://physicsworld.com/?p=116233 The algorithms are designed to withstand the attack of a quantum computer

A set of encryption algorithms that are designed to withstand hacking attempts by a quantum computer has been released by the US National Institute of Standards and Technology (NIST). The algorithms, which should also protect against the increasing threat of AI-based attacks, are the result of an eight-year effort by NIST. They contain the encryption algorithms’ computer code, instructions for how to implement them and details of their intended uses.

Encryption is widely used to protect the contents of electronic information, with encrypted data able to be sent safely across public computer networks because it is unreadable to all but its sender and intended recipient. Encryption tools rely on complex mathematical problems that conventional computers find difficult or impossible to solve. Quantum computers, however, could outperform their classical counterparts and crack current encryption methods.

In 2016 NIST announced an open competition in which researchers were invited to submit algorithms to be considered as a “post-quantum” cryptography (PQC) standard to stymie both conventional and quantum computers.  In 2022 NIST said that four algorithms would be developed further. CRYSTALS-Kyber protects information exchanged across a public network, while CRYSTALS-Dilithium, FALCON and SPHINCS+ concern digital signatures and identity authentication.

The three final algorithms, which have now been released, are ML-KEM, previously known as Kyber; ML-DSA (formerly Dilithium); and SLH-DSA (formerly SPHINCS+). NIST says it will release a draft standard for FALCON later this year. “These finalized standards include instructions for incorporating them into products and encryption systems,” says NIST mathematician Dustin Moody, who heads the PQC standardization project. “We encourage system administrators to start integrating them into their systems immediately.”

Duncan Jones, head of cybersecurity at the firm Quantinuum, welcomes the development. “[It] represents a crucial first step towards protecting all our data against the threat of a future quantum computer that could decrypt traditionally secure communications,” he says. “On all fronts – from technology to global policy – advancements are causing experts to predict a faster timeline to reaching fault-tolerant quantum computers. The standardization of NIST’s algorithms is a critical milestone in that timeline.”

The post NIST publishes first set of ‘finalized’ post-quantum encryption standards appeared first on Physics World.

News The algorithms are designed to withstand the attack of a quantum computer https://physicsworld.com/wp-content/uploads/2024/08/quantum-circuit-concept-1206098096-iStock_Quardia.jpg
Atomic clocks on the Moon could create ‘lunar positioning system’ https://physicsworld.com/a/atomic-clocks-on-the-moon-could-create-lunar-positioning-system/ Wed, 14 Aug 2024 14:41:02 +0000 https://physicsworld.com/?p=116261 Lunar time standard would avoid pitfalls of time dilation

Atomic clocks on the Moon. It might sound like a futuristic concept, but atomic clocks already abound in space. They can be found on Earth-orbiting satellites that provide precision timing for many modern technologies.

The clocks’ primary function is to generate the time signals that are broadcast by satellite navigation systems such as GPS. These signals are also used to time-stamp financial transactions, enable mobile-phone communications and coordinate electricity grids.

But why stop at orbits a mere 20,000 km from Earth’s surface? Should we establish a network of atomic clocks on the Moon? This is the subject of a new paper by two physicists at NIST in Boulder, Colorado – Neil Ashby and Bijunath Patla.

They say that their study was inspired by NASA’s ambitious Artemis programme, which aims to land people on the Moon as early as 2026. The duo points out that navigation and communications on and near the Moon would benefit from a precision time standard. One option is to use a time signal that is broadcast from Earth to the Moon. Another option is to create a lunar time standard using one or more atomic clocks on the Moon, or in lunar orbit.

Faster pace

The problem with using a signal from Earth is that a clock on the Moon runs at a faster pace than a clock on Earth. This time dilation is caused by the difference in gravitational potential at the two locations and is described nicely by Einstein’s general theory of relativity.

Using that theory, the NIST duo calculate that a clock on the Moon will gain about 56 µs per day when compared to a clock on Earth. What’s more, this rate is not constant because of the eccentricity of the Moon’s orbit and the changing tidal effects of solar-system bodies other than the Earth, which would also cause fluctuations in the difference between earthbound and Moon-bound clocks.
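
That headline figure can be reproduced with a back-of-the-envelope estimate. The Python sketch below uses rounded textbook values for the masses, radii and speeds involved – it is not the NIST pair’s full analysis, which also tracks the orbital eccentricity and tidal terms.

```python
# Back-of-the-envelope Earth-Moon clock-rate offset using rounded textbook values.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_earth, R_earth = 5.972e24, 6.371e6      # kg, m
M_moon, R_moon = 7.342e22, 1.737e6        # kg, m
d_moon = 3.844e8         # mean Earth-Moon distance, m
v_surface = 465.0        # rotation speed of Earth's equator, m/s
v_moon = 1022.0          # Moon's mean orbital speed, m/s

# fractional slowing of each clock: (gravitational potential depth + kinetic term) / c^2
slow_earth = (G * M_earth / R_earth + 0.5 * v_surface**2) / c**2
slow_moon = (G * M_moon / R_moon + G * M_earth / d_moon + 0.5 * v_moon**2) / c**2

gain_per_day = (slow_earth - slow_moon) * 86400    # seconds gained per day by the lunar clock
print(f"lunar clock gains roughly {gain_per_day * 1e6:.0f} microseconds per day")   # ~56
```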

Because of these variations, the duo argue that it would be better to create a network of atomic clocks on the surface of the Moon – and in lunar orbit. This would provide a distributed system of lunar time, much like the distributed system that currently exists on Earth.

“It’s like having the entire Moon synchronized to one ‘time zone’ adjusted for the Moon’s gravity, rather than having clocks gradually drift out of sync with Earth’s time,” explains Patla. This could form the basis of a high-precision lunar positioning system. “The goal is to ensure that spacecraft can land within a few metres of their intended destination,” Patla says.

They also calculated the difference in clock rates on Earth and at the four Lagrange points in the Earth–Moon system. These are places where satellites can sit fixed relative to the Earth and Moon. There, clocks would gain a little more than 58 µs per day compared to clocks on Earth.

They conclude that atomic clocks placed on satellites at these Lagrange points could be used as time transfer links between the Earth and Moon.

The research is described in The Astronomical Journal.

The post Atomic clocks on the Moon could create ‘lunar positioning system’ appeared first on Physics World.

Blog Lunar time standard would avoid pitfalls of time dilation https://physicsworld.com/wp-content/uploads/2024/08/14-8-24-Lunar-positioning-system.jpg
Wearable PET scanner allows brain scans of moving patients https://physicsworld.com/a/wearable-pet-scanner-allows-brain-scans-of-moving-patients/ Wed, 14 Aug 2024 12:00:09 +0000 https://physicsworld.com/?p=116228 A helmet-like upright imaging device has potential to enable previously impossible neuroimaging studies

Imaging plays a vital role in diagnosing brain disease and disorders, as well as advancing our understanding of how the human brain works. Existing brain imaging modalities, however, usually require the subject to lie flat and motionless – precluding use in people who cannot remain still or studies of the brain in motion.

To address these limitations, neuroscientists at West Virginia University have developed a wearable, motion-compatible brain positron emission tomography (PET) imager and demonstrated its use in a real-world setting. The device, described in Communications Medicine, could potentially enable previously impossible neuroimaging studies.

“We wanted to create and test a tool that could grant access to imaging the brain – including deep areas – while humans are moving around,” explains senior author Julie Brefczynski-Lewis. “We hope our device could allow the investigation of research questions related to natural upright behaviours, or the study of patients who are normally sedated due to movement issues or challenges in understanding the need to be perfectly still for a scan, which could happen with cognitive impairments or dementias.”

PET scans provide information on neuronal and functional activity by imaging the uptake of radioactive tracers in the brain. But clinical PET systems are extremely sensitive to motion and require supine (lying down) imaging and dedicated scanning rooms. There are neuroimaging techniques that can be used with patients upright and moving – such as functional near-infrared spectroscopy and high-density diffuse optical tomography – but these optical approaches only image the brain surface. Activity in deep brain structures remains unseen.

Julie Brefczynski-Lewis with the prototype AMPET device

To enable upright and motion-tolerant imaging, Brefczynski-Lewis and colleagues designed the AMPET (ambulatory motion-enabling positron emission tomography), a helmet-like device that moves along with the subject’s head. The imager is made from a ring of 12 lightweight detector modules, each comprising arrays of silicon photomultipliers coupled to pixelated scintillation crystal arrays. The imager ring has a 21 cm field-of-view and a central spatial resolution of 2 mm in the tangential direction and 2.8 mm in the radial direction.

Real-world scenarios

The researchers tested the AMPET device on 11 volunteer patients who were scheduled for a clinical PET scan on the same day. The helmet was positioned on the participant’s head such that it imaged the top of the brain, comprising the primary motor areas. Although it only weighs 3 kg, the team chose to suspend the helmet from above so that participants would not feel any weight while moving their head.

Patients received a low dose (10–20% of their total prescription) of the metabolic PET tracer 18F-FDG. “We chose a very low dose that was within the daily dose for clinical patients,” says Brefczynski-Lewis. “Some applications may require a slightly higher dose for extra sensitivity, but because the detectors are so close to the head, a full clinical-like dose would not likely be necessary.”

Immediately after tracer injection, each participant underwent AMPET imaging for 6 min while they switched between standing still and walking-in-place every 30 s. Following a 5 min transition, subjects were then scanned for another 5 min while alternating between sitting still and lifting their leg while seated. In this second imaging session, the team moved the AMPET lower around the head for five participants, to image deeper brain structures.

Meeting the goals

The team defined three goals to validate the AMPET prototype: motion artefacts of less than 2 mm; differential activation of cortical regions of interest (ROIs) related to leg movement; and differential activation to walking movements in deep brain structures.

The walking versus standing task allowed the researchers to test for any motion of the imager relative to the head. They observed an average movement-related misalignment of just 1.3 mm. Analysis of task-related activity showed the expected brain image patterns during walking, with activity in ROIs that control leg movements significantly greater than in all other imaged ROIs.

In four participants where activity was measured from deep brain structures (the fifth had incorrect helmet placement), the team observed differential activation in various deep lying structures, including the basal nuclei.

The researchers note that one volunteer had a prosthetic right leg. While performing upright walking, his brain patterns showed greater metabolic activity in the area that represented the intact leg. In contrast, no difference in activity between left and right leg ROIs was measured in the other participants.

Brefczynski-Lewis tells Physics World that patients found the AMPET reasonably comfortable and did not feel its weight on their head or neck. Certain movements, however, were slightly inhibited, especially tilting the head towards the shoulders. “Our engineer collaborators recommended a gyroscope mechanism to enable free movement in all directions,” she says.

As well as validating the prototype, the study also identified upgrades required for the AMPET and similar systems. “The great thing about a real-world study on humans was that it showed us which logistics to optimize,” explains Brefczynski-Lewis. “We are developing a system for good placement and monitoring the alignment of the imager relative to the head, as well as widening the coverage to increase sensitivity, and testing a movement task using a bolus-infusion paradigm.”

The post Wearable PET scanner allows brain scans of moving patients appeared first on Physics World.

Research update A helmet-like upright imaging device has potential to enable previously impossible neuroimaging studies https://physicsworld.com/wp-content/uploads/2024/08/14-08-24-BrainScanner.jpg newsletter1
Goats, sports cars and game shows: the unexpected science behind machine learning and AI https://physicsworld.com/a/goats-sports-cars-and-game-shows-the-unexpected-science-behind-machine-learning-and-ai/ Wed, 14 Aug 2024 10:00:15 +0000 https://physicsworld.com/?p=115804 Matt Hodgson reviews Why Machines Learn by Anil Ananthaswamy

An illuminated brain surrounded by tasks, depicting an AI brain

Artificial intelligence (AI) is rapidly becoming an integral part of our society. In fact, as I write this article, I have access to several large language models, each capable of proofreading and editing my text. While the use of AI can be controversial – who’s to say I really wrote this article? – it may soon be so commonplace that mentioning it will be as redundant as noting the use of a word processor. Just to be clear though, this review is all my own work.

It’s still early days for AI, but it has the potential to impact every aspect of our lives, from politics to education and from healthcare to business. As AI is used more widely, it’s not just computer scientists who need to understand how machines think and come to conclusions. Society as a whole must have a basic appreciation and understanding of how AI works to make informed decisions.

Why Machines Learn, by the award-winning science writer Anil Ananthaswamy, takes the reader on an entertaining journey into the mind of a machine. Ananthaswamy draws inspiration from the subject of his book; much like training a neural network, he uses well-designed examples to build the reader’s understanding step by step.

Whereas AI is a general term that covers human-like qualities such as reasoning and adaptive intelligence, the author’s focus is on the subfield of AI known as “machine learning”, which is all about how we extract knowledge from data. Starting with the fundamentals, Ananthaswamy carefully constructs a comprehensive picture of how machines learn complex relationships.

Anyone who thinks that AI is a modern invention, or that the road to today’s technology has been smooth, will be shocked to learn that the pursuit of a “learning machine” began in the 1940s with McCulloch and Pitts’ model of a biological neuron. Funding for this now billion-dollar industry has sometimes also been meagre. What’s more, the concepts underpinning modern AI have their roots in diverse and unexpected areas, from the idle curiosities of academics to cholera outbreaks and even game shows.

Ananthaswamy’s background in electronics and computer engineering is evident throughout this book, for example in how he introduces several technical and mathematical concepts needed to grasp the power and limitations of machine learning. He begins with the US psychologist Frank Rosenblatt’s development in the late 1950s of the “perceptron” – a basic, single-layer learning algorithm with a binary output (“yes” or “no”). The author then shows how decades of innovation have led to deep neural networks with billions of parameters, such as ChatGPT, capable of giving nuanced and insightful responses.
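
Rosenblatt’s machine fits in a dozen lines of code. The sketch below is a generic illustration rather than an example from the book (the AND-gate training data and learning rate are arbitrary choices): a weighted sum of the inputs passes through a hard threshold, and the weights are nudged whenever the binary output is wrong.

```python
# Minimal Rosenblatt-style perceptron trained on the logical AND function (illustrative data).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]    # weights
b = 0.0           # bias
lr = 0.1          # learning rate

def predict(x):
    # weighted sum followed by a hard threshold: the binary "yes"/"no" output
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]    # nudge the weights towards the correct answer
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])   # -> [0, 0, 0, 1]
```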

Unusually for a popular-science book, Why Machines Learn includes quite a lot of mathematics and equations.

Unusually for a popular-science book, Why Machines Learn includes quite a lot of mathematics and equations as it explores how vectors, linear algebra, calculus, optimization theory, statistics and probability can be employed to engineer a synthetic brain. Given the complexity of these topics, Ananthaswamy is careful to provide frequent recaps of key concepts so that readers less familiar with these ideas don’t become lost or overwhelmed.

Fortunately, the author has the uncanny ability to answer questions with simple and illuminating examples just as they arise in the reader’s mind. Although this might seem to diminish some of the magic, it’s inspiring to see how such a powerful system can be constructed from deceptively simple components.

Gaming the system

For topics such as probability, this pedagogical approach is essential for anyone without a strong background in mathematics. For instance, the infamous Monty Hall problem is so counterintuitive that even some of the world’s most renowned mathematicians struggled to accept its solution. Indeed, as Ananthaswamy notes, Paul Erdős – one of the most prolific mathematicians of the 20th century – “reacted as if he’d been stung by a bee” when confronted with the answer.

Illustration of the Monty Hall problem, showing doors containing a car or goats, and what happens when each door is chosen

The dilemma is based on the classic US TV game show Let’s Make a Deal, hosted by Hall, in which you, the contestant, have a choice of three doors. Behind one is a brand-new sports car, while the other two doors conceal goats. After you pick a door (but don’t get to see what’s behind it) the host opens one of the other two doors to reveal a goat. The host then offers you the chance to switch your choice to the remaining, unopened door.

To maximize your chances of winning the car, you should always switch. It’s counterintuitive and controversial. Surely, if you’ve got two doors to pick from, then the odds of getting the car must be 50–50? In other words, why would you bother switching if there’s a car behind one door and a goat behind the other? The book goes on to explain how the logic behind this decision is a cornerstone of machine learning, bypassing human intuition, which is often flawed.
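
Readers who remain sceptical can settle the argument numerically. This short Monte Carlo simulation (the number of trials is an arbitrary choice) plays the game many times with each strategy and recovers the familiar one-third versus two-thirds split.

```python
# Monte Carlo check of the Monty Hall problem: switching wins about 2/3 of the time.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)        # door hiding the car
        pick = random.randrange(3)       # contestant's first choice
        # host opens a door that is neither the contestant's pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:                       # move to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stick: ", play(switch=False))     # ~0.33
print("switch:", play(switch=True))      # ~0.67
```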

Ananthaswamy shows that machine learning is a powerful method of analysis. He first demonstrates how data can be represented in an abstract, high-dimensional space, before explaining how collapsing the number of dimensions allows patterns in the data to be found. With the assistance of some elegant linear algebra, this process can be engineered so that the data is categorized most effectively, highlighting strong correlations and leading to more reliable performance.
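
A standard concrete example of this dimension-collapsing step is principal component analysis. The sketch below is a generic illustration, not an example taken from the book: it generates made-up three-dimensional data that really varies along one hidden direction, then uses the eigenvectors of the covariance matrix to find and keep only that direction.

```python
# Principal component analysis on made-up 3D data that really varies in one direction.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=200)                         # the single "real" degree of freedom
data = np.column_stack([hidden, 2 * hidden, -hidden]) + 0.05 * rng.normal(size=(200, 3))

centred = data - data.mean(axis=0)
cov = np.cov(centred, rowvar=False)                   # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)                # eigenvalues in ascending order

print("variance along each principal direction:", eigvals[::-1].round(3))
projected = centred @ eigvecs[:, -1]                  # coordinates along the strongest direction
print("collapsed data shape:", projected.shape)       # (200,): three dimensions reduced to one
```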

Cautious embrace

In our data-driven world, this remarkable capability of AI is something that should be embraced. However, like any data-analysis method, machine learning is prone to biases and errors. Why Machines Learn gives the reader an awareness and understanding of these shortcomings, allowing AI to be used more effectively.

This is particularly important as AI becomes mainstream. It’s common for people to mistakenly believe that it’s some all-powerful super brain, capable of completing any task with ease. This misconception can lead to the misuse of AI or the unfair perception of AI as overrated when it falls short of this unrealistic standard. Ananthaswamy gives his readers an appreciation of how machine learning works and, hence, how to use it appropriately, which may help combat the abuse of AI.

It’s evident that machines are far from achieving human-like intelligence

By exploring the fundamental principles of machine learning in such detail, the book makes it evident that machines are far from achieving human-like intelligence. While the secrets of the human brain remain elusive, Why Machines Learn demystifies the underlying mechanisms behind machine learning, which may lead to a better understanding of the learning process itself and to the development of improved AI.

This inevitably requires the reader to confront advanced and counterintuitive concepts in various branches of mathematics and logic, from collapsing dimensions to mind-bending games of chance. Those who invest the time and effort will reap the rewards that come from understanding a technology with the potential to revolutionize many aspects of our lives.

  • 2024 Penguin/Allen Lane 480pp £30hb/£16.99 ebook

The post Goats, sports cars and game shows: the unexpected science behind machine learning and AI appeared first on Physics World.

]]>
Opinion and reviews Matt Hodgson reviews Why Machines Learn by Anil Ananthaswamy https://physicsworld.com/wp-content/uploads/2024/07/2024-08-Hodgson_montyhall_feature.jpg newsletter
Quantum oscillators fall in synch even when classical ones don’t – but at a cost https://physicsworld.com/a/quantum-oscillators-fall-in-synch-even-when-classical-ones-dont-but-at-a-cost/ Wed, 14 Aug 2024 08:57:27 +0000 https://physicsworld.com/?p=116222 For large-scale systems, classical oscillators are more energy-efficient, say theorists

The post Quantum oscillators fall in synch even when classical ones don’t – but at a cost appeared first on Physics World.

]]>
The synchronized flashing of fireflies in summertime evokes feelings of marvel and magic towards nature. How do they do it without a choreographer running the show?

For some physicists, though, these natural fireworks also raise other questions. If the fireflies were quantum, they wonder, would they synchronize their flashing routines faster or slower than their classical counterparts?

Questions of this nature – about how quantum systems synchronize, the energetic costs they pay to do so, and how long it takes them to fall into lockstep – have long bedevilled physicists. Now a team of theorists in the US has begun to come up with answers.  Writing in Physical Review Letters, Maxwell Aifer and Sebastian Deffner of the University of Maryland Baltimore County (UMBC), together with Juzar Thingna of the University of Massachusetts, Lowell (UMass Lowell), present a new take on the energetic cost and time required to synchronize quantum systems. Among other findings, they conclude that quantum systems can synchronize in scenarios where such behaviour would be impossible classically.

How quantum springs synchronize

Studies of synchronization go back to the 1600s, when Christiaan Huygens documented that pendulums placed on a table eventually sway in unison. Huygens called this the “sympathy of pendulums”.

This apparent sympathy between systems – chirping crickets, flashing fireflies, the harmonious firing of pacemaker cells in our hearts – turns out to be ubiquitous in nature. And while it may look like magic, it ultimately stems from information exchanged between individual systems via communication pathways (such as the table in the case of Huygens’ pendulums) available in the shared environment.

“At its core, synchronization is about balance of forces,” Thingna says.

To understand how that balance works, imagine you have a bunch of systems moving in circles of different sizes. The radius of the circle corresponds to the amount of energy in that system.

At first, the systems may all be moving at different paces: some faster, others slower. To synchronize, the circles must interact in such a way that, gradually, the radii of all the circles and the paces of the systems become the same – meaning that bigger circles must “leak” energy and smaller circles gain it.

But synchronization is impressive only when it is resilient and robust. This means that if there are small disturbances – for example, if one of the systems is kicked out of its circle – the disturbed system should return to the radius and pace of the others.
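A standard classical cartoon of this balancing act is the Kuramoto model of coupled phase oscillators. It is not the quantum model studied by the UMBC-UMass Lowell team, but it illustrates how mutual nudging pulls systems with different natural paces into step. A minimal sketch:

```python
import numpy as np

# Kuramoto model: N oscillators with slightly different natural frequencies,
# each nudged towards the average phase with coupling strength K
rng = np.random.default_rng(1)
N, K, dt, steps = 100, 1.5, 0.01, 5000
omega = rng.normal(1.0, 0.1, N)           # natural paces
theta = rng.uniform(0, 2 * np.pi, N)      # random starting phases

for _ in range(steps):
    mean_field = np.mean(np.exp(1j * theta))
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += (omega + K * r * np.sin(psi - theta)) * dt

r_final = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r_final:.2f}")   # r near 1: in step; r near 0: incoherent
```

Turn the coupling K down far enough and the classical oscillators never lock – the kind of regime where, according to the new work, quantum oscillators can still manage it.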

This picture works for classical systems, but synchronization in the quantum regime is more complex. “The challenge lies in translating the classical concept of synchronization to the quantum world where trajectories are ill-defined concepts due to Heisenberg’s uncertainty principle,” explains Christopher Wächtler, a quantum physicist at the University of California, Berkeley, US who was not involved in this work.

A diagram showing the system of coupled pendulum-like oscillators and a pair of plots showing synchronization in the classical and quantum regimes

Taking inspiration from an experimental setup, the UMBC-UMass Lowell team created a model based on quantum oscillators or springs (a well-known quantum system) that interact with each other via a biased channel – an anti-Hermitian coupling in which one oscillator is favoured more than the other. This biased channel controls the flow of energy in and out of the individual springs. The oscillators also leak information by “talking” to a common thermal environment at a given temperature.

Thanks to this combination of a common thermal environment and a biased inter-system communication channel, the team was able to balance the information flow (that is, the communication between the oscillators and communication with the environment) and synchronize quantum systems in a way similar to how classical systems are synchronized.

The economics of sympathy

This approach is unusual because quantum synchronization research typically explores the quantum systems in their synchronized state after they have been coupled for a long time. In this case, however, the researchers focus on the time before the steady state has been reached, “which in my opinion is an important question to ask,” says Christoph Bruder, a physicist at the University of Basel, Switzerland who was not involved in the study.

To estimate the time it takes to synchronize, the UMBC-UMass Lowell researchers use quantum speed limits, which are a mathematical way of deriving the maximum time it takes a system to go from an initial state to a desired final state. They find that quantum oscillators synchronize the fastest when the conversations between oscillators do not leak – that is, the strength of the interaction between the oscillators outweighs the interaction with the environment.

The team also used ideas from quantum thermodynamics to identify a lower bound on the energetic cost of synchronization. This bound depends on the biased way in which the oscillators talk to each other.

But there is no free lunch.

While synchronizing a small number of quantum systems is energetically more efficient than doing the same for their classical counterparts, the researchers report that this advantage does not scale: when there are many systems, classical oscillators are more energy efficient than quantum ones. However, the researchers also found that their model system exhibits quantum synchronization over a wider range of interaction strengths than classical oscillators do, making synchronization in the quantum regime more resilient and robust.

Though the work is still theoretical at this point, Wächtler says that a minimal version of the team’s model could be “effectively implemented in a lab”. The team is keen to explore this further. “For us, this is the first stepping-stone towards this goal of how to make synchronization more practical,” Thingna says.

The post Quantum oscillators fall in synch even when classical ones don’t – but at a cost appeared first on Physics World.

]]>
Research update For large-scale systems, classical oscillators are more energy-efficient, say theorists https://physicsworld.com/wp-content/uploads/2024/08/fireflies-glowing-1172936455-Shutterstock_Fer-Gregory.jpg newsletter1
DUNE prototype detector records its first accelerator-produced neutrinos https://physicsworld.com/a/dune-prototype-detector-records-its-first-accelerator-produced-neutrinos/ Tue, 13 Aug 2024 14:04:06 +0000 https://physicsworld.com/?p=116215 Scientists will use the detector to study the interactions between antineutrinos and argon

The post DUNE prototype detector records its first accelerator-produced neutrinos appeared first on Physics World.

]]>
A prototype argon detector belonging to the Deep Underground Neutrino Experiment (DUNE) in the US has recorded its first accelerator-produced neutrinos. The detector, located at Fermilab near Chicago, was installed in February in the path of a neutrino beamline. After what Fermilab physicist Louise Suter calls a “truly momentous milestone”, the prototype device will now be used to study the interactions between antineutrinos and argon.

DUNE is part of the $1.5bn Long-Baseline Neutrino Facility (LBNF), which is designed to study the properties of neutrinos in unprecedented detail and examine the differences in behaviour between neutrinos and antineutrinos. Construction of LBNF/DUNE began in 2017 at the Sanford Underground Research Facility in South Dakota, which lies some 1300 km to the west of Fermilab. When complete, DUNE will measure the neutrinos generated by Fermilab’s accelerator complex.

Earlier this year excavation work was completed on the two huge underground spaces that will be home to DUNE. Lying 1.6 km below ground in a former gold mine, the spaces are some 150 m long and seven storeys tall and will house DUNE’s four neutrino detector tanks, each filled with 17 000 tonnes of liquid argon. DUNE will also feature a near-detector complex at Fermilab that will be used to analyse the intense neutrino beam from just 600 m away.

The “2×2 prototype” detector, so-called because it has four modules arranged in a square, records particle tracks with liquid-argon time-projection chambers, which are used to reconstruct a 3D picture of each neutrino interaction.

“It is fantastic to see this validation of the hard work put into designing, building and installing the detector,” says Suter, who co-ordinated installation of the modules.

It is hoped that the DUNE detectors will become operational by the end of 2028.

The post DUNE prototype detector records its first accelerator-produced neutrinos appeared first on Physics World.

]]>
News Scientists will use the detector to study the interactions between antineutrinos and argon https://physicsworld.com/wp-content/uploads/2024/08/DUNE.jpg
Spot the knot: using AI to untangle the topology of molecules https://physicsworld.com/a/spot-the-knot-using-ai-to-untangle-the-topology-of-molecules/ Tue, 13 Aug 2024 13:00:11 +0000 https://physicsworld.com/?p=115847 Solving a centuries-old mathematical puzzle could hold the key to understanding the function of many of the molecules of life

The post Spot the knot: using AI to untangle the topology of molecules appeared first on Physics World.

]]>
Any good sailor knows that the right choice of knot can mean the difference between life and death. Whether it hoists the sails or secures the anchor, a rope is only as good as the knot that’s tied in it. The same is true, on a much smaller scale, for many of the molecules that keep us alive.

Proteins are essential building blocks for all living things, and these long chains of amino acids form complex 3D shapes that allow molecules to fit together. For a long time, it was thought that while proteins can be highly tangled, they could not form knots under normal conditions, as this would prevent the proteins from being able to fold. But in the 1970s researchers found many topologically knotted proteins, whose native structures are arranged in the form of an open knot.

As it happens, despite proteins (and even DNA) being “open” curves, knots can still form in them and affect their function. Indeed, knotted proteins make up about 1% of the entries in the Protein Data Bank. Unlike a rope or string, each protein of this type has a characteristic knot (figure 1). The largest group of knotted proteins is the SPOUT family of enzymes (which form the second largest of seven structurally distinct groups of methyltransferase enzymes), all but one of which are knotted in a “trefoil” of three overlapping rings.

1 Knots for life

Some proteins form well-defined knotted structures, as shown above, where the lower image shows a simplified view of each molecule. The number below each image indicates the number of times the protein crosses itself, and the + and – indicate mirror images. The –3₁ and +3₁, for example, are mirror-image instances of the “trefoil” knot. Proteins form “open knots” because their two ends don’t join up. However, it is often still possible to define a knotted structure in the molecule.

This discovery raised many questions, such as how and why these knots form, what is the mechanism of their folding, and what role this might play on a functional level. There is some evidence that knotted proteins are more resistant to extreme temperatures, but scientists still do not know how abundant knots are in molecular structures or exactly how knotting affects their biological function.

The trouble is that when we try to apply what we know about knots to questions in biology and soft matter, we come up against a mathematical problem that’s been confounding scientists for over a century.

A tangled history

The origins of modern knot theory are often traced back to a famous experiment that was performed more than 150 years ago – not with ropes or string, but with smoke.

In 1867 Peter Guthrie Tait invited his friend and fellow physicist William Thomson (later Lord Kelvin) to travel from Glasgow to Edinburgh to witness a demonstration where he generated pairs of smoke rings. To Kelvin’s surprise, these rings were remarkably stable, travelling across the room and even bouncing off each other as if they were made of rubber. A smoke ring is a “vortex ring” in which the aerosols and particulates are rotating in small concentric circles, and this motion gives the ring its stability.

At the time, it was widely believed that the universe was pervaded by a space-filling substance dubbed “aether”, through which gravitational and electromagnetic radiation propagated. Kelvin reasoned that atoms might be made from stable vortices, like smoke rings, in this aether. He further argued that knots tied in aether vortex rings could account for the different chemical elements.

The vortex theory of atoms was incorrect, but knot theory continues to this day as a branch of mathematics

Tait was intrigued by Kelvin’s theory. Over a period of 25 years, and with the help of the Church of England minister Thomas Kirkman, American mathematician Charles Little and James Clerk Maxwell, Tait produced a table of 251 knots with up to 10 crossings (figure 2).  The vortex theory of atoms was incorrect, but knot theory continues to this day as a branch of mathematics.

2 Order and disorder

The first seven orders of knottiness

Peter Guthrie Tait and other early knot theorists spent years compiling a comprehensive list of knots. The above image is extracted from their table of knots up to seven crossings – “the first seven orders of knottiness”.

Spot a knot

For Tait and his fellow theorists, the classification of knots was painstaking work. Every time a new knot was proposed, they had to check that it was unique using drawings and geometric intuition. Tait himself wrote that “though I have grouped together many widely different but equivalent forms, I cannot be absolutely certain that all those groups are essentially different from one another”. Indeed, in 1974 Kenneth Perko showed that two entries in the original table are actually the same knot – these are now known as the “Perko pair” (Proc. Amer. Math. Soc. 45 262).

If you need any more convincing, my student Djordje Mihajlovic has developed an online game called “Spot a Knot” where the goal is to spot equivalent knots from pictures (figure 3). Even after years of researching knots, I often get it wrong. To earn a spot in the table, a knot must have a unique topology, meaning that it cannot be deformed into any other known knot without being broken. As the Perko pair and Mihajlovic’s game show, proving that two knots are different is easier said than done. Remember that topology studies the properties of spaces that do not change if they are deformed smoothly; to a topologist, a mug is equivalent to a doughnut because one can be massaged into the other without losing the inner hole.

3 Brain teaser

Figure 3

To illustrate the difficulty of identifying knots, Djordje Mihajlovic – a PhD student at the University of Edinburgh – developed an online game called “Spot a Knot”. One question is reproduced above. Does the top image correspond to a, b, c, d or e?

As scientists learned more about the structure of the atom, the vortex atom model was gradually abandoned. A final blow came in 1913 when Henry Moseley showed that chemical elements are differentiated not by their topology but by the number of protons in the nucleus.

In knot theory, quantities that describe the properties of knots are called “invariants”. The dream of knot theorists is to find a quantity like the proton number that can classify any knot based on its topology. Such a “complete invariant” would yield a unique value for every unique knot, and wouldn’t change if the knot were smoothly deformed.

A recipe for such a topological invariant could be something like this: “Walk along the knot and label each of the n crossings with numbers 1, 2, 3, …, 2n (you will traverse the knot twice). If the label is even and the line is an overcrossing, then change the sign of the label to minus (figure 4). At the end, each crossing will be labelled by a pair of integers, one even and one odd. The series of even integers is a code for the knot.” This recipe is called the Dowker–Thistlethwaite code, first proposed in 1983 (Topology and its Applications 16 19) (figure 5).
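To make the recipe concrete, here is a short sketch (our own illustration, with a hypothetical traversal of a trefoil-like diagram as input) that applies those labelling rules and reads off the even-integer code:

```python
def dowker_thistlethwaite(traversal):
    """traversal: one (crossing_id, is_overcrossing) event per pass through a crossing,
    in the order the crossings are met while walking around the knot
    (so each crossing appears exactly twice)."""
    pairs = {}
    for label, (crossing, is_over) in enumerate(traversal, start=1):
        signed = -label if (label % 2 == 0 and is_over) else label   # the sign rule
        pairs.setdefault(crossing, []).append(signed)

    code = []
    for a, b in pairs.values():
        odd, even = (a, b) if abs(a) % 2 == 1 else (b, a)
        code.append((odd, even))
    code.sort(key=lambda pair: pair[0])        # order by the odd labels 1, 3, 5, ...
    return [even for _, even in code]

# an assumed traversal of a trefoil diagram, alternating under and over:
trefoil = [("a", False), ("b", True), ("c", False),
           ("a", True), ("b", False), ("c", True)]
print(dowker_thistlethwaite(trefoil))          # [-4, -6, -2] for this over/under pattern
```

Flipping which strand passes over at each crossing gives [4, 6, 2] instead; the sign convention depends on how the diagram is drawn, but it is the sequence of even integers that encodes the knot.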

The Dowker–Thistlethwaite code can classify many simple knots, but like every other method that’s been proposed, it isn’t a complete invariant. The first knot invariant was proposed in 1928 by James W Alexander and called the Alexander polynomial. Since then, many others have been developed, but for each one, a case has been found where it fails to make a unique classification.

Taking a walk

The Alexander polynomial belongs to the family of so-called “algebraic invariants”. It is computed by constructing a matrix with as many rows and columns as there are crossings in the knot, and taking its determinant. Algebraic invariants are constructed from a 2D projection of the knot. This is a bit like a shadow, but one where we can discern which part of the loop is on top each time it crosses itself.

Soft-matter physicists like myself, however, want to classify the knots in molecules like proteins and DNA, which are 3D and constantly jostled by thermal energy. Reducing these molecules to 2D projections erases spatial features that may be crucial to their function.

An attractive alternative for characterizing molecules is “geometric invariants”. These are calculated by traversing the knot in 3D and computing some geometric property, such as the curvature, along the route.

One such invariant that I am fond of is the “writhe”, which was introduced by Tait. Writhe can be measured on a 2D projection by counting the “over” and “under” crossings and subtracting one from the other (figure 4b).

4 Over and under

Figure 4

One way to tell the difference between knots is to measure the “writhe”, which quantifies the amount of twisting. (a) Each time the knot crosses itself, the crossing can be characterized as either an overcrossing (left) or an undercrossing (right). The writhe is calculated by subtracting the number of undercrossings from the number of overcrossings.

(b) How the writhe is calculated for two knots – the cinquefoil knot (left), which has a writhe of +5, and the figure-eight knot (right), which has a writhe of 0.

(c) The writhe can also be calculated as a geometric quantity on a 3D molecular knot such as a protein. The geometric writhe can be calculated over the entire knot or as a local quantity between short, adjacent strands. A high value of the “local writhe” indicates that the strands are entangled with each other. Davide Michieletto and colleagues showed that a neural network trained on the local writhe characterizes knot topology with high accuracy.

However, writhe can also be computed as a geometric quantity. Imagine walking along a 3D knot, such as a protein, and at each step writing down an estimate of the writhe by counting the crossings you can see. At the end of your journey, the average of these numbers will yield the true value of the writhe. Unfortunately, writhe isn’t a complete invariant. In fact, like its algebraic counterparts, no geometric invariant has ever been proved to uniquely classify all knots.
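For a closed 3D curve, that “walk along the knot” can be written as a Gauss double integral over pairs of points, which is easy to approximate numerically. The sketch below (our own crude discretization, not the analysis code used in the research) estimates the writhe of a polygonal curve:

```python
import numpy as np

def writhe(points):
    """Estimate the writhe of a closed curve given as an (N, 3) array of points,
    using a discretized Gauss double integral over pairs of segment midpoints."""
    segs = np.roll(points, -1, axis=0) - points    # segments joining consecutive points
    mids = points + 0.5 * segs                     # segment midpoints
    total, n = 0.0, len(points)
    for i in range(n):
        for j in range(i + 1, n):
            r = mids[i] - mids[j]
            d = np.linalg.norm(r)
            if d > 1e-9:
                total += np.dot(np.cross(segs[i], segs[j]), r) / d**3
    return total / (2 * np.pi)     # each unordered pair stands in for two ordered ones

t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
trefoil = np.stack([np.sin(t) + 2 * np.sin(2 * t),
                    np.cos(t) - 2 * np.cos(2 * t),
                    -np.sin(3 * t)], axis=1)
print(round(writhe(circle), 2), round(writhe(trefoil), 2))
# a flat circle has writhe ≈ 0.0; the knotted curve returns a clearly non-zero value
```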

In 2021 Google DeepMind’s AlphaFold artificial-intelligence programme solved a problem that had been evading scientists for decades – how to predict a protein’s structure from its amino acid sequence (Nature 596 583). The function of proteins depends on their 3D structure, so AlphaFold is a powerful tool for drug discovery and the study of disease.

The question we asked ourselves was: could AI do the same for the knot invariant problem? 

Wriggle and writhe

Using AI to classify knots has been explored by previous researchers, most recently by Olafs Vandans and colleagues of the City University of Hong Kong in 2020 (Phys. Rev. E 101 022502) and Anna Braghetto of the University of Padova and team in 2023 (Macromolecules 56 2899). In those studies, they treated the different knots like strings of beads and trained a neural network to identify them by giving it the Cartesian coordinates and, in the latter case, the vector, distance and angles between the beads.

5 Encoding knots

Figure 5

The Dowker–Thistlethwaite notation is a knot invariant first proposed in 1983. This method assigns a sequence of integers to a knot by traversing it twice and assigning a number to each crossing, as shown in the image. The final sequence characterizes the knot.

These researchers achieved high accuracies, but only for the five simplest knots. We wanted to extend this to much more complicated topologies, while also simplifying the neural network architecture and using a smaller training dataset.

To do this we took inspiration from nature. In our bodies, knots in DNA are untangled by specialist enzymes called topoisomerases. These enzymes cut and reattach DNA strands and they can effectively smooth out knots despite being about a thousand times smaller than a DNA molecule.

We hypothesized that the topoisomerases can sense some local geometric property that allows them to locate the most tightly knotted part of the DNA molecule. We tried to do this ourselves using various quantities including the density and the curvature. In the end our results led back to the beginning – to Tait and his geometric writhe.

We decided that giving our AI the local writhe would give it the best chance to successfully identify complex knots

As well as calculating writhe over an entire knot, we can also measure it as a local quantity that tells us how much segment x is entangled with nearby segment y (figure 4c). We found that local writhe is a remarkably effective way to locate knotted segments in long, looping molecules (ACS Polymers Au 2 341). Based on this result, we decided that giving our AI the local writhe would give it the best chance to successfully identify complex knots.

Armed with our theory, we began building a neural network to test it. To start, we generated a training dataset by simulating the thermal motion of the five simplest knots, extracting tens of thousands of conformations (figure 6a).

We then trained two neural networks: one using the Cartesian coordinates of the knots and one using the local writhe. In each case, we supervised the AI, and used a subset of our training dataset to tell the neural networks what type each of the knots was. To test our method we asked the neural networks to classify conformations of these simple knots that they hadn’t seen before.

When a simple neural network was trained on the Cartesian coordinates, it made a correct categorization only four times out of five, similar to what Vandans and Braghetto found. This is probably better than the score most of us would get in the Spot a Knot game, but it’s still far from perfect.

However, when the neural network was trained on the local writhe, the difference was staggering: it could correctly classify the knots with more than 99.9% accuracy.
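To give a flavour of what such a pipeline looks like – emphatically not the authors’ actual code, architecture or descriptor – the sketch below builds noisy closed curves (circles and parametric trefoils), turns each one into a crude per-bead “local writhe” profile using the Gauss integrand, and trains a small neural network to tell them apart:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def local_writhe_profile(points):
    """Rough per-bead entanglement score built from the Gauss writhe integrand,
    standing in for the local-writhe descriptor used in the published work."""
    segs = np.roll(points, -1, axis=0) - points
    mids = points + 0.5 * segs
    r = mids[:, None, :] - mids[None, :, :]                 # pairwise separation vectors
    d3 = np.linalg.norm(r, axis=-1) ** 3
    np.fill_diagonal(d3, np.inf)                            # ignore self-pairs
    cross = np.cross(segs[:, None, :], segs[None, :, :])    # pairwise segment cross products
    return (np.einsum("ijk,ijk->ij", cross, r) / d3).sum(axis=1)

def curve(knotted, rng, n=120):
    """A noisy closed curve: a parametric trefoil if knotted, otherwise a circle."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    if knotted:
        pts = np.stack([np.sin(t) + 2 * np.sin(2 * t),
                        np.cos(t) - 2 * np.cos(2 * t),
                        -np.sin(3 * t)], axis=1)
    else:
        pts = np.stack([3 * np.cos(t), 3 * np.sin(t), np.zeros(n)], axis=1)
    return pts + 0.05 * rng.normal(size=pts.shape)

rng = np.random.default_rng(0)
labels = np.array([0] * 100 + [1] * 100)           # 0 = unknot, 1 = trefoil
X = np.array([local_writhe_profile(curve(bool(k), rng)) for k in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy ≈ {clf.score(X_te, y_te):.2f}")
```

Telling a circle from a trefoil is, of course, a far easier task than those tackled in the research, but the workflow has the same shape: simulate conformations, compute a writhe-based feature and let a classifier do the rest.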

Tougher challenges

Though I was surprised by this result, the identification of the five simplest knots is relatively trivial, and can be achieved using existing invariants (or an extremely eagle-eyed Spot a Knot player).

We decided to give the neural network a much trickier challenge. This time it would only have to classify three knots rather than five, but we had chosen them carefully: the Conway knot, the Kinoshita–Terasaka (KT) knot and the unknot – the simplest of all knots. The first two have 11 crossings, and are “mutants” of each other because they are identical except in one region where the knot is “flipped”. They share many knot invariants, and they also share some invariants with the unknot.

6 Spot the difference

Figure 6

A complete knot invariant shouldn’t change when a knot is smoothly deformed, but should return a different result for topologically distinct structures. Do the two pictures in a show the same knot? It’s often difficult for human intuition to tell knots apart. In fact, the two pictures show two slightly different structures – the Conway and Kinoshita–Terasaka knots. Because it’s difficult to tell them apart, these two knots can be used to test a knot-characterization neural network.

The images in b show different configurations of two knots – the 5₁, or cinquefoil, knot (above) and the 7₂ knot (below). In Davide Michieletto and colleagues’ work on neural networks, the cinquefoil was part of the first training dataset and the 7₂ was included in the larger dataset.

What we discovered is that the Conway and KT knots were indistinguishable for a neural network trained on Cartesian coordinates but they could be identified 99.9% of the time by the neural network trained on the local writhe.

The final test was to apply this training to a much larger pool of knots. We ran simulations of 250 types of knots, with up to 10 crossings (figure 6b). When the neural network was trained with the Cartesian coordinates it made a correct classification only one time out of five. By contrast, our best local-writhe-trained neural network could classify all 250 knots in a matter of seconds with 95% accuracy, much better than any other algorithm or single topological invariant (Soft Matter 20 71).

A final twist

Without knowing anything about knots or knot theory, our neural network had taught itself to do something that has long evaded human intuition. In fact, we are still working to open the “black box” and understand what exactly it discovered.

We have found that to distinguish the five simplest knots, the neural network takes every pair of points on the knot and multiplies the writhe at the two points together. What’s intriguing is that this quantity is equivalent to an existing invariant called the “Vassiliev invariant of order two”.

Vassiliev invariants are computed by multiplying pairs, triplets, quadruplets, up to n-tuples of entries of the local writhe matrix. Incidentally, the Vassiliev invariant of order two is also the coefficient of the quadratic term of the Conway polynomial, a close relative of the Alexander polynomial we met earlier. It’s been proposed, though never proved, that the complete set of Vassiliev invariants, which can be computed as an integral, is the long-searched-for complete invariant.

We were excited to find that as it’s presented with more complex knots, the neural network adapts by computing Vassiliev invariants of higher order

We were therefore excited to find that as it’s presented with more complex knots, the neural network adapts by computing Vassiliev invariants of higher order. For instance, to uniquely classify the first five knots, the neural network requires only the degree two Vassiliev invariant. But for the 250-knot dataset, it may compute the Vassiliev invariants up to order three or four.

Geometric and algebraic invariants are computed using very different mathematics, so it’s exciting that AI can discover connections between them, and this brings us a step closer to discovering a complete invariant.

Knotting else matters

In only three years, AlphaFold has generated millions of proteins, most of which have yet to be fully studied. In 2023 a group led by Joanna Sulkowska of the University of Warsaw predicted that up to 2% of human proteins generated by AlphaFold are knotted, with the most complex knot found having six crossings (Protein Sci. 32 e4631). The year before, Peter Virnau of the Johannes Gutenberg University Mainz discovered a protein knot with seven crossings in the AlphaFold2 dataset (Protein Sci. 31 e4380). This protein has never been observed experimentally, so it’s possible that even more complex knots are out there.

Knots don’t crop up only in biology; knotted topologies have also been found to influence the thermodynamic and material properties of ice and hydrogels, meaning that in the future we may use topology to design new materials. We need powerful methods to identify the structural fingerprints of knots in molecules and materials, and we hope that our findings will inform this search. Knotting really does matter.

In 2004 three researchers in Canada used their university’s computing cluster to extend the table of knots, first compiled by Tait, up to 19 crossings, identifying more than six billion unique structures (Journal of Knot Theory and Its Ramifications 13 57). Having taken 25 years to create his list, Tait would probably have been shocked to learn that a century later, a machine would be able to extend his work by more than five orders of magnitude, in just a few days.

The biggest outstanding challenge in knot theory remains the search for the elusive complete invariant. Now that we are enabled by AI, the next step forward might take us equally by surprise.

The post Spot the knot: using AI to untangle the topology of molecules appeared first on Physics World.

]]>
Feature Solving a centuries-old mathematical puzzle could hold the key to understanding the function of many of the molecules of life https://physicsworld.com/wp-content/uploads/2024/08/24-08-Michieletto-knot-COLOUR-shutterstock_527071531.png newsletter
CERN at 70: how the Higgs hunt elevated particle physics to Hollywood status https://physicsworld.com/a/cern-at-70-how-the-higgs-hunt-elevated-particle-physics-to-hollywood-status/ Tue, 13 Aug 2024 13:00:00 +0000 https://physicsworld.com/?p=115949 Peering behind the comms curtain at the world's most famous particle physics lab

The post CERN at 70: how the Higgs hunt elevated particle physics to Hollywood status appeared first on Physics World.

]]>
When former physicist James Gillies sat down for dinner in 2009 with actors Tom Hanks and Ayelet Zurer, joined by legendary director Ron Howard, he could scarcely believe the turn of events. Gillies was the head of communications at CERN, and the Hollywood trio were in town for the launch of Angels & Demons – the blockbuster film partly set at CERN with antimatter central to its plot, based on the Dan Brown novel.

With CERN turning 70 this year, Gillies joins the Physics World Stories podcast to reflect on how his team handled unprecedented global interest in the Large Hadron Collider (LHC) and the hunt for the Higgs boson. Alongside the highs, the CERN comms team also had to deal with the lows. Not least, the electrical fault that put the LHC out of action for 18 months shortly after its switch-on. Or figuring out a way to engage with the conspiracy theory that particle collisions in the LHC would somehow destroy the Earth.

Spoiler alert: the planet survived. And the Higgs boson discovery was announced in that famous 2012 seminar, which saw tears drop from the eyes of Peter Higgs – the British theorist who had predicted the particle in 1964. Our other guest on the podcast, Achintya Rao, describes how excitement among CERN scientists became increasingly palpable in the days leading to the announcement. Rao was working in the comms team within CMS, one of the two LHC detectors searching independently for the Higgs.

Could particle physics ever capture the public imagination in the same way again?

Discover more by reading the feature “Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world” by James Gillies.

The post CERN at 70: how the Higgs hunt elevated particle physics to Hollywood status appeared first on Physics World.

]]>
Peering behind the comms curtain at the world's most famous particle physics lab Peering behind the comms curtain at the world's most famous particle physics lab Physics World CERN at 70: how the Higgs hunt elevated particle physics to Hollywood status full false 59:27 Podcasts Peering behind the comms curtain at the world's most famous particle physics lab https://physicsworld.com/wp-content/uploads/2024/08/Angels-and-Demons_Home-scaled.jpg newsletter
Photonic orbitals shape up https://physicsworld.com/a/photonic-orbitals-shape-up/ Tue, 13 Aug 2024 09:42:30 +0000 https://physicsworld.com/?p=115988 The behaviour of photons confined inside three-dimensional cavity superlattices is much more complex than that of electrons in conventional solid-state materials

The post Photonic orbitals shape up appeared first on Physics World.

]]>
Photons in arrays of nanometre-sized structures exhibit more complex behaviour than electrons in conventional solid-state materials. Though the two systems are sometimes treated as analogous, scientists at the University of Twente in the Netherlands discovered variations in the shape of the photons’ orbitals. These variations, they say, could be exploited when designing advanced optical devices for quantum circuits and nanosensors.

In solid-state materials, electrons are largely confined to regions of space around atomic nuclei known as orbitals. Additional electrons stack up in these orbitals in bands of increasing energy, and the scientists expected to find similar behaviour in photons.  “It has been known for some time that photonic materials are similar to standard electronic matter in many ways and can be described using energy bands and orbitals, too,” says Marek Kozon, a theorist and mathematician who participated in the study as part of his PhD in the Complex Photonic Systems (COPS) lab at Twente.

“Similar” does not mean “same”, however. “We have now discovered that orbitals in which photons are confined are significantly more varied in shape than electronic orbitals,” Kozon says. This is important, he says, because the shape of electronic orbitals influences materials’ chemical properties – something that is apparent in the Periodic Table of the Elements, which groups elements with similar orbital structures together. Additional variations in the shape of photonic orbitals could also create properties not achievable in electronic materials.

Boring electrons, exciting photons

The comparatively “boring” behaviour of electrons stems from the fact that they always orbit the nucleus in regions with sphere-like shapes, explains Kozon, who is now at the single-photon detector company Pixel Photonics in Germany. Photonic materials, in contrast, can be designed with much more freedom.

In the latest work, the Twente researchers used numerical computations to study how photons behave when they are confined in a three-dimensional nanostructure known as an inverse woodpile superlattice. This superlattice is a photonic crystal that contains periodic defects with a radius that differs from that of the pores in the underlying structure. The researchers adopted this design for two reasons, Kozon explains. The first is that photonic states inside the defects are insulated from their environment, making them easier to study. The second is that 3D inverse woodpile superlattices are relevant to experiments being carried out by colleagues in the COPS lab.

The team’s original motivation, Kozon continues, was to better understand how light is confined in these structures. “The study turned out to be significantly more complicated than we expected,” he says. “We produced several terabytes of data and developed new analysis methods, including scaling and machine learning, to evaluate the sheer amount of information we had gathered. We then investigated in more detail the superlattice parameters that the analysis flagged up as the most interesting.”

Applying the scaling techniques, for example, created an unexpected issue. While scaling theories usually work well for very large systems, which in this case would mean very large periodicities (or lattice constants), Kozon notes that “our system is precisely the opposite because it has a small periodicity. We were thus not able to calculate how light behaves in it.”

Optimally confining light

The team solved this problem by developing a unique clustering method that uses unsupervised machine learning to analyse the data. Thanks to these analyses, the researchers now know which types of structures can optimally confine light in an inverse woodpile superlattice. Conversely, they can identify any deviations from these ideal structures by comparing experimental observations with their – now vast – database.

And that is not all: the team also analysed where energy is concentrated in the photonic crystal, making it possible to determine which parameters allow the greatest concentration of energy in a small volume of the structure. “This is extremely important for so-called cavity-quantum-electrodynamics (QED) applications in which we force light to interact with matter and, for example, to control the emission of light sources or even create exotic states of mixed light and matter,” Kozon tells Physics World. “This finding could help advance applications in efficient lighting, quantum computing or sensitive photonic sensors.”

The Twente researchers are now fabricating real 3D superlattices thanks to the knowledge they have gained. They report their present work in Physical Review B.

The post Photonic orbitals shape up appeared first on Physics World.

]]>
Research update The behaviour of photons confined inside three-dimensional cavity superlattices is much more complex than that of electrons in conventional solid-state materials https://physicsworld.com/wp-content/uploads/2024/08/vxdanmmuuevytjjkvo6guw.jpg
Liquid water could abound in Martian crust, seismic study suggests https://physicsworld.com/a/liquid-water-could-abound-in-martian-crust-seismic-study-suggests/ Mon, 12 Aug 2024 19:00:52 +0000 https://physicsworld.com/?p=115986 Reservoir could harbour microbial life

The post Liquid water could abound in Martian crust, seismic study suggests appeared first on Physics World.

]]>
An ocean’s worth of liquid water could be trapped within the cracks of fractured igneous rocks deep within the Martian crust – according to a trio of researchers in the US. They have analysed seismic data gathered by NASA’s InSight Lander and their results could explain the fate of some of the liquid water that is believed to have existed on the Martian surface in the distant past.

Mars’ surface carries many traces of its watery past including remnants of river channels, deltas, and lake deposits. As a result, scientists are confident that lakes, rivers, and oceans of liquid water were once common on the Red Planet in the distant past.

Evidence also suggests that about 3–4 billion years ago, Mars’ atmosphere was gradually lost to space, and its surface dried up. While some water remains locked away in Martian ice caps, most of it would have either escaped into space with the rest of the atmosphere, or filtered down into porous rocks in the crust, where it could remain to this day. So far, scientists are uncertain as to how much of this water is held within the crust, and how deeply it could be sequestered.

Seismic insight

This latest research was done by Michael Manga at the University of California, Berkeley along with Vashan Wright and Matthias Morzfeld at the University of California, San Diego. The trio searched for buried water by analysing data collected by the InSight Lander, which probed the Martian interior in 2018–2022. To gather information about the planet’s crust, InSight’s SEIS instrument detected seismic waves reverberating through the planet, originating from sources including marsquakes and meteor impacts.

As they travel through the Martian interior, these waves change speed and direction at boundaries between different materials in the crust. This means that when measured by SEIS, seismic waves originating from the same source can be detected at different times, depending on the paths they took to reach the probe.

“The speed at which seismic waves travel through rocks of different densities depends on their composition, pore space, and what fills the pore space – either gas, water, or ice,” Manga explains. By analysing the differing arrival times of seismic waves reaching the probe from the same sources, researchers can gather useful information about the composition of the planet’s interior.

To interpret InSight’s seismic data, Manga and colleagues combined its measurements with the latest rock physics models and probabilistic analysis. They were able to identify the combinations of rock composition, water saturation, porosity, and pore shape within the Martian crust that could best explain InSight’s measurements.
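The published analysis relies on detailed rock-physics models and a probabilistic inversion, but the basic sensitivity is easy to illustrate. The toy calculation below (our own illustration, using the simple Wyllie time-average relation and made-up numbers, not the team’s model or values) shows how much the P-wave speed of a porous rock shifts when its cracks hold water rather than gas:

```python
def wyllie_velocity(v_matrix, v_fluid, porosity):
    """Wyllie time-average estimate of P-wave speed (km/s) for a fluid-filled porous rock."""
    return 1.0 / ((1.0 - porosity) / v_matrix + porosity / v_fluid)

v_rock = 5.5   # km/s, an illustrative value for fractured igneous rock
for fill, v_fluid in [("gas-filled cracks", 0.3), ("water-filled cracks", 1.5)]:
    v = wyllie_velocity(v_rock, v_fluid, porosity=0.2)   # porosity is also illustrative
    print(f"{fill}: ~{v:.1f} km/s")
# water-filled cracks transmit P-waves markedly faster than gas-filled ones,
# which is the kind of contrast a seismic inversion can latch on to
```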

Large reservoir

“We identified a large reservoir of liquid water,” Manga says. “The observations on Mars are best explained by having cracks in the mid-crust that are filled with liquid water.”

The researchers reckon that this reservoir is sequestered about 11.5–20 km beneath the surface and contains enough water to cover the Martian surface in a liquid ocean 1–2 km deep. This section of the crust is believed to comprise fractured igneous rock, formed through the cooling and solidification of magma.

The team hopes that their results could provide fresh insights into the fate of the liquid water that once dominated Mars’ surface. “Understanding the water cycle and how much water is present is critical for understanding the evolution of Mars’ climate, surface, and interior,” Manga says.

The team’s discoveries could help identify potentially habitable environments hidden deep within the Martian crust where microbial communities could thrive today, or in the past.

“On Earth, we see life deep underground,” Manga explains. “This does not necessarily mean there is also life on Mars, but at least there are environments that could possibly be habitable.”

The research is described in PNAS.

The post Liquid water could abound in Martian crust, seismic study suggests appeared first on Physics World.

]]>
Research update Reservoir could harbour microbial life https://physicsworld.com/wp-content/uploads/2024/08/12-8-24-InSight-Lander.jpg newsletter1
Had a leak from your science facility? Here’s how to deal with the problem https://physicsworld.com/a/had-a-leak-from-your-science-facility-heres-how-to-deal-with-the-problem/ Mon, 12 Aug 2024 10:32:45 +0000 https://physicsworld.com/?p=115794 Robert P Crease explains how Fermilab navigated an accidental leak of tritium

The post Had a leak from your science facility? Here’s how to deal with the problem appeared first on Physics World.

]]>
Small leaks of radioactive material can be the death knell for large scientific facilities. It’s happened twice already. Following releases of non-hazardous amounts of tritium, the Brookhaven National Laboratory (BNL) was forced to shut its High Flux Beam Reactor (HFBR) in 1997, while the Lawrence Berkeley National Laboratory (LBNL) had to close its National Tritium Labeling Facility in 2001.

Fortunately, things don’t always turn out badly. Consider the Fermi National Accelerator Laboratory (Fermilab) near Chicago, which has for many decades been America’s premier high-energy physics research facility. In 2005, an experiment there also leaked tritium, but the way the lab handled the situation meant that nothing had to close. Thanks to a grant from the National Science Foundation, I’ve been trying to find out why such successes happen.

Running on grace

Fermilab, which opened in 1971, has had a hugely successful history. But its relationship with the local community got off to a shaky start. In 1967, to acquire land for the lab, the State of Illinois used a US legal manoeuvre called “eminent domain” to displace homeowners, angering neighbours. More trouble came in 1988, when the US Department of Energy (DOE) considered Fermilab as a possible site for the 87 km circumference Superconducting Supercollider (SSC), which would require acquiring more land.

Some locals formed a protest group called CATCH (Citizens Against The Collider Here). It was an aggressive organization whose members accused Illinois officials of being “secretive, arrogant, and insensitive”, and of wanting to saddle the area with radiation, traffic and lower property values. While Illinois officials were making the bid to host the SSC, the lab was the focus of protests. The controversy ended when the DOE chose to site the machine in Waxahachie, Texas. (The SSC was cancelled in 1993, incomplete.)

Aware of the local anger, Fermilab decided to revamp its public relations. In 1989, it replaced its Office of Public Information with a “Department of Public Affairs” reporting to the lab director. Judy Jackson, who became the department’s head, sought professional consultants, and organized a diverse group of  community members with different backgrounds, including a CATCH founder, to examine Fermilab’s community engagement practices.

Brookhaven’s closure of the HFBR in 1997 was a wake-up call for US labs, including Fermilab itself. Aware that the reactor had been shut by a cocktail of politics, activism and media scare stories, the DOE organized a “Lessons learned” conference in Gaithersburg, Maryland, a year later. When Jackson came to the podium her first slide read simply: “Brookhaven’s experience: There but for the grace of God…”

Then, in 2005, Fermilab discovered that one of its own experiments leaked tritium.

Tritium tale

All accelerators produce tritium in particle collisions at target areas or beam dumps. Much dissipates in air, though some replaces ordinary hydrogen atoms to make tritiated water, which is hard to control. Geographically, Fermilab is fortunate, being located over almost impermeable clay. Compacted and thick, the clay’s a nuisance for gardeners and construction crews but a godsend to Fermilab, for bathtub-like structures built in it easily contain the tritium.

The target area of one experimental site – Neutrinos at the Main Injector (NuMI) – was dug in bedrock beneath the clay. Then, during routine environmental monitoring in November 2005, Fermilab staff found a (barely) measurable amount of tritium in a creek that flowed offsite. Tritium from NuMI was mixing with unexpectedly high amounts of water vapour seeping through the bedrock, creating tritiated water that went into a sump. This was being pumped out and making its way into surface water.

The idea was that employees, neighbours, the media, local officials and groups would all be informed simultaneously, so that everybody would hear the news first from Fermilab rather than other sources.

Jackson’s department drew up a plan that would see letters delivered by hand to community members from lab director Pier Oddone, who would also pen an article in the Friday 9 December edition of the daily online newspaper Fermilab Today. The idea was that employees, neighbours, the media, local officials and groups would all be informed simultaneously, so that everybody would first hear the news from Fermilab rather than other sources.

Disaster struck when a sudden snowstorm threatened to delay the letters from reaching recipients. But the lab sent staff out anyway, knowing that local residents simply had to hear of the plan before that issue of Fermilab Today. When published, it appeared as normal, with a story about a “Toys for Tots” Christmas collection, a list of lab events and the cafeteria menu (including roasted-veggie panini).

Oddone’s “Director’s corner” column was in its usual spot on the right, but attentive readers would have noticed that it had appeared a few days early (it normally came out on a Tuesday). As well as mentioning the letter that had been hand-delivered to the community, Oddone said that there had been “a small tritium release” as a result of “normal accelerator operations”, but that it was “well below federal drinking water standards”.

His column provided a link to a webpage with more information, which listed Jackson’s office phone number and said it would link to any subsequent media coverage of the episode. Oddone’s message seemed to be appropriate publicity about a finding that posed no hazard to health or the environment; it was a communication essentially saying: “Here’s something that’s happening at Fermilab.”

Fermilab family fair

For years Jackson marvelled at how smoothly everything turned out. Politicians were supportive, the media fair and community members were largely appreciative of the extent to which Fermilab had gone to keep them informed. “Don’t try this at home,” she’d tell people, meaning don’t try to muddle through without having a plan drawn up with the help of a consultant. “If you do it wrong, it’s worse than not doing it at all.”

The critical point

Fermilab’s successful navigation of the unexpected tritium emission cannot be traced to any one factor. But two lessons stand out from the 10 or so other episodes I’ve found around that time when major research instruments leaked tritium. One is the importance of having a strong community group that wasn’t just a token effort but a serious exercise that involved local activists. The group discouraged activist sharpshooting and political posturing, thereby allowing genuine dialogue about issues of concern.

A second lesson is what I call “quantum of response”, by which I mean that the size of one’s response must be appropriate to the threat rather than over- or underplaying it. Back in the late 1990s, the DOE had responded to the Brookhaven leak with dramatic measures – press conferences were held, statements issued and, incredibly, the lab’s contractor was fired. Instead of reassuring community members, those actions terrified many.

It’s insane to fire a contractor that had been successful for half a century because of something that posed no threat to health or the environment. All it did was suggest that something far worse was happening that the DOE wasn’t talking about. One Brookhaven activist called the leak a “canary” presaging the lab’s admission of more environmental catastrophes.

The Fermilab lesson is two decades old now. The rise of social media since then makes it easy for inflammatory messages to be promoted and amplified, rallying and consolidating frightened people in ways that will be harder to address. Moreover, tritium leaks are only one kind of episode that can spark community concerns at research laboratories.

Sometimes accelerator beams have gone awry, or experimental stations have malfunctioned in a way that releases radiation. Activists have accused accelerators at Brookhaven and CERN of possibly creating strangelets or black holes that might destroy the world. Fermilab’s current woes stemming from its recent Performance Evaluation and Measurement Plan may raise yet another set of community relations issues.

Whatever the calamity, a lab’s response should not be improvised but based on a carefully worked-out plan. In the 21st century, “God’s grace” may be a weak force. Studying previous episodes, and seeking lessons to be learned from them, is a stronger one.

The post Had a leak from your science facility? Here’s how to deal with the problem appeared first on Physics World.

]]>
Opinion and reviews Robert P Crease explains how Fermilab navigated an accidental leak of tritium https://physicsworld.com/wp-content/uploads/2024/08/2024-08-CP-Damage-control-iStock-1772511520_Paper-Trident.jpg newsletter
Our world (still) cannot be anything but quantum, say physicists https://physicsworld.com/a/our-world-still-cannot-be-anything-but-quantum-say-physicists/ Mon, 12 Aug 2024 08:26:12 +0000 https://physicsworld.com/?p=115981 Measurements of the Leggett-Garg inequality using neutron interferometry emphasize that no classical macroscopic theory can describe reality

The post Our world (still) cannot be anything but quantum, say physicists appeared first on Physics World.

]]>
Is the behaviour of quantum objects described by a simple, classical theory? Or can particles really be in a superposition of different places at once, as quantum theory suggests? In 1985, the physicists Anthony James Leggett and Anupam Garg proposed a new way of answering these questions. If the world can be described by a theory that doesn’t feature superposition and other quantum phenomena, Leggett and Garg showed that a certain inequality must be obeyed. If the world really is quantum, though, the inequality will be violated.

Researchers at TU Wien in Austria have now made a new measurement of this so-called Leggett-Garg inequality (LGI) using neutron interferometry. Their verdict is clear: no classical macroscopic theory can truly describe reality. The work also provides further proof that a particle can be in a superposition of two states associated with different locations – even when these locations are centimetres apart.

Correlation strengths

The LGI is conceptually similar to the better-known Bell’s inequality, which describes how the behaviour of one object relates to that of another object with which it is entangled. The LGI, however, describes how the state of a single object varies at different points in time.

Leggett and Garg assumed that the object in question can be measured at different moments. Each of these measurements must yield one of two possible results. It is then possible to perform a statistical analysis of how strongly the results at the different moments correlate with each other, even without knowing how the object’s actual state changes over time.

If the theory of classical realism holds, Leggett and Garg showed that the degree of these correlations cannot exceed a certain level. Specifically, for a set of three measurements, the quantity K = C21 + C32 − C31 (where Cij is the correlation function between measurements i and j) cannot exceed 1. If, on the other hand, the object obeys the rules of quantum theory, K can be greater than 1.
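A textbook example of how quantum theory breaks that bound – not the neutron experiment itself – is a two-level system precessing at angular frequency ω and measured at three equally spaced times separated by τ. Quantum mechanics then gives C21 = C32 = cos(ωτ) and C31 = cos(2ωτ), and a few lines of Python show how far K can climb:

```python
import numpy as np

theta = np.linspace(0, np.pi, 1000)        # theta = omega * tau
K = 2 * np.cos(theta) - np.cos(2 * theta)  # Leggett-Garg correlator for a precessing two-level system

i = K.argmax()
print(f"maximum K = {K[i]:.3f} at omega*tau = {theta[i]:.3f} rad")
# maximum K = 1.500 at omega*tau ≈ pi/3, comfortably above the macrorealist bound of 1
```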

Enter neutron beams

Previous experiments have already demonstrated LGI violations in several quantum systems, including photonic qubits, nuclear spins in diamond defect centres, superconducting qubits and impurities in silicon. Still, team member Hartmut Lemmel says the new measurement offers certain advantages.

“Neutron beams, as we use them in a neutron interferometer, are perfect,” says Lemmel, who oversees the S18 instrument at the Institut Laue-Langevin (ILL) in Grenoble, France, where the experiment was carried out. A neutron interferometer, he explains, is a silicon-based crystal interferometer in which an incident neutron beam is split into two partial beams at a crystal plate and then recombined by another piece of silicon. This configuration means there are three distinct regions in which the neutrons’ locations can be measured: in front, inside and behind the interferometer.

“The actual measurement of the two-level system’s state probes the presence of the neutron in two particular regions of the interferometer, which is usually referred to as a ‘which-way’ measurement,” explains team member Stephan Sponar, a postdoctoral researcher at TU Wien. “So as not to disturb the time evolution of the system, our measurement probes the absence rather than the presence of the neutron in the interferometer. This is called an ideal negative measurement.”

The fact that the two partial beams are several centimetres apart is also beneficial, adds Niels Geerits, a PhD student in the team. “In a sense, we are dealing with a quantum object that is huge by quantum standards,” he says.

Leggett-Garg inequality is violated

After combining several neutron measurements, the TU Wien team showed that the LGI is indeed violated, with the final measured value of the Leggett–Garg correlator K equal to 1.120 ± 0.026.

“Our obtained result cannot be explained within the framework of macro-realistic theories, only by quantum theory,” Sponar tells Physics World. One consequence, Sponar continues, is that the idea that “maybe the neutron is only travelling on one of the two paths, we just don’t know which one” cannot be true. There is, he says, “no time inside the interferometer [when] the system (neutron) is in a ‘given state’, that is, either in path 1 or in path 2”.

Instead, he concludes, the neutron must be in a coherent superposition of system states – a fundamental property of quantum mechanics.

The experiment is detailed in Physical Review Letters.

The post Our world (still) cannot be anything but quantum, say physicists appeared first on Physics World.

CERN’s Science Gateway picked by Time magazine as one of the ‘world’s greatest places’ to visit https://physicsworld.com/a/cerns-science-gateway-picked-by-time-magazine-as-one-of-the-worlds-greatest-places-to-visit/ Sat, 10 Aug 2024 09:00:42 +0000 https://physicsworld.com/?p=115969 The gateway ‘bridges the gap between the general public and the people in lab coats’

As well as a high-end hotel on the Amalfi coast, a wildlife lodge in Guyana, and a “bamboo sanctuary” in Indonesia, Time magazine’s slightly pretentious list of the “world’s greatest places” for 2024 includes one destination physicists might actually want to visit.

We’re talking about CERN’s “Science Gateway”, an outreach centre designed by the “master of hi-tech architecture” Renzo Piano, which features a transparent skywalk between two raised tubular buildings.

Time calls the gateway a “family-friendly, admission-free offshoot” of CERN that “bridges the gap between the general public and the people in lab coats”.

The Science Gateway took some three years to build and opened in October 2023. It includes exhibitions, labs and a 900-seat auditorium, as well as a shop and a Big Bang café. Aimed at those aged five and above, the centre is expected to welcome half a million visitors each year.

To compile the list, Time selected from nominations made via an application process as well as suggestions from its international network of correspondents and contributors.

Other destinations on the list include Antarctica’s White Desert, Maui Cultural Lands in Hawaii, and Kamba in the Republic of the Congo.

So, what are you waiting for? Book that trip to Geneva.

The post CERN’s Science Gateway picked by <em>Time</em> magazine as one of the ‘world’s greatest places’ to visit appeared first on Physics World.

Pumping on a half-pipe: physicists model a skateboarding skill https://physicsworld.com/a/pumping-on-a-half-pipe-physicists-model-a-skateboarding-skill/ Fri, 09 Aug 2024 17:02:49 +0000 https://physicsworld.com/?p=115977 Variable pendulum describes how energy is pumped into the system

If you have been watching skateboarding at the Olympics, you may be wondering how the skaters manage to keep going up and down ramps long after friction should have consumed their initial gravitational potential energy.

That process is called pumping, and most skaters will learn how to do it by going back and forth on a half-pipe. If you are not familiar with the lingo, a half-pipe comprises two ramps that are connected by a lower (sometimes flat) middle section. A good skateboarder can skate up the side of a ramp, turn around and do the same on the other side – and continue to oscillate back and forth in the half-pipe.

What’s obvious about the physics of this scenario is that the gravitational potential energy of the skater while at the top of the half-pipe will be quickly lost to friction. So how does a skater keep going? How do they pump kinetic energy into the system?

Variable pendulum

It turns out that the process is similar to an obscure way that you can keep a playground swing going – by standing on the seat and shifting your centre of mass, squatting down in the middle of the swing’s arc and rising up at both ends (see video below). This can be understood in terms of a pendulum with a length that varies in a regular way – and that is how Florian Kogelbauer at ETH Zurich and colleagues in Japan have modelled pumping in a skateboard half-pipe.

Their model considers how a skilled skater modulates their centre of mass relative to the surface of the half-pipe. Essentially this involves crouching down as the skateboard travels across the flat bit of the half-pipe, then pushing up from the board during the curved ascent of the ramp. Pushing up reduces the moment of inertia of the system, and conservation of angular momentum dictates that the skater must speed up.
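The angular-momentum argument can be made quantitative with a rough sketch. The mass, radii and angular speed below are illustrative assumptions, not values from the Kogelbauer paper; the point is simply that shrinking the distance to the ramp’s centre of curvature at constant angular momentum boosts the kinetic energy, with the extra energy supplied by the skater’s legs.

```python
# Toy illustration (not the researchers' model) of the angular-momentum
# argument: on the curved part of the ramp the skater + board behave like a
# point mass at distance r from the centre of curvature.  Pushing up moves
# the centre of mass towards that centre, shrinking r.
m = 70.0          # skater mass in kg (illustrative value)
r1 = 2.0          # distance from centre of curvature while crouched, in m
r2 = 1.7          # distance after pushing up, in m (assumed value)
omega1 = 2.0      # angular speed before pushing up, in rad/s

L = m * r1**2 * omega1                 # angular momentum is conserved
omega2 = L / (m * r2**2)               # new, faster angular speed

ke1 = 0.5 * m * (r1 * omega1)**2
ke2 = 0.5 * m * (r2 * omega2)**2       # kinetic energy increases
print(f"angular speed: {omega1:.2f} -> {omega2:.2f} rad/s")
print(f"kinetic energy: {ke1:.0f} -> {ke2:.0f} J (work done by the legs)")
```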

The team compared their model to video data of experienced and inexperienced skaters pumping a half-pipe. They found that experienced skaters did indeed adhere to their model of pumping. They now plan to extend their model to include other movements done by skaters during pumping. They also say that their model could be used to better understand the physics of other sports such as ski jumping.

The research is described in Physical Review Research.

And if you are interested in the physics of the playground swing, check out the video below.

 

The post Pumping on a half-pipe: physicists model a skateboarding skill appeared first on Physics World.

Peering inside the biological nano-universe: Barbora Špačková on unveiling individual molecules moving in real time https://physicsworld.com/a/peering-inside-the-biological-nano-universe-barbora-spackova-on-unveiling-individual-molecules-moving-in-real-time/ Fri, 09 Aug 2024 13:30:55 +0000 https://physicsworld.com/?p=115768 Barbora Špačková on moving from theoretical to experimental physics and the joy of refining her technology for real-world applications

On 10 April 2019, physicist Barbora Špačková was peering through an optical microscope in her lab at Chalmers University of Technology in Sweden when she saw a short DNA segment – a biological object so small it was thought nearly unimaginable for conventional microscopy to reveal it. She remembers the exact date, because, in a poetic coincidence, it was the day that the first image of a black hole was released.

While everyone on campus was talking about the iconic astronomical image, Špačková was in the lab witnessing her first successful experiment on the path to a new microscopy technique that today enables other scientists to see molecules just a nanometre in size. “That was a perfect day,” she recalls. “Science is often about 95% failure, troubleshooting and wondering why experiments don’t go as planned. So having that moment of success was beautiful.”

But her path to that moment had not been linear. As an undergraduate studying physics at the Czech Technical University in Prague, Špačková took a two-year break from her studies, during which she worked as a freelance graphic designer. “It was a period when I was not exactly sure what to do with my life. But one night, I woke up with a clear realization that my heart was in science,” she says. “Coming back to the university, I felt more determined than ever.”

After defending her Master’s thesis in physical engineering, which focused on theory and simulations, Špačková felt drawn towards experimental work – particularly technologies with applications in the life sciences. So she started a PhD studying plasmonic biosensors, which use metal nanostructures to detect biological molecules. These sensors exploit surface plasmon resonance, in which electrons on a metal surface oscillate in response to light. When a biological molecule binds to the sensor’s surface, it causes a measurable shift in the resonant frequency, thus signalling the molecule’s presence.
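As a rough illustration of the sensing principle (not Špačková’s actual devices or data), the sketch below models the sensor response as a Lorentzian resonance and shows how a binding event registers as a small shift of the peak; the centre wavelength, linewidth and shift are assumed values.

```python
import numpy as np

# Illustrative numbers only: model the sensor response as a Lorentzian
# resonance and track how the peak shifts when a molecule binds.
wavelength = np.linspace(600, 800, 2001)   # nm, 0.1 nm steps
centre_bare = 700.0                        # assumed bare-sensor resonance, nm
binding_shift = 0.5                        # assumed shift on binding, nm
width = 30.0                               # assumed linewidth, nm

def lorentzian(lam, centre, fwhm):
    return 1.0 / (1.0 + ((lam - centre) / (fwhm / 2.0))**2)

bare = lorentzian(wavelength, centre_bare, width)
bound = lorentzian(wavelength, centre_bare + binding_shift, width)

print(f"resonance (bare sensor):    {wavelength[bare.argmax()]:.1f} nm")
print(f"resonance (molecule bound): {wavelength[bound.argmax()]:.1f} nm")
# The binding event shows up as a measurable shift of the resonance peak.
```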

Špačková’s PhD thesis on detecting extremely low concentrations of molecules won her the 2015 Werner von Siemens Award for Excellence. One of the concepts she’d worked on led to her developing a working prototype of a sensor able to detect cancer biomarkers. “I was really happy that I went with my dream,” says Špačková, adding that she “moved from the realm of theory to hands-on experimentation and eventually built a box with a functional button designed to serve a greater good.”

A serendipitous discovery

After her PhD, Špačková wanted to broaden her perspective by working abroad. Fortunately, she found a postdoc matching her interests in the group led by Christoph Langhammer at Chalmers so she and her young family moved there.

The project focused on nano-plasmonic biosensing again, but in a novel configuration, by combining it with nanofluidics. This involves studying fluids confined in nanoscale structures – a technology Špačková had not worked with before. “I had this playground of new toys that I had never seen in my life,” she says. “It was exciting. I learned a lot. I was experiencing a new part of the universe.”

Early on in her project, a colleague showed her a strange optical effect he was seeing in his devices. Špačková decided to investigate, developing a theory of how biomolecules inside nanofluids interact with light. To her surprise, her calculations suggested it should be possible to see even an individual biomolecule.

Into the nano-universe

These nanometre-sized objects had never been seen using traditional optical microscopy, but repeated calculations convinced Špačková she was onto something. With support from her supervisor and help from other team members, she equipped the lab with instruments needed to pursue the theory.

The trick was to put a biomolecule inside a nanochannel on a chip. Although biomolecules scatter too little light to be seen directly, interference with the light scattered by the nanochannel creates a much higher contrast. Subtracting an image of the empty channel from one recorded with the biomolecule inside isolates this interference signal and reveals the molecule’s presence.
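The benefit of this interference trick can be seen in a toy one-dimensional model, sketched below with made-up field amplitudes rather than anything from the NSM work: the molecule’s own scattering is vanishingly small, but its cross term with the much stronger channel field survives the background subtraction and is orders of magnitude larger.

```python
import numpy as np

# Toy one-dimensional model with made-up amplitudes (not the NSM analysis):
# the camera records |E_channel + E_molecule|^2, and the reference image of
# the empty channel records |E_channel|^2.
x = np.linspace(-1, 1, 1001)                  # position along the channel (a.u.)
E_channel = np.ones_like(x)                   # strong, uniform channel field (assumed)
E_molecule = 1e-3 * np.exp(-(x / 0.05)**2)    # weak field scattered by the molecule

I_with = np.abs(E_channel + E_molecule)**2    # channel plus molecule
I_empty = np.abs(E_channel)**2                # reference: empty channel

signal = I_with - I_empty                     # differential image
print(f"direct molecular scattering |E_m|^2: {E_molecule.max()**2:.1e}")
print(f"interferometric signal:              {signal.max():.1e}")
# The surviving cross term, roughly 2*E_channel*E_molecule, is about a
# thousand times larger than the molecule's direct scattering here.
```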

While other optical microscopy methods have enabled scientists to see single molecules before, they usually involve labelling these objects with fluorescent markers or fixing them to a surface – both of which can affect their properties and behaviours. The unique advantage of Špačková’s technique – named “nanofluidic scattering microscopy” (NSM) – is that it unveils single molecules in their natural state, moving freely in real time.

Looking at life

When Špačková succeeded in getting the microscope to work in 2019, it was not just a remarkable technological achievement; it was also an exciting step towards potential applications. Her invention could deepen biologists’ understanding of living processes, by showing how biomolecules move around and interact. It could also accelerate drug development, by illuminating how drug candidates interact with cell components.

Recognizing these possibilities, Špačková and her colleagues founded a start-up called Envue Technologies, with support from the Chalmers Ventures incubator, to develop and commercialize NSM instruments. Although Špačková returned to Czechia in 2022, after receiving a grant from the Czech Science Foundation and a Marie Curie Fellowship, she remains a scientific adviser to the company. “This is a super exciting experience for academics,” she says. “You get in touch with the real world and potential end users of your technology.”

Earlier this year, it was announced that Špačková will be setting up one of three new Dioscuri Centres of Scientific Excellence in the Czech Republic – an initiative of the Max Planck Society to support outstanding scientists establishing research groups.

In this programme, Špačková will be working in partnership with a German research group studying molecular transport in cells. The ultimate goal is to develop imaging tools based on NSM, for this application. “We would really like to dive inside this biological nano-universe and observe it in ways that were not possible before,” says Špačková.

It is also the first time that Špačková will be a team leader, and she is looking forward to embracing the challenge of this new role. Reflecting on her career so far, she emphasizes that everyone has a unique path and must tune in to what is right for them. “I think there’s a subtle art of deciding whether to give up or keep going,” she says. “You have to be very careful with the decision. In my case, following the heart worked.”

The post Peering inside the biological nano-universe: Barbora Špačková on unveiling individual molecules moving in real time appeared first on Physics World.

Kirigami cubes make a novel mechanical computer https://physicsworld.com/a/kirigami-cubes-make-a-novel-mechanical-computer/ Fri, 09 Aug 2024 08:39:42 +0000 https://physicsworld.com/?p=115963 New device can store, retrieve and erase data

A new mechanical computer made from an array of rigid, interconnected plastic cubes can store, retrieve and erase data simply by stretching the array and manipulating the position of the cubes. The device’s construction is inspired by the ancient Japanese art of paper cutting, or kirigami, and its designers at North Carolina State University in the US say that more advanced versions could be used in stable, high-density memory and logic computing; in information encryption and decryption; and to create displays based on three-dimensional units called voxels.

Mechanical computers were first developed in the 19th century and do not contain any electronic components. Instead, they perform calculations with levers and gears. We don’t often hear about such contraptions these days, but researchers led by NC State mechanical and aerospace engineer Jie Yin are attempting to bring them back due to their stability and their capacity for storing complex information.

A periodic array of computing cubes

The NC State team’s computer comprises a periodic array, or metastructure, of 64 interconnected polymer cubes, each measuring 1 cm on a side and grouped into blocks. These cubes are connected by thin hinges of elastic tape that can be used to move the cubes either physically or by means of a magnetic plate attached to the cubes’ top surfaces. When the array is stretched in one direction, it enters a multi-stable state in which cubes can be pushed up or down, representing a binary 1 and 0 respectively. Thus, while the unit cells are interconnected, each cell acts as an independent switch with two possible states – in other words, as a “bit” familiar from electronic computing.

If the array is then compressed, the cubes lock in place, fixing them in the 0 or 1 state and allowing information to be stored in a stable way. To change these stored bits, the three-dimensional structure can be re-stretched, returning it to the multi-stable state in which each unit cell becomes editable.
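In software terms, the array behaves like a memory whose cells can only be written while it is stretched and which holds its bits passively once compressed. The toy class below is a loose analogue of that cycle, with invented method names rather than anything from the NC State work.

```python
# Toy software analogue (assumed behaviour, not the NC State hardware): a
# metastructure whose cells can only be written while the array is stretched,
# and which holds its bits stably once compressed.
class KirigamiMemory:
    def __init__(self, n_cells=64):
        self.bits = [0] * n_cells     # all cubes start in the "down" (0) state
        self.stretched = False        # compressed = locked

    def stretch(self):
        self.stretched = True         # multi-stable state: cells become editable

    def compress(self):
        self.stretched = False        # cubes lock in place, storing the data

    def write(self, index, value):
        if not self.stretched:
            raise RuntimeError("array is compressed - re-stretch before editing")
        self.bits[index] = 1 if value else 0   # push cube up (1) or down (0)

    def read(self, index):
        return self.bits[index]       # reading works in either state

mem = KirigamiMemory()
mem.stretch()
for i, b in enumerate([1, 0, 1, 1]):
    mem.write(i, b)
mem.compress()                        # data now held mechanically
print([mem.read(i) for i in range(4)])
```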

Ancient inspiration

The new device was inspired by Yin and colleagues’ previous work, which saw them apply kirigami principles to shape-morphing matter. “We cut a thick plate of plastic into connected cubes,” Yin explains. “The cubes can be connected in multiple ways in a closed loop so that they can transform from 2D plates to versatile 3D voxelated structures.”

These transformations, he continues, were based on rigid rotations – that is, ones in which neither the cubes nor the hinges deform. “We were originally thinking of storing elastic energy in the hinges so that they could lead to different shape changes,” he says. “With this came the bistable unit cell idea.”

Yin says that one of the main challenges involved in turning the earlier shape-morphing system into a mechanical computer was to work out how to construct and connect the unit cells. “In our previous work, we made use of an ad-hoc design, but we could not directly extend this to this new work,” he tells Physics World. “We finally came up with the solution of using four cubes as a base unit and [assembling] them in a hierarchical way.”

While the platform has several possible applications, Yin says one of the most interesting would be in three-dimensional displays. “Each pop-up cube acts as a voxel with a certain volume and can independently be pushed up and remain stable,” he says. “These properties are useful for interactive displays or haptic devices for virtual reality.”

Computing beyond binary code

The current version of the device is still far from being a working mechanical computer, with many improvements needed to perform even simple mathematical operations. However, team member Yanbin Li, a postdoctoral researcher at NC State and first author of a Science Advances paper on the work, points out that the density of information it can store is relatively high. “Using a binary framework – where cubes are either up or down – a simple metastructure of nine functional units has more than 362 000 possible configurations,” he explains.

A further advantage is that a functional unit of 64 cubes can take on a wide variety of architectures, with up to five cubes stacked on top of each other. These novel configurations would allow for the development of computing that goes well beyond binary code, Li says.

In the nearer term, Li suggests that the cube array could allow users to create three-dimensional versions of mechanical encryption or decryption. “For example, a specific configuration of functional units could serve as a 3D password,” he says.

The post Kirigami cubes make a novel mechanical computer appeared first on Physics World.

Abdus Salam: celebrating a unifying force in global physics https://physicsworld.com/a/abdus-salam-celebrating-a-unifying-force-in-global-physics/ Thu, 08 Aug 2024 13:41:55 +0000 https://physicsworld.com/?p=115957 Our podcast guests are Claudia de Rham and Ian Walmsley at Imperial College

This podcast explores the extraordinary life of the Pakistani physicist Abdus Salam, who is celebrated for his ground-breaking theoretical work and for his championing of physics and physicists in developing countries.

In 1964, he founded the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy – which supports research excellence worldwide with a focus on physicists in the developing world. In 1979 Salam shared the Nobel Prize for Physics for his work on the unification of the weak and electromagnetic interactions.

Salam spent most of his career at Imperial College London and the university is gearing up to celebrate the centenary of his birth in January 2026. In this episode of the Physics World Weekly podcast, Imperial physicists Claudia de Rham and Ian Walmsley look back on the extraordinary life of Salam – who died in 1996. They also talk about the celebrations at Imperial College.

Image courtesy: AIP Emilio Segrè Visual Archives, Physics Today Collection

The post Abdus Salam: celebrating a unifying force in global physics appeared first on Physics World.

Physicists detect nuclear decay in the recoil of a levitating sphere https://physicsworld.com/a/physicists-detect-nuclear-decay-in-the-recoil-of-a-levitating-sphere/ Wed, 07 Aug 2024 15:09:54 +0000 https://physicsworld.com/?p=115940 Principle of momentum conservation makes it possible to "see" individual alpha particles leaving a micron-scale silica bead

Physicists in the US have detected the nuclear decay of individual helium nuclei by embedding radioactive atoms in a micron-sized object and measuring the object’s recoil as a particle escapes from it. The technique, which is an entirely new way of studying particles emitted via nuclear decay, relies on the principle of momentum conservation. It might also be used to detect other neutral decay products, such as neutrinos and particles that could be related to dark matter and might escape detection by other means.

The conservation of momentum is a fundamental concept in physics, together with the conservation of energy and the conservation of mass. The principle is that the total momentum (mass multiplied by velocity) of an isolated system can be neither created nor destroyed and must therefore remain constant – a consequence of Newton’s laws of motion.

Outgoing decay product will exert a backreaction

Physicists led by David Moore of Yale University have now used this principle to determine when a radioactive atom emits a single helium nucleus (or alpha particle) as it decays. The idea is as follows: if the radioactive atom is embedded in a larger object, the outgoing decay product will exert a back-reaction on the object, making it recoil in the opposite direction. “This is similar to throwing a ball while on a skateboard,” explains team member Jiaxiang Wang. “After the ball has been thrown, the skateboard will slowly roll backward.”

The backreaction on a large object from just a single nucleus inside it would normally be too tiny to detect, but the Yale researchers managed to do it by precisely measuring the object’s motion using the light scattered from it. Such a technique can gauge forces as small as 10⁻²⁰ N and accelerations as tiny as 10⁻⁷ g, where g is the local acceleration due to the Earth’s gravitational pull.

“Our technique allows us to determine when a radioactive decay has occurred and how large the momentum the decaying particle exerted on the object,” Wang says. “Momentum conservation ensures that the momentum carried by the object and the emitted alpha particle are the same. This means that measuring the object’s recoil provides us with information on the decay products.”
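A back-of-the-envelope estimate shows what kind of kick is involved. The sketch below assumes a roughly 6 MeV alpha particle, typical of energies in the lead-212 decay chain, and a silica sphere of 0.5 µm radius; both numbers are illustrative rather than taken from the paper.

```python
import numpy as np

# Back-of-envelope sketch (illustrative numbers, not the paper's analysis):
# momentum kick given to a micron-scale silica sphere by a single escaping
# alpha particle.
MeV = 1.602e-13                      # joules
m_alpha = 6.64e-27                   # alpha-particle mass, kg
E_alpha = 6.0 * MeV                  # assumed alpha kinetic energy

p_alpha = np.sqrt(2 * m_alpha * E_alpha)   # non-relativistic momentum

radius = 0.5e-6                      # assumed sphere radius, m
rho_silica = 2200.0                  # silica density, kg/m^3
M_sphere = rho_silica * 4/3 * np.pi * radius**3

v_recoil = p_alpha / M_sphere        # momentum conservation
print(f"alpha momentum: {p_alpha:.2e} kg m/s")
print(f"sphere mass:    {M_sphere:.2e} kg")
print(f"recoil speed:   {v_recoil:.2e} m/s")
# A kick of order 1e-19 kg m/s gives this picogram-scale sphere a recoil of
# roughly 0.1 mm/s - small, but within reach of optical monitoring.
```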

Optical tweezer technique

In their experiment, Moore, Wang and colleagues embedded several tens of radioactive lead-212 atoms in microspheres made of silica. They then levitated one microsphere at a time using the forces generated by a focused laser beam. This technique is known as an optical tweezer and is routinely employed to hold and move nano-sized objects. The researchers recorded recoil measurements over a period of two to three days as the lead-212 (which has a half-life of 10.6 hours) decayed to the stable isotope lead-208 through the emissions of alpha and beta particles (electrons).

According to Wang, the study is an important proof of principle, demonstrating conclusively that a single nuclear decay can be detected when it occurs in a much larger object. But the team also hopes to put it to good use for other applications. “We undertook this work as a first step towards directly measuring neutrinos as decay products,” Wang explains. “Neutrinos are central to many open questions in fundamental physics but are extremely difficult to detect. The technique we have developed could be a completely new way to study them.”

As well as detecting neutrinos, the new method, which is detailed in Physical Review Letters, could also be of interest for nuclear forensics. For example, it could be used to test whether dust particles captured from the environment contain potentially harmful radioactive isotopes.

The Yale researchers now plan to extend their technique to smaller silica spheres, which have better momentum sensitivity. “These smaller objects will allow us to sense the momentum kick from a single neutrino,” Wang tells Physics World. “Eventually, an approach like ours might also be applied to large arrays of spheres to sense other types of previously undetected, rare decays.”

The post Physicists detect nuclear decay in the recoil of a levitating sphere appeared first on Physics World.

Radiation monitoring keeps track of nuclear waste contamination https://physicsworld.com/a/radiation-monitoring-keeps-track-of-nuclear-waste-contamination/ Wed, 07 Aug 2024 11:00:36 +0000 https://physicsworld.com/?p=115869 PhD studentship available to develop technologies for in situ characterization of nuclear fission products

Nuclear reactors – whether operational or undergoing decommissioning – create radioactive waste. Management of this waste is a critical task and this practice has been optimized over the past few decades. Nevertheless, strategies for nuclear waste disposal employed back in the 1960s and 70s were far from ideal, and the consequences remain for today’s scientists and engineers to deal with.

In the UK, spent nuclear fuel is typically stored in ponds or water-filled silos. The water provides radiation shielding, as well as a source of cooling for the heat generated by this material. In England and Wales, the long-term disposal strategy involves ultimately transferring the waste to a deep geological disposal facility, while in Scotland, near-surface disposal is considered appropriate.

The problem, however, is that some of the legacy storage sites are many decades old and some are at risk of leaking. And when this radioactive waste leaks it can contaminate surrounding land and groundwater. The potential for radioactive contamination to get into the wet environment is an ongoing problem, particularly at legacy nuclear reactor sites.

“The strategy for waste storage 50 years ago was different to that used now. There wasn’t the same consideration for where this waste would be disposed of long term,” explains Malcolm Joyce, distinguished professor of nuclear engineering at Lancaster University. “A common assumption might have been ‘well it’s going to go in the ground at some point’ whereas actually, disposal is a necessarily rigorous, regulated and complicated programme.”

In one example, explains Joyce, radioactive waste was stored temporarily in drums and sited in near-surface spaces. “But the drums have corroded over time and they’ve started to deteriorate, putting containment at risk and requiring secondary containment protection,” he says. “Elsewhere, some of the larger ponds in which spent nuclear fuel was stored are also deteriorating and risking loss of containment.”

Problematic radioisotopes

The process of nuclear fission generates a range of radioactive products with a variety of half-lives and levels of radiotoxicity – a complex factor governed by their chemistry and radioactivity behaviours. One contaminant of particular concern is strontium-90 (Sr-90), a relatively high-yield fission product found in significant amounts in spent nuclear fuel and other radioactive waste.

Sr-90 emits relatively high-energy (0.6 MeV) beta radiation, has a relatively short half-life (about 30 years) and is water soluble, enabling it to migrate with groundwater. The major hazard, however, is its potential for uptake into biological systems. As a group 2 element similar to calcium, Sr-90 is a “bone seeker” that’s taken up by the bones and remains there, increasing the risk of leukaemia and bone cancer.

“The other challenge with strontium is that its daughter is even worse in radiotoxicity terms,” explains Joyce. Sr-90 decays into yttrium-90 (Y-90), which emits very high-energy beta radiation (2.2 MeV) that can penetrate up to 3.5 mm into aluminium. “The engineering challenge associated with Y-90 was first encountered at Three Mile Island, when they realised that the energy of the beta particles from it was sufficiently high to penetrate their personal protective equipment,” he notes.

Do not disturb

These potential biological hazards make it imperative to monitor potential radioactive contamination and address any leakages, and they also provide a basis for in situ monitoring of such leaks. One approach is to extract water or earth samples, often via boreholes, for offsite analysis in a laboratory. Unfortunately, what’s measured in the lab could be completely different to the radiological environment that you’re trying to understand. “This is an example that highlights the fact that trying to measure something actually changes the thing you’re trying to measure,” notes Joyce.

When undisturbed, Sr-90 and Y-90 reach secular equilibrium, a quiescent state in which Y-90 is produced at the same rate as it decays. Y-90 tends to react with oxygen in the environment, depending on pH, to form insoluble products such as yttrium oxide, known as yttria, and colloidal carbonate complexes that precipitate out of the surrounding water environment and can combine with calcium and silicon in the surrounding geology.
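The approach to secular equilibrium follows from the textbook Bateman equations. The sketch below assumes a pure Sr-90 source with no Y-90 present at time zero and shows the daughter activity growing in over a few Y-90 half-lives; the half-lives are standard literature values and the initial activity is arbitrary.

```python
import numpy as np

# Sketch of the Sr-90 -> Y-90 activity balance (textbook Bateman solution,
# assuming a pure Sr-90 source with no Y-90 present at t = 0).
half_life_sr = 28.8 * 365.25 * 24.0        # hours
half_life_y = 64.0                         # hours
lam_sr = np.log(2) / half_life_sr
lam_y = np.log(2) / half_life_y

A_sr0 = 1.0                                # initial Sr-90 activity (arbitrary units)
t = np.array([0.0, 24, 64, 128, 256, 640]) # hours

A_sr = A_sr0 * np.exp(-lam_sr * t)
A_y = A_sr0 * lam_y / (lam_y - lam_sr) * (np.exp(-lam_sr * t) - np.exp(-lam_y * t))

for ti, ratio in zip(t, A_y / A_sr):
    print(f"t = {ti:4.0f} h   A(Y-90)/A(Sr-90) = {ratio:.3f}")
# After a few Y-90 half-lives the ratio approaches 1: the daughter decays as
# fast as it is produced, which is the secular equilibrium described above.
```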

“There’s a steady-state radioactivity environment because it’s in secular equilibrium, and also a steady-state geochemistry environment associated with how much yttria is in suspension, settled out or stuck in the geology around it,” says Joyce. “But should it be disturbed by manual intervention this might lift plumes of material, redistributing the radioactivity in the area you’re working in. The risk associated with that is different to the risk assessments associated with the quiescent environment.”

Lancaster University Engineering

Joyce and his team are taking a different approach, by developing a method to monitor radioactive contamination in situ. The technique exploits the bremsstrahlung radiation generated when high-energy beta particles emitted by Sr-90 and Y-90 interact with their surrounding environment and slow down. And while beta particles only travel a few millimetres before they can no longer be detected, bremsstrahlung radiation comprises far more penetrating X-ray photons that can be measured at much greater distances.

The researchers are also using an astrophysical technique to determine the distribution of the measured radioactivity. The approach uses the Moffat point spread function – developed back in 1969 to find the distribution of galaxies – to analyse the depth and spread of the contamination and, importantly, how it is changing over time.

“If the depth of these radioactive features changes, that tells you whether things are getting worse or better,” Joyce explains. “Put simply, if they’re getting nearer to the surface, that’s probably not something that you want.”

The PhD project

The team has now demonstrated that bremsstrahlung measurements can discriminate the combined Sr-90/Y-90 beta emission from gamma radiation emitted by caesium-137 (another high-yield fission product) during in situ groundwater monitoring. The next goal is to distinguish emissions from the two beta emitters.

As such, Joyce has secured funding from the UK’s Nuclear Decommissioning Authority for a PhD studentship to develop methods to detect and quantify levels of Sr-90 and Y-90 in contaminated land and aqueous environments. The project, based at Lancaster University, also aims to understand the accuracy with which the two radioisotopes can be separated and investigate their respective kinetics.

The first task will be to determine whether bremsstrahlung emissions can discriminate between these two sources of radioactive contamination. Bremsstrahlung is produced in a continuous energy spectrum, with a maximum corresponding to the maximum energy of the beta particles (which also have a continuous energy distribution). Joyce points out that, while it is quite difficult to pinpoint this maximum, it could enable deconvolution of the contributions from Sr-90 and Y-90 to the bremsstrahlung spectrum.
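One simple way to picture why the endpoint helps is sketched below. The spectral shapes are deliberately crude (a linear fall-off, not a physical bremsstrahlung model), but they illustrate the key point: photons recorded above the Sr-90 endpoint energy can only have come from Y-90.

```python
import numpy as np

# Deliberately crude spectral shapes (linear fall-off to the endpoint), used
# only to illustrate the endpoint argument - not a physical bremsstrahlung model.
E = np.linspace(0.01, 2.5, 500)     # photon energy, MeV
E_max_sr, E_max_y = 0.55, 2.28      # approximate maximum beta energies, MeV

def toy_spectrum(E, E_max):
    return np.clip(1.0 - E / E_max, 0.0, None)   # zero above the endpoint

total = toy_spectrum(E, E_max_sr) + toy_spectrum(E, E_max_y)
frac_above = total[E > E_max_sr].sum() / total.sum()
print(f"fraction of counts above the Sr-90 endpoint: {frac_above:.2f}")
# In this toy model everything above 0.55 MeV is attributable to Y-90 alone,
# which is one handle for separating the two contributions.
```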

It may also be possible to distinguish the two radioisotopes via direct detection of the beta particles, or a completely different solution may emerge. “Never say never with a PhD,” says Joyce. “There may be a better way of doing it that we’re not aware of yet.”

Joyce emphasizes the key role that such radiation monitoring techniques could play in nuclear decommissioning projects, such as the clean-up of the Dounreay shaft. The 65-m deep shaft and silo were historically used to store radioactive waste from the Dounreay nuclear reactor in Scotland. This waste now needs to be retrieved, repackaged and stored somewhere isolated from people, animals and plants.

As the facility is emptied of radioactive material, the radiological environment will change. Ideally, it will become safer, and uncertainty reduced, with any changes potentially able to inform planning. “With this new technology we’ll be able to monitor radiation levels as the programme progresses, to understand exactly what’s happening in the environment as things are being cleaned up,” explains Joyce.

“The world would be a better place as a result of the ability to make these measurements, and they could inform how similar challenges are dealt with the world over,” Joyce tells Physics World. “If you asked me ‘why should somebody do this PhD?’, altruistically, it’s about taking us closer to the point where our grandchildren don’t have to worry about these things – that’s what’s important.”

Apply now

To find out more about the PhD studentship, which is fully funded for eligible UK students, contact Malcolm Joyce at m.joyce@lancaster.ac.uk. Candidates interested in applying should send a copy of their CV together with a personal statement or covering letter addressing their background and suitability for this project before the closing date of 31 August 2024.

The post Radiation monitoring keeps track of nuclear waste contamination appeared first on Physics World.

Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution https://physicsworld.com/a/paradigm-shifts-positivism-realism-and-the-fight-against-apathy-in-the-quantum-revolution/ Wed, 07 Aug 2024 10:00:31 +0000 https://physicsworld.com/?p=115542 Jim Baggott reviews Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory by Adam Forrest Kay

Science can be a messy business. Scientists caught in the storm of a scientific revolution will try to react with calm logic and reasoning. But in a revolution the stakes are high, the atmosphere charged. Cherished concepts are abandoned as troubling new notions are cautiously embraced. And, as the paradigm shifts, the practice of science is overlaid with passionate advocacy and open hostility in near-equal measure. So it was – and, to a large extent, still is – with the quantum revolution.

Niels Bohr insisted that quantum theory is the result of efforts to describe a fundamentally statistical quantum world using concepts stolen from classical physics, which must therefore be interpreted “symbolically”. The calculation of probabilities, with no reference to any underlying causal mechanism that might explain how they arise, is the best we can hope for.

In the heat of the quantum revolution, Bohr’s “Copenhagen interpretation” was accused of positivism, the philosophy that valid knowledge of the physical world is derived only from direct experience. Albert Einstein famously disagreed, taking the time to explore alternatives more in keeping with a realist metaphysics, with a “trust in the rational character of reality and in its being accessible, to some extent, to human reason”, that had served science for centuries. Lest there be any doubt, Adam Forrest Kay’s Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory demonstrates that the Bohr–Einstein debate remains unresolved, at least to anybody’s satisfaction, and continues to this day.

Escape from Shadow Physics is a singular addition to the popular literature on quantum interpretations. Kay holds PhDs in both literature and mathematics and is currently a mathematics postdoc at the Massachusetts Institute of Technology. He stands firmly in Einstein’s corner, and his plea for a return to a realist programme is liberally sprinkled with passionate advocacy and open hostility in near-equal measure. He writes with the zeal of a true quantum reactionary.

Like many others before him, in arguing his case Kay needs first to build a monstrous, positivist Goliath that can be slain with the slingshot of realist logic and reasoning. This means embracing some enduring historical myths. These run as follows. The Bohr–Einstein debate was a direct confrontation between the subjectivism of the positivist and the objectivism of the realist. Bohr won the debate by browbeating the stubborn, senile and increasingly isolated Einstein into submission. Acting like some fanatical priesthood, physicists of Bohr’s church – such as Wolfgang Pauli, Werner Heisenberg and Léon Rosenfeld – shouted down all dissent, establishing the Copenhagen interpretation as a dogmatic orthodoxy.

Historical scientific myths are not entirely wrong, and typically hold some grains of truth. Rivals to the Copenhagen view were indeed given short shrift by the “Copenhagen hegemony”. Pauli sought to dismantle Louis de Broglie’s “pilot wave” interpretation soon after it was presented in 1927. He went on to dismiss its rediscovery by David Bohm in 1952 as “shadow physics beer-idea wish dreams”, and “not even new nonsense”. Rosenfeld dismissed Hugh Everett III’s “many worlds” interpretation of 1957 as “hopelessly wrong ideas”.

But Kay is not content with the myth as it is familiarly told, and so seeks to deepen it. He confers on Bohr “the charisma of the hypnotist, the charisma of the cult leader”, adding that “the Copenhagen group was, in a very real sense, a personality cult, centred on the special and wise Bohr”. Prosecuting such a case requires a selective reading of science history, snatching quotations where they fit the narrative, ignoring others where they don’t. In fact, Bohr did not deny objective reality, or the reality of electrons and atoms. In interviews conducted shortly before his death in 1962, Bohr reaffirmed that his core principle of “complementarity” (of waves and particles, for example) was “the only possible objective description”. Heisenberg, in contrast, was much less cautious in his use of language and makes an easier target for anti-positivist ire.

It can be argued that the orthodoxy, such as it is, is not actually based on philosophical pre-commitments. The post-war Americanization of physics drove what were judged to be pointless philosophical questions about the meaning of quantum theory to the fringes. Aside from those few physicists and philosophers who continued to nag at the problem, the majority of physicists just got on with their calculations, completely unconcerned about what the theory was supposed to mean. They just didn’t care.

As Bohm explained: “Everybody plays lip service to Bohr, but nobody knows what he says. People then get brainwashed into saying Bohr is right, but when the time comes to do their physics, they are doing something different.” Many who might claim to follow Bohr’s “dogma” satisfy their physical intuitions by continuing to think like Einstein.

Anton Zeilinger, who shared the 2022 Nobel Prize for Physics for his work on quantum entanglement and quantum information science, confessed that even physicists working in this new field consider foundations to be a bit dodgy: “We don’t understand the reason why. Must be psychological reasons, something like that, something very deep.” Kay admits this much when he writes: “Yes, many people think the debate is over and Bohr won, but that is actually a social phenomenon.” In other words, the orthodoxy is not philosophical, it is sociological. It has very little to do with Bohr and the Copenhagen interpretation. In truth, Kay is fighting for attention against the apathy and indifference characteristic of an orthodox mainstream physics, or what Thomas Kuhn called “normal science”.

As to how a modern-day realist programme might be pursued, Kay treats us to some visually suggestive experiments in which oil droplets follow trajectories determined by wave disturbances on the surface of the oil bath on which they move. He argues that such “quantum hydrodynamic analogues” show us that the pilot-wave interpretation merits much more attention than it has so far received. But while these analogues are intuitively appealing, the so-called “quantization” involved is as familiarly classical as musical notes generated by string or wind instruments. And, although such analogues may conjure surprising trajectories and patterns, they cannot conjure Planck’s constant. Or quantum entanglement.

But the pilot-wave interpretation demands a hefty trade-off. It features precisely the non-local, “peculiar mechanism of action at a distance” of the kind that Einstein abhorred, and which discouraged his own exploration of pilot waves in 1927. In an attempt to rescue the possibility that reality may yet be local, Kay reaches for a loophole in John Bell’s famous theorem and inequality. Yet he overlooks the enormous volume and variety of experiments that have been performed since the early 1980s, including tests of an inequality devised by the Nobel-prize-winning theorist Anthony Leggett that explicitly close the loophole he seeks to exploit.

Escape from Shadow Physics is a curate’s egg. Those readers who would condemn Bohr and the Copenhagen interpretation, for whatever reasons of their own, will likely cheer it on. Those looking for balanced arguments more reasoned than diatribe will likely be disappointed. Despite an extensive bibliography, Kay commits some curious sins of omission. But, while the journey that Kay takes may be flawed, there is yet sympathy for his destination. The debate does remain unresolved. Faced with the mystery of entanglement and non-locality, Bohr’s philosophy offers no solace. Kay (quoting a popular textbook) asks that we consider future generations in possession of a more sophisticated theory, who wonder how we could have been so gullible.

  • 2024 Weidenfeld & Nicolson 496pp £25 hb

The post Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution appeared first on Physics World.

First patients treated using minibeam radiation therapy https://physicsworld.com/a/first-patients-treated-using-minibeam-radiation-therapy/ Tue, 06 Aug 2024 12:00:49 +0000 https://physicsworld.com/?p=115854 Minibeam radiotherapy using a clinical orthovoltage unit successfully treats two patients

Spatially fractionated radiotherapy is a novel cancer treatment that uses a pattern of alternating high-dose peaks and low-dose valleys to deliver a nonuniform dose distribution. Numerous preclinical investigations have demonstrated that by shrinking the peaks and valleys to submillimetre dimensions, the resulting microbeams confer extreme normal tissue tolerance, enabling delivery of extremely high peak doses and providing excellent tumour control.

The technique has not yet, however, been used to treat patients. Most preclinical studies employed synchrotron X-ray sources, which deliver microbeams at ultrahigh dose rates but are not widely accessible. Another obstacle is that these extremely narrow beams (100 µm or less) are highly sensitive to any motion during irradiation, which can blur the pattern of peak and valley doses.

Instead, a team at the Mayo Clinic in Rochester, Minnesota, is investigating the clinical potential of minibeam radiation therapy (MBRT), which employs slightly wider beams (500 µm or more) spaced by more than 1000 µm. Such minibeams still provide high normal tissue sparing and tumour control, but their larger size and spacing makes them less sensitive to motion. Importantly, minibeams can also be generated by conventional X-ray sources with lower dose rates.

Michael Grams and colleagues have now performed the first patient treatments using MBRT. Writing in the International Journal of Radiation Oncology, Biology, Physics, they describe the commissioning of a clinical radiotherapy system for MBRT and report on the first two patients treated.

Minibeam delivery

To perform MBRT, the researchers adapted the Xstrahl 300, a clinical orthovoltage unit with 180 kVp output. “Because minibeam radiotherapy uses very narrow beams of radiation spaced very closely together, it requires low-energy orthovoltage X-rays,” Grams explains. “Higher-energy X-rays from linear accelerators would scatter too much and blur the peaks and valleys together.”

The team used cones with diameters between 3 and 10 cm to define the field size and create homogeneous circular fields. This output was then split into minibeams using tungsten collimators with 0.5 mm wide slits spaced 1.1 mm apart.

Commissioning measurements showed that the percentage depth dose decreased gradually with depth, reaching 50% somewhere between 3.5 and 4 cm. Peak-to-valley ratios were highest at the surface and inversely related to cone size. Peak dose rates at 1 cm depth ranged from 110 to 120 cGy/min.

The low dose rate of the orthovoltage system means that treatment times can be quite long and patient motion may be an issue. To mitigate motion effects, the researchers created 3D printed collimator holders that conform to the patient’s anatomy. These holders are fixed to the patient, such that any motion causes the patient and collimator to move together, maintaining the spatial separation of the peak and valley doses.

“This treatment had never been delivered to a human before, so we had to figure out all of the necessary steps in order to do it safely and effectively,” says Grams. “The main challenge is patient motion, which we solved by attaching the collimator directly to the patient.”

First-in-human treatments

The team treated two patients with MBRT. The first had a large (14 × 14 × 11 cm) axillary tumour that was causing severe pain and restricted arm motion, prompting the decision to use MBRT to shrink the tumour and preserve normal tissue tolerance for future treatments. He was also most comfortable sitting up, a treatment position that’s only possible using the orthovoltage unit.

The second patient had a 7 × 6 × 3 cm ear tumour that completely blocked his external auditory canal, causing hearing loss, shooting pain and bleeding. He was unable to undergo surgery due to a fear of general anaesthesia and instead was recommended MBRT to urgently reduce pain and bleeding without compromising future therapies.

“These patients had very few treatment options that the attending physician felt would actually help mitigate their symptoms,” explains Grams. “Based on what we learned from our preclinical research, they were felt to be good candidates for MBRT.”

Both patients received two daily MBRT fractions with a peak dose of 1500 cGy at 1 cm depth, using the 10 cm cone for patient 1 and the 5 cm cone for patient 2. The radiation delivery time was 11.5 or 12 min per fraction, with the second fraction delivered after rotating the collimator by 90°.

Treatment response to minibeam radiotherapy

Prior to treatment, the collimator was attached to the patient and a small piece of Gafchromic film was placed directly on the tumour for in vivo dosimetry. For both patients, the films confirmed the pattern of peak and valley doses, with no evidence of dose blurring.

For patient 1, the measured peak and valley doses were 1900 and 230 cGy, respectively. The expected doses (based on commissioning measurements) were 2017 and 258 cGy, respectively. Patient 2 had measured peak and valley doses of 1800 and 180 cGy, compared with expected values of 1938 and 248 cGy.
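From these film measurements one can read off the peak-to-valley dose ratios and how far the delivered doses fell from the expected values – a quick consistency check using only the figures quoted in this article.

```python
# Quick check of the film-dosimetry numbers quoted above: peak-to-valley dose
# ratios (PVDR) and the deviation of measured from expected peak doses.
patients = {
    "patient 1": {"measured": (1900, 230), "expected": (2017, 258)},
    "patient 2": {"measured": (1800, 180), "expected": (1938, 248)},
}

for name, d in patients.items():
    peak_m, valley_m = d["measured"]
    peak_e, valley_e = d["expected"]
    pvdr_m = peak_m / valley_m
    pvdr_e = peak_e / valley_e
    dev = 100 * (peak_m - peak_e) / peak_e
    print(f"{name}: measured PVDR = {pvdr_m:.1f}, expected PVDR = {pvdr_e:.1f}, "
          f"peak dose {dev:+.0f}% vs expected")
```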

Both patients exhibited positive clinical responses to MBRT. Six days after his second treatment, patient 1 reported resolution of pain and improved arm motion. Three weeks later, the tumour continued to shrink and his full range of motion was restored. Despite the 10 cm cone not fully encompassing the large tumour, a uniform decrease in volume was still observed.

After one treatment, patient 2 had much reduced fluid leakage, and six days later, his pain and bleeding had completely abated and his hearing improved. At 34 days after MBRT, he continued to be asymptomatic and the lesion had completely flattened. Pleased with the outcome, the patient was willing to reconsider the recommended standard-of-care resection.

“The next step is a formal phase 1 trial to determine the maximum tolerated dose of minibeam radiotherapy,” Grams tells Physics World. “We are also continuing our preclinical work aimed at combinations of MBRT and systemic therapies like immunotherapy and chemotherapy drugs.”

The post First patients treated using minibeam radiation therapy appeared first on Physics World.

‘Event-responsive’ electron microscopy focuses on fragile samples https://physicsworld.com/a/event-responsive-electron-microscopy-focuses-on-fragile-samples/ Tue, 06 Aug 2024 08:14:35 +0000 https://physicsworld.com/?p=115884 Pharmaceutical and catalysis research could benefit from new technique

A new scanning transmission electron microscope (STEM) technique that modulates the electron beam in response to the scattering rate allows images to be formed with the fewest electrons possible. The researchers hope their “event-responsive electron microscopy“ could be used on fragile samples that are easily damaged by electron beams. The team is now working to implement their imaging paradigm with other microscopy techniques.

First developed in the 1930s, transmission electron microscopes have been invaluable for exploring almost all branches of science at tiny scales. These instruments rely on the fact that electrons can have far shorter de Broglie wavelengths than optical photons, allowing much finer details to be resolved. Visible light microscopes cannot normally resolve features smaller than about 200 nm, but electron imaging can often achieve resolutions well below 0.1 nm. However, the higher energy of these electrons makes them more damaging to samples than light. Researchers must therefore keep the number of electrons scattered from fragile samples to the absolute minimum needed to build up a clear image.

In a STEM, an image is created by rapidly scanning a focused beam of electrons across a sample in a grid of pixels. Most of these electrons pass straight through the sample, but a small percentage are scattered sharply by collisions. Detectors that surround the beam path record these scattering events. The electron scattering rate from a particular point tells microscopists the density around that point, and thereby allows them to reconstruct an image of the sample.

Unnecessary radiation damage

Normally, the same number of incident electrons is fired at each pixel and the number of scattered electrons is counted. To create enough collisions at weakly scattering regions to resolve them properly, strongly scattering regions are exposed to far more incident electrons than necessary. As a result, samples may suffer unnecessary radiation damage.

In the new work, electron microscopists led by Jonathan Peters and Lewys Jones at Trinity College Dublin, together with Bryan Reed of Integrated Dynamic Electron Solutions in the US and colleagues in the UK and Japan, inverted the traditional measurement protocol by measuring the time required to achieve a fixed number of scattered electrons from every pixel. Jones offers an analogy: “If you look at the weather forecast on TV you see the rainfall in millimetres per hour,” he says; “If you look at how that’s measured by weather forecasters they go and put a beaker outside in the rain and, one hour later, they see how much is in the beaker…If I ask you how hard it’s raining, you’re going to go outside, stick your hand out and see how long it takes for, say, three drops to hit your hand…After you’ve reached some fixed [number of drops], you don’t wait for the rest of the hour in the rain.”

Event response

The researchers implemented an event-responsive microscopy protocol in which the individual scattered electrons from each pixel are recorded and this information is fed back to the electron microscope. After the set number of scattered electrons is recorded from each individual pixel, a “beam blanker” is switched on until the end of the normal pixel waiting time. “A powerful voltage is applied to skew the beam off into the sidewall,” explains Jones. “It has the same effect of opening and closing a shutter on a camera.” This allowed the researchers to measure the scattering rate from all the sample points without subjecting any of them to unnecessary electron flux. “It’s not a slow process,” says Jones; “The image is formed in front of the user in real-time.”
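A toy Monte Carlo conveys the dose saving. The per-pixel scattering probabilities, dwell length and target count below are invented for illustration – they are not the parameters used by the Dublin team – but the logic mirrors the protocol: once a pixel has yielded its target number of scattered electrons, the remaining dose it would have received in a fixed-dwell scan is simply not delivered.

```python
import numpy as np

# Toy Monte Carlo (illustrative, not the published implementation): compare
# the electron dose delivered per pixel by a conventional fixed-dwell scan
# with an event-responsive scan that blanks the beam once a target number of
# scattered electrons has been counted.
rng = np.random.default_rng(1)

n_pixels = 10_000
scatter_prob = rng.uniform(0.01, 0.5, n_pixels)   # per-electron scattering probability
dwell_electrons = 200                              # electrons per pixel, conventional scan
target_counts = 10                                 # scattered electrons needed per pixel

conventional_dose = np.full(n_pixels, dwell_electrons)

# Event-responsive: electrons needed to record `target_counts` scattering
# events follows a negative binomial distribution; cap at the normal dwell.
responsive_dose = np.minimum(
    rng.negative_binomial(target_counts, scatter_prob) + target_counts,
    dwell_electrons,
)

saving = 1 - responsive_dose.mean() / conventional_dose.mean()
print(f"average dose saving in this toy model: {100 * saving:.0f}%")
# Strongly scattering pixels reach the target quickly and are spared most of
# their usual dose; weakly scattering pixels still receive the full dwell.
```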

The researchers used their new protocol to produce images of biologically and chemically fragile samples with little to no radiation damage. They now hope it will prove possible to produce electron micrographs of samples such as some catalysts and drug molecules that are currently obliterated by electron beams before an image can be formed. They are also exploring the protocol’s use in other imaging techniques such as electron energy loss spectroscopy and X-ray microscopy. “It will probably take a number of years for us and other groups to fully unpick what such a fundamental shift in how measurements are made will mean for all the other kinds of techniques that people use microscopes for,” says Jones.

Electron microscopy expert Quentin Ramasse of the University of Leeds is enthusiastic about the work. “It’s inventive, it’s potentially changing the way we record data in a STEM and it’s doing so in a very simple fashion. It could provide an extra tool in our arsenal to not necessarily completely remove beam damage but certainly to minimize it,” he says. “It really is [the result of] clever electronics, clever hardware and a very clever take on how to drive the motion of the probe as a function of what the sample’s response has been up to that point.”

The research is described in Science.     

The post ‘Event-responsive’ electron microscopy focuses on fragile samples appeared first on Physics World.

Tsung-Dao Lee: Nobel laureate famed for work on parity violation dies aged 97 https://physicsworld.com/a/tsung-dao-lee-nobel-laureate-famed-for-work-on-parity-violation-dies-aged-97/ Mon, 05 Aug 2024 15:28:55 +0000 https://physicsworld.com/?p=115888 In the 1950s Lee proposed that "parity" is violated by the weak force

The post Tsung-Dao Lee: Nobel laureate famed for work on parity violation dies aged 97 appeared first on Physics World.

]]>
The Chinese-American particle physicist Tsung-Dao Lee died on 4 August at the age of 97. Lee shared half of the 1957 Nobel Prize for Physics with Chen Ning Yang for their theoretical work that overturned the notion that parity is conserved in the weak force – one of the four fundamental forces of nature. Known as “parity violation”, it was proved experimentally by, among others, Chien-Shiung Wu.

Born on 24 November 1926 in Shanghai, Lee began studying physics in 1943 at the National Chekiang University (now known as Zhejiang University) and, later, at National Southwest Associated University in Kunming. In 1946 Lee moved to the US on a Chinese government fellowship, joining the University of Chicago for a PhD under the guidance of Enrico Fermi, which he completed in 1950.

After his PhD, Lee worked at Yerkes Astronomical Observatory in Wisconsin, the University of California at Berkeley and the Institute for Advanced Study at Princeton before moving to Columbia University in 1953. Three years later, he became the youngest-ever full professor at Columbia, remaining at the university until retiring in 2011.

Looking in the mirror

It was at Columbia where Lee did his Nobel-prize-winning work on parity, which is a property of elementary particles that expresses their behaviour upon reflection in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. But since the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta.

These particles, also known as K-mesons, are identical except that the tau decays into three pions with a net parity of -1, while a theta particle decays into two pions with a net parity of +1. This puzzling observation meant that either the tau and theta are different particles or – controversially – that parity in the weak interaction is not conserved, with Lee and Yang proposing various ways to test their ideas (Phys. Rev. 104 254).

Wu, who was also working at Columbia, then suggested an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60. In what became known as the “Wu experiment”, she and colleagues from the National Bureau of Standards used a magnetic field to align the cobalt nuclei with their spins parallel, before counting the number of electrons emitted in both an upward and downward direction.

Wu and her team found that far more electrons were emitted downwards than upwards. If parity were conserved, the same pattern would appear in the mirror-image version of the experiment. Yet when the magnetic field was reversed – the equivalent of viewing the set-up in a mirror – more electrons were detected upwards, proving that parity is violated in the weak interaction.
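
In schematic terms, the quantity at stake is the up–down emission asymmetry with respect to the nuclear spin,

$$A = \frac{N_{\uparrow} - N_{\downarrow}}{N_{\uparrow} + N_{\downarrow}},$$

where $N_{\uparrow}$ and $N_{\downarrow}$ are the numbers of electrons counted along and against the spin direction. Because spin is unchanged by a mirror reflection while the electron momenta are flipped, parity conservation would force $A = 0$; the large non-zero asymmetry Wu measured is precisely what established the violation.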

For their work, Lee and Yang shared the 1957 Nobel Prize for Physics. Then just 30, Lee was the second-youngest Nobel-prize-winning scientist after Lawrence Bragg, who was 25 when he shared the 1915 Nobel Prize for Physics with his father, William Henry Bragg. It has been argued that Wu should have shared the prize too for her experimental evidence of parity violation, although the story is complicated because two other groups were also working on similar experiments at the same time.

Influential physicist

Lee went on to publish several books including Particle Physics and Introduction to Field Theory in 1981 and Science and Art in 2000. As well as the Nobel prize, he was also awarded the Albert Einstein Award in 1957 and the Matteucci Medal in 1995.

In the 1980s, Lee initiated the China-US Physics Examination and Application (CUSPEA) programme, which has since helped to train hundreds of physicists. He was also instrumental in the development of China’s first high-energy accelerator, the Beijing Electron-Positron Collider, which switched on in 1989.

Robert Crease, a historian from Stony Brook University who interviewed Lee many times, said that Lee also had a significant influence on the Brookhaven National Laboratory in New York. “He did some of his Nobel work there in the summer of 1956,” says Crease. “Lee and Yang would make regular Friday-afternoon trips to the local Westhampton beach where they would draw equations in the sand. They’d also yell at each other so loudly that others could sometimes hear them down the hall.”

Later, in the 1990s, Lee also played a role in the transition of Brookhaven’s ISABELLE proton-proton collider into the Relativistic Heavy-Ion Collider. “He was a mentor to many people at Brookhaven,” Crease adds. “He was artistic too – he made many sculptures – and was funny. I was honoured when Lee asked me to sign a copy of my edited autobiography of the theorist Robert Serber, who had adored him.”

“His groundbreaking contributions to his field have left a lasting impact on both theoretical and experimental physics,” noted Columbia University President Minouche Shafik in a statement.  “He was a beloved teacher and colleague for whom generations of Columbians will always be grateful.”

At a reception in 2011 to mark Lee’s retirement, William Zajc, chair of Columbia’s physics department, noted that it was “impossible to overstate [Lee’s] influence on the department of physics, on Columbia and on the entire field of physics.”

Lee, on the other hand, noted that retirement is “like gardening”. “You may not be cultivating a new species, but you can still keep the old beautiful thing going on,” he added.

  • A memorial service in honour of Lee will be held at 9.00 a.m. (CST) on 25 August 2024 at the Tsung-Dao Lee Institute in Shanghai, China, with an online stream in both English and Chinese. More information, including an invitation for colleagues to share condolences, photos or video tributes, is available on the Tsung-Dao Lee memorial website.

The post Tsung-Dao Lee: Nobel laureate famed for work on parity violation dies aged 97 appeared first on Physics World.

]]>
News In the 1950s Lee proposed that "parity" is violated by the weak force https://physicsworld.com/wp-content/uploads/2024/08/TD-Lee-at-CERN-small.jpg newsletter
Introducing Python for electrochemistry research https://physicsworld.com/a/introducing-python-for-electrochemistry-research/ Mon, 05 Aug 2024 13:21:33 +0000 https://physicsworld.com/?p=114400 Available to watch now, The Electrochemical Society, in partnership with BioLogic and  Gamry Instruments, explores the advantages of using Python in your electrochemical research

The post Introducing Python for electrochemistry research appeared first on Physics World.

]]>

To understand electrochemical behaviour and reaction mechanisms, electrochemists must analyze the correlation between current, potential, and other parameters, such as in situ information. As the experimental dataset becomes larger and the analysis task gets more complex, one may spend days sorting data, fitting models, and repeating these routine procedures. Moreover, sharing the analysis procedure and reproducing the results can be challenging as different commercial software, parameters, and steps can be involved. Therefore, an open-source, free, and all-in-one platform for electrochemistry research is needed.

Python is an interpreted programming language that has emerged as a transformative force within the scientific community. Its syntax prioritizes readability and simplicity, making analyses easy to reproduce and to share across platforms. Furthermore, its rich ecosystem of community-provided packages supports a wide range of electrochemical tasks, from data analysis and visualization to fitting and simulation.

This webinar presents a general introduction to using Python for electrochemists new to programming concepts. Starting with the basic concepts, Python’s capability in electrochemistry research is demonstrated with examples, from data handling, treatment, fitting, and visualization to electrochemical simulation. Suggestions and resources on learning Python are provided.
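
As a flavour of what such a workflow looks like in practice, the snippet below loads a voltammogram, plots it and fits a peak-current scaling using the standard scientific Python stack. The file name, column layout and numerical values are placeholders rather than anything taken from the webinar itself.

```python
# Minimal electrochemistry workflow sketch: load, plot and fit data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Cyclic voltammogram exported as a two-column CSV: potential (V), current (A)
cv = pd.read_csv("cv_scan.csv", names=["potential_V", "current_A"], skiprows=1)

fig, ax = plt.subplots()
ax.plot(cv["potential_V"], cv["current_A"] * 1e6)
ax.set_xlabel("Potential (V vs reference)")
ax.set_ylabel("Current (µA)")
fig.savefig("cv_scan.png", dpi=200)

# Randles-Sevcik check: peak current should scale with the square root of scan rate
scan_rate = np.array([0.01, 0.02, 0.05, 0.10])              # V/s
peak_current = np.array([2.1e-6, 3.0e-6, 4.7e-6, 6.6e-6])   # A, illustrative values
slope, intercept = np.polyfit(np.sqrt(scan_rate), peak_current, 1)
print(f"Randles-Sevcik slope: {slope:.2e} A (V/s)^-1/2")
```

The same few lines replace what would otherwise be a manual export–import–fit cycle in commercial software, and the script itself can be shared alongside the data.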

An interactive Q&A session follows the presentation.

Zheng Weiran

Weiran Zheng is an associate professor in chemistry at the Guangdong Technion-Israel Institute of Technology (GTIIT), China. His research focuses on understanding the activation and long-term deactivation mechanisms of electrocatalysts from an atomic scale using operando techniques such as spectroscopy and surface probe microscopy. He is particularly interested in water electrolysis, ammonia electrooxidation, and sensing. His research also involves a fundamental discussion of current experimental electrochemistry for better data accountability and reproducibility. Weiran Zheng received his BS (2009) and PhD (2015) from Wuhan University. Before joining GTIIT, he worked as a visiting researcher at the University of Oxford (2012–2014) and a research fellow at the Hong Kong Polytechnic University (2016–2021).

The Electrochemical Society

 

The post Introducing Python for electrochemistry research appeared first on Physics World.

]]>
Webinar Available to watch now, The Electrochemical Society, in partnership with BioLogic and  Gamry Instruments, explores the advantages of using Python in your electrochemical research https://physicsworld.com/wp-content/uploads/2024/05/python.png
MR-guided radiotherapy: where are we now and what does the future hold? https://physicsworld.com/a/mr-guided-radiotherapy-where-are-we-now-and-what-does-the-future-hold/ Mon, 05 Aug 2024 12:00:08 +0000 https://physicsworld.com/?p=115858 Speakers at the recent AAPM Annual Meeting examined the clinical impact and future potential of MR-guided radiotherapy

The post MR-guided radiotherapy: where are we now and what does the future hold? appeared first on Physics World.

]]>
Aurora-RT MR-linac

The past few decades have seen MR-guided radiotherapy evolve from an idea on the medical physicists’ wish list to a clinical reality. At the recent AAPM Annual Meeting, experts in the field took a look at three MR-linac systems, the clinical impact of this advanced treatment technology and the potential future trajectory of MR-guided radiotherapy.

Millimetres matter

Maria Bellon from Cedars-Sinai (speaking on behalf of James Dempsey and ViewRay Systems) began the symposium with an update on the MRIdian, an MR-guided radiotherapy system that combines a 6 MV linac with a 0.35 T MRI scanner. She explained that ViewRay Systems was formed in early 2024 to save the MRIdian technology following the demise of ViewRay Technologies.

Bellon described ViewRay’s quest to minimize treatment margins – the region of tissue outside the tumour that is deliberately irradiated. In radiotherapy, geometric margins are necessarily added to account for microscopic disease or uncertainties. “But millimetres matter when it comes to improving outcomes for cancer patients,” she said.

The MRIdian A3i, the company’s latest platform, is designed to minimize margins and maximize accuracy using three key features: auto-align, auto-adapt and auto-target. Auto-align works by aligning a very sharp beam to high-resolution images of the soft tissues to be targeted or spared. The auto-adapt workflow begins with the acquisition of a high-resolution 3D MRI for localization. Within 30 s, it automatically performs image registration, contour mapping, predicted dose calculation, IMRT plan re-optimization, best plan selection and plan QA.

Once treatment begins, auto-targeting is employed to deal with organ motion. The treatment beam is controlled by the MR images and only turned on when the tumour lies within defined margins. Organ motion can also cause interplay effects, in which the dose distribution contains gaps or areas of overlap that result in hot and cold spots. Larger margins can worsen this effect – another reason to keep them as small as possible.
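
The gating logic itself is conceptually simple: the beam stays on only while the tracked target sits inside the planned margin. The sketch below is an illustrative toy in Python – the function, the 2D geometry and the numbers are assumptions for illustration, not any vendor’s control code.

```python
# Toy model of MR-guided gating: beam on only while the target is within the margin.
import math

def gate_beam(tracked_positions_mm, planned_centre_mm=(0.0, 0.0), margin_mm=2.0):
    """Yield True (beam on) or False (beam held) for each tracked target position."""
    cx, cy = planned_centre_mm
    for x, y in tracked_positions_mm:
        yield math.hypot(x - cx, y - cy) <= margin_mm

# Simulated cine-MR track: the target drifts outside the 2 mm margin, then returns
track = [(0.5, 0.2), (1.1, 0.9), (2.4, 1.5), (1.3, 0.4), (0.2, 0.1)]
print(list(gate_beam(track)))   # [True, True, False, True, True]
```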

The MRIdian MR-linac

Bellon shared some clinical studies demonstrating how margins matter. The MIRAGE trial, for example, showed that 2 mm margins and MR-guided radiotherapy resulted in significantly lower toxicity for prostate cancer patients than 4 mm margins and CT guidance. Elsewhere, the multicentre SMART trial treated pancreatic cancer with a 3 mm margin, which improved two-year overall survival with few to no higher-grade GI toxicities.

“This is actual evidence that reducing margins, making them real, controlling them, will improve outcomes for patients,” she noted.

Looking to the future, could sub-millimetre margins be achievable? Bellon described how a new head coil and submillimetre-resolution imaging can enable frameless MRI-guided stereotactic radiosurgery (SRS) on the A3i platform. To date, the team has investigated phantoms and healthy volunteers. “I think that it would be a really great advantage of the system to step into the SRS space,” she said.

“Innovation remains at the forefront for ViewRay Systems as they continue to strive to image faster, image in more directions and planes, and use more automation and innovation to control margins and make them smaller than ever,” said Bellon.

Mitigating motion

The second speaker, Bas Raaymakers from UMC Utrecht, discussed the Elekta Unity, an MR-linac envisaged back in 1999 by Raaymakers and his colleague Jan Lagendijk, and designed and built in collaboration with industrial partners Elekta and Philips.

Unity comprises a ring-gantry-mounted linac integrated with a 1.5 T MRI. Raaymakers described some of the clinical opportunities conferred by such MR guidance. For starters, high-precision dose delivery with small margins enables the use of fewer treatment fractions. For adrenal gland and prostate treatments, the Utrecht team has moved from 20 to five fractions, and is studying ultra-hypofractionation down to just one or two.

Precise dose delivery also protects organs-at-risk and could enable delivery of higher doses to hard-to-treat cancers, such as pancreatic or renal cell cancer, where surrounding tissues are highly radiosensitive. “This is the future of MR-guided radiotherapy, this gives all kinds of opportunities that we do not have now,” Raaymakers said.

The Unity can track all types of motion – breathing motion, drifts or sudden movements – in real time and in 3D. The system’s comprehensive motion management (CMM) system performs two orthogonal cine MR scans and then uses these scans to perform gating and intrafraction drift correction (in which the treatment centre is changed to correct for drifts). Treatments with CMM began last year and analysis of the first seven patients showed that the gating works and improves conformality.

Raaymakers described how CMM combined with high soft-tissue contrast enables prostate cancer treatments in five fractions with 2 mm margins. To minimize intrafraction motion, a necessity for such small margins, the Utrecht team developed a regime in which a new plan is created halfway through the treatment. This replanning reduced the residual motion at the end of the treatment enough to enable 2 mm margins.

The team also investigated the use of drift correction halfway through the fraction and found that, dosimetrically, it was the same as the replanning approach. “The whole effort of replanning can also be done with drift correction,” said Raaymakers. “Now we can do prostate treatment in 30 minutes, with 2 mm margins. This will be used, and we will explore how we can use drift correction for all types of treatment.”

The Elekta Unity MR-linac

The ultimate aim, Raaymakers said, is to reach the position where we don’t worry about patient motion at all. For example, is it possible to treat a beating heart? As an example, he described the MEGASTAR study of MR-guided stereotactic arrythmia radioablation, in which MRI is used to follow the beating heart and MLC tracking employed to accurately hit the target.

Raaymakers concluded with a look at the future impact of MR-guided radiotherapy. He noted that radiotherapy is a low-cost technology used to treat 50% of all cancer patients and that MR guidance can improve it further, via hypofractionation, smarter workflows, smaller margins and reduced toxicity.

“I think this is an option to shift from invasive treatments towards radiotherapy; we can postpone surgery for certain patients or omit surgery for others,” he said. “This is something we should strive for in MR-guided radiotherapy, to make this message clear to the rest of the oncology world.”

The practical MR-linac

The final speaker in the symposium was Gino Fallone from the Cross Cancer Institute (CCI), the University of Alberta and MagnetTx Oncology Solutions. Fallone introduced the Aurora-RT, a rotating MR-linac that combines a 6 MV linac with a 0.5 T biplanar MRI with a beam stop. The system was first prototyped in 2008 and is now FDA approved and CE Marked.

The unique feature of the Aurora-RT is that it can be used in two configurations: with horizontal magnets and the beam perpendicular to the magnetic field, or with vertical magnets and the beam parallel to B0. Fallone noted that the parallel configuration is the clinical product as it significantly reduces dose perturbations and enables large 3D couch shifts.

Fallone told the audience why MagnetTx chose to use 0.5 T MRI. Knowing that the system would require fast imaging techniques, such as bSSFP (balanced steady-state free precession), the team assessed the contrast-to-noise ratio for bSSFP at various field strengths, and found that it was greatest at 0.5 T. “While image quality is determined by the signal-to-noise ratio, which does go up with magnetic field, contrast-to-noise is also critical,” he explained.

The Aurora-RT has a wide bore of 110 x 60 cm, reducing patient claustrophobia and increasing throughput. This large opening also enables significant couch motion of ±23 cm in the vertical and lateral directions, allowing treatments to be performed in the same way as conventional radiotherapy and improving the clinical flow. “You can place the target at the planned location every time, you don’t have to do online replanning for every single patient,” said Fallone. “Such a large couch motion also allows isocentric treatment of peripheral targets.”

To track and treat moving organs, the CCI researchers developed a technique called NifteRT, or non-invasive intrafraction tumour tracked radiotherapy. The approach involves MR imaging at 4 frames/s, autocontouring, and tumour motion prediction for each patient. The predicted tumour position is then used to control the MLC to shape and position the beam to the target.

Fallone emphasized that the team employs a lot of AI and deep learning. “This allowed us to do faster imaging without creating distortions, it allowed us to do very accurate segmentation and it allowed us to do tumour tracking with prediction and irradiation,” he explained.

The Aurora-RT was designed with simplicity and cost reduction in mind. The system can be sited in any typically sized vault, installed through the vault door maze, and does not require a cryogen exhaust vent. Because the Aurora-RT has a beamstop, shielding is required only for scattered radiation, reducing site costs. Once installed, the system runs without needing liquid helium or any liquid cryogens, reducing operating costs. The magnet can be turned on or off in minutes, improving research and service operations. It also uses many existing radiotherapy techniques, for example, existing ion chambers, laser setup and table shifts.

Fallone concluded that the Aurora-RT offers increased throughput, decreased claustrophobia, no process changes, significantly reduced dose perturbations for safer delivery and improved MR guidance via use of the 0.5 T “sweet spot”. Simplified installation in any vault, without the need for an exhaust vent or shielding for the primary radiation beam, decreases installation and operating costs.

Prove its worth

Having discussed the advantages of and clinical evidence for MR-guided radiotherapy, the speakers were asked why MR-linacs still only comprise 2% of the market and why users appear slow to adopt this approach.

“Throughput is a constant conversation that we’re having, despite the fact that yearly throughput tends to be high because a lot of treatments can be hypofractionated,” said Bellon. “On-table adaptive is very intimidating for people, but I don’t know why it’s still considered a niche treatment.”

Raaymakers believes that the conservatism of the medical field is working against them. “Right now, it’s a lot of hassle and there’s no proof…We have to prove that it’s really worth it and hopefully then adoption will get faster.”

Fallone suggests that medical physicists are too scared of MR and that MR-linacs are still too expensive. “We know MRI is better than CT, now we have to convince the bosses,” he said. “If you get a better image you will treat better; there’s nothing to prove.”

The post MR-guided radiotherapy: where are we now and what does the future hold? appeared first on Physics World.

]]>
Analysis Speakers at the recent AAPM Annual Meeting examined the clinical impact and future potential of MR-guided radiotherapy https://physicsworld.com/wp-content/uploads/2024/08/5-08-24-MR-linacs.jpg newsletter
Why NASA thinks you should forget about space-based solar power https://physicsworld.com/a/why-nasa-thinks-you-should-forget-about-space-based-solar-power/ Mon, 05 Aug 2024 09:14:05 +0000 https://physicsworld.com/?p=115701 James McKenzie thinks a new NASA report marks the end of the road for space-based solar power

The post Why NASA thinks you should forget about space-based solar power appeared first on Physics World.

]]>
The other day I was watching the hugely entertaining Amazon Prime documentary series Clarkson’s Farm, which depicts the broadcaster Jeremy Clarkson’s attempts to run a farm in Oxfordshire. In one episode, Clarkson is named the National Farming Union’s “farming champion for 2021” for highlighting the challenges farmers face in making a living from the land. Particularly difficult for him are the rules that let local planning officials stop him from doing stuff that he feels ought to be allowed.

Clarkson appealed against some of the decisions and eventually won his case. But his experience inspired me to look into the UK’s planning system to see how objections have slowed the progress of wind farms and solar farms to a snail’s pace. Despite it being government policy to deploy more of these renewable forms of energy, I soon discovered that the country’s thorough but overly bureaucratic planning process is being hijacked by the “not in my back yard” (NIMBY) brigade.

Space-based solar power is not a new idea of course, first being mooted in a 1941 science-fiction short story by Isaac Asimov

These are people who want all the benefits and upsides of renewable energy systems – so long as they’re installed somewhere else, well out of eyeshot. One comment I read even suggested that the best place for solar power farms would be in space. Having written about the favourable economics of photovoltaic panels and the unfavourable economics of  “solar concentrators”, I immediately wondered if “space-based solar power” could stack up financially let alone technically, especially in such an extreme and unforgiving environment.

Space-based solar power is not a new idea of course, first being mooted in a 1941 science-fiction short story by Isaac Asimov called Reason. It sounds simple in principle: all you have to do is place a solar array at a location in space where the Sun always shines. You then convert the electrical energy from the solar cells into microwaves and beam them to a ground station down on Earth, where they can be collected and turned into electricity for the grid.

Because the Sun’s always shining on the array, the electricity’s permanently on tap and there’s no need for storage. The upshot is that such an array – if it were ever built – would count as baseload generation like a coal, gas or nuclear plant. The UK government is certainly taking the idea seriously, having commissioned an independent report from Frazer-Nash Consultancy into space-based solar power back in 2021.

As Physics World discussed at the time in a news story and feature, the report examined two main concepts – the US-led SPS Alpha and the UK-led CASSIOPeiA. The report called for a thorough cost and economic analysis of both options, which surely is the whole point. The best way of doing this would be by using the “levelized cost of energy” (LCOE), which compares different energy-generation technologies taking all the various costs into account.
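
For readers who want to see what sits behind an LCOE figure, it is simply lifetime costs divided by lifetime generation, with both discounted to present value. Here is a minimal sketch, with made-up round numbers rather than anything from the reports discussed in this article.

```python
# Levelized cost of energy: discounted lifetime costs / discounted lifetime output.
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    factors = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
    costs = capex + annual_opex * sum(factors)   # build cost plus discounted running costs
    energy = annual_mwh * sum(factors)           # discounted lifetime generation
    return costs / energy                        # currency units per MWh

# Round, illustrative numbers purely to show the shape of the calculation
print(f"LCOE = {lcoe(capex=1.0e9, annual_opex=2.0e7, annual_mwh=1.5e6, lifetime_years=25, discount_rate=0.05):.0f} £/MWh")
```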

Given how many cool and fantastic technical ideas can be dashed on the rocks of reality by economics, I was intrigued to find that the UK government’s feasibility report had already crunched through the numbers. It said that space-based solar power has a 2050 projected LCOE of £50/MWh compared to £33/MWh for Earth-based solar farms (as of 2023 this sat at £41/MWh) and £96/MWh for large nuclear reactors.

Technical challenges

As far as the CASSIOPeiA project is concerned, it would consist of a 2000 tonne satellite roughly 1.7 km in diameter flying in a geosynchronous orbit about 35,800 km above the Earth. It would generate 3 GW of electricity that would be converted into microwaves with a frequency of 2.45 GHz, which can pass largely unhindered through the atmosphere and any clouds.

Getting 2000 tonnes of payload up into space wouldn’t be easy or cheap. The report estimates we’d need about 68 SpaceX Starship launches – an ambitious goal given that, at the time of writing, the company has never launched one of these rockets, let alone reused them. Although I am confident that SpaceX will succeed and that launch costs will fall, building a huge satellite of that kind isn’t the main hurdle.

CASSIOPeiA would also need a ground antenna receiver and grid interface in the form of an elliptical microwave receiver about 6.7 km by 13 km in size. Delivering 2 GW of power into the grid day and night, each receiver would be roughly equivalent to a single large nuclear power station. Overall, development costs are estimated to be an eye-watering £16.3bn and it would take 18 years to deploy. Still, assuming CASSIOPeiA is funded, and that all goes to plan, it could be powering your toaster and TV by 2042.

It should come as no surprise, though, that there are plenty of big technical challenges, with the UK report ranking 10 of the 13 crucial subsystems for the satellite as being of “high” or “very high” technical difficulty. As a high-level report, it naturally glosses over the practical details, but a more in-depth study has been carried out by Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands.

Published by the IEEE in its proceedings of the European Space Power Conference 2023, Barde’s report looks at issues such as how to cool solar cells and microwave systems that have gigawatts of power coursing through them. It also examines how to deal with temperature swings of about 300 °C a couple of dozen times a year as the satellite passes suddenly across the Earth’s shadow (so, no, they’re not actually “on all the time”). Barde gives an overview of the huge, if not insurmountable, technical challenges in the June 2024 issue of IEEE Spectrum.

Further cold water was poured on space-based solar power by NASA, which earlier this year published a detailed report from its Office of Technology, Policy and Strategy

As for the ground-based microwave receiver, it would have a power density of about 29 W/m² for the 2 GW produced and require about a third of the area of a conventional Earth-based solar-power plant. Given that the UK, where I am based, has an average solar power density of 10 W/m², covering roughly three times the receiver’s footprint with ordinary solar panels would deliver the same output. What’s more, the solar cells in such plants could (unlike in a microwave antenna) be sensibly spread out over a large area. That sounds very appealing, especially as there is zero technical risk and we could have them now (or nearly now), planning permitting.

NASA weighs in

Further cold water was poured on space-based solar power by NASA, which earlier this year published a detailed 91-page report from its Office of Technology, Policy and Strategy. In worrying news for the technology’s supporters, it concludes that a 2 GW solar-space facility would, by 2050, be “more expensive than terrestrial alternatives and may have lifecycle costs per unit of electricity that are 12–80 times higher”. Even the cheapest system, NASA says, would cost hundreds of billions of dollars.

The NASA report also assesses the overall emissions, in terms of equivalent carbon-dioxide emissions per kilowatt-hour, and reckons that space-based solar power would be higher than terrestrial alternatives. Although NASA admits the costs could be improved with investment, it diplomatically concludes “cost competitiveness may be achieved through a favourable combination of cost and performance improvements related to launch and manufacturing beyond the advancements assumed in the baseline assessment”.

Which, to me, sounds like a polite way of saying “we’re right, but if you think you can do better, go knock yourself out!”.

Solar energy does, however, have lots of potential, with 87% of the world’s nations able to power themselves using less than 5% of their land. The UK is not one of those lucky countries: one-eighth of the whole nation would have to be blanketed with solar panels to power itself. That’s a huge area, given that 6% of the country is already built on, although efficiency improvements are on the way with perovskite solar cells, which will help a bit.

But the economics contained in the NASA report are surely the end of the debate for space-based solar power. If I were spending my own money, I would much rather invest it in lots of terrestrial solar farms. After all, there’s no risk involved and it’s much cheaper. And you don’t need to take my word for it: the benefits of solar power were fully laid out last year in Tesla Corporation’s influential Master Plan 3.

The only snag seems to be getting planning permission to build those solar plants right now from local planning officials. In fact, wouldn’t it be great if Clarkson had a go at deploying a couple of fields full of photovoltaic solar panels at his farm in future episodes. That would test the reality of the situation – and make entertaining TV too.

The post Why NASA thinks you should forget about space-based solar power appeared first on Physics World.

]]>
Opinion and reviews James McKenzie thinks a new NASA report marks the end of the road for space-based solar power https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Transactions-solar-panels-in-space.jpg newsletter
Sound waves move objects in liquid https://physicsworld.com/a/sound-waves-move-objects-in-liquid/ Mon, 05 Aug 2024 08:00:39 +0000 https://physicsworld.com/?p=115833 New technique might be used in applications such as targeted drug delivery, micro-robotics and even additive manufacturing

The post Sound waves move objects in liquid appeared first on Physics World.

]]>

Researchers in Switzerland have found a way of using sound waves to manipulate objects in disordered environments such as liquids. Instead of trapping the objects as conventional optical and acoustic tweezers do, the new method guides them using pressure waves, and its developers at the Swiss Federal Institute of Technology in Lausanne (EPFL) say it could be useful for biomedical applications such as targeted drug delivery.

Optical tweezers were invented by the American physicist Arthur Ashkin, who shared the 2018 Nobel Prize for Physics for his role in their development. In these devices, a highly focused laser beam generates optical forces that hold and move micron- or nano-sized objects near the beam’s focus, where the electric field gradient is highest. Their acoustic counterparts work in a similar way, using ultrasonic waves to trap and move objects by creating focalization spots and vortices.

Both techniques are powerful tools for biological research and quantum optics. However, for them to work at their best, the medium through which the object moves must be strictly controlled. The new method overcomes this restriction because it does not require focusing the acoustic waves in the same way, explains Romain Fleury, who heads the Laboratory of Wave Engineering in EPFL’s School of Engineering.

“Instead of forming a vortex to trap and manipulate objects, the idea we developed is to create a hot spot in the pressure field that iteratively pushes the element to the target location, as a hockey stick pushes its puck,” says Fleury. “We call this technique wave-momentum shaping.”

Varying amplitude and phase

To implement their method, Fleury and colleagues generated sound waves at audible frequencies using an array of loudspeakers situated at either end of a tank filled with water. The target of these waves was a plastic ping-pong ball floating on the surface. By varying the amplitude and phase of the sound waves, the researchers were able to vary the wavefronts that reached the ball. These waves then interact with the medium and cause the ball to move.

Photo of the experimental setup

The researchers monitored the ball’s movement by using an array of microphones to detect the sound waves scattered off it. After sending out three random wavefronts and measuring the scattering matrix for slightly different configurations of the ping-pong ball, the researchers had enough information to deduce the optimal momentum of the acoustic wavefronts they needed to send to translate or rotate the ball however they wanted. They then repeated the procedure to move the ball across the tank, guiding it around obstacles on the way.
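
To get a feel for the feedback idea – probe how the available wavefronts push the object, then keep firing the one that best advances it towards the target – here is a deliberately crude toy model. It is a caricature of a greedy feedback loop under invented assumptions, not the wave-momentum-shaping algorithm reported in the paper.

```python
# Toy feedback loop: pick, at each step, the wavefront whose (noisily measured)
# push best advances the object towards the target. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
pushes = rng.normal(scale=0.5, size=(8, 2))                     # true push of each wavefront (cm/step)
measured = pushes + rng.normal(scale=0.05, size=pushes.shape)   # noisy calibration probes

pos, target = np.zeros(2), np.array([5.0, 3.0])
for _ in range(60):
    to_go = target - pos
    if np.linalg.norm(to_go) < 0.3:
        break
    best = int(np.argmax(measured @ to_go))   # wavefront best aligned with the goal
    pos = pos + pushes[best]                  # object responds to the true (not measured) push
print(f"final position {np.round(pos, 2)}, {np.linalg.norm(target - pos):.2f} cm from target")
```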

“We were very thrilled when the technique worked for the first time,” Fleury tells Physics World.

Promising for non-invasive biomedical applications

While the researchers demonstrated their method with water and a ping-pong ball, Fleury says it will also work for non-spherical objects in more complex, uncontrolled environments – including some found in the human body. This makes it particularly promising for non-invasive biomedical procedures such as delivering drugs to tumour cells or moving cells around using microrobots.

The researchers are considering possible applications in additive manufacturing, too. For example, it might be possible to use the new acoustic tweezer method to arrange microparticles in specific patterns before solidifying them into complex parts.

In this study, which is detailed in Nature Physics, Fleury and colleagues used audible sound waves to move a macroscopic floating object. Their next goal is to miniaturize the technique so that they can implement it in a microscope with micron-sized objects immersed in liquid. They have received funding from the Swiss National Science Foundation (SNSF) to do this, and to lay the foundations for future applications in micro-robotics and biology.

The post Sound waves move objects in liquid appeared first on Physics World.

]]>
Research update New technique might be used in applications such as targeted drug delivery, micro-robotics and even additive manufacturing https://physicsworld.com/wp-content/uploads/2024/08/Low-Res_SM1_Moving-an-object-despite-disorder.jpg
Rumours spread like nuclear fission, say physicists https://physicsworld.com/a/rumours-spread-like-nuclear-fission-say-physicists/ Sun, 04 Aug 2024 14:10:24 +0000 https://physicsworld.com/?p=115875 Neutrons are rumours and people are uranium isotopes in new model

The post Rumours spread like nuclear fission, say physicists appeared first on Physics World.

]]>
It is no coincidence that “going viral” is used to describe how ideas spread on social media. Researchers have long used models of infectious disease to understand how information – and indeed misinformation – is rapidly disseminated.

But, according to the physicist Wenrong Zheng, these models can struggle to accurately describe how rumours spread.

“Infectious disease models mostly view the spread of rumours as a passive process of receiving infection, thus ignoring the behavioural and psychological changes of people in the real world, as well as the impact of external events on the spread of rumours,” explains Zheng, who is based at China’s Shandong Normal University.

To address this shortcoming, Zheng teamed up with Fengming Liu and Yingping Sun to create a model of how rumours spread that is inspired by the chain reaction of nuclear fission (atom splitting). This process begins with a uranium nucleus spontaneously splitting into two smaller nuclei and several neutrons. If these neutrons are absorbed by other uranium nuclei, it is more likely that those nuclei will split – thus setting off a chain reaction of fission.

The two most common isotopes of uranium are uranium-238 and uranium-235. The former must absorb multiple neutrons before it will split, whereas the latter will split after just one absorption.

Multiple reception

In the trio’s model, a neutron travelling through a piece of uranium is the rumour. A uranium-235 nucleus is a person who immediately disseminates the rumour upon receiving it. A uranium-238 nucleus is a person who must receive the rumour several times before disseminating it.

“When individuals encounter rumours, they are influenced by their personal interests and decide whether to spread or whether repeated exposure is needed before spreading,” explains Zheng. “Based on different considerations of uranium fission thresholds, individuals are divided into groups based on the influence of their own interest thresholds, fully considering individual behaviour and differences, which is more in line with the reality.”
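
A toy agent-based version of the analogy makes the threshold idea concrete. The sketch below is an illustration with arbitrary parameters, not the published model.

```python
# Toy rumour-as-fission model: "U-235" agents spread on first exposure,
# "U-238" agents need several exposures before they pass the rumour on.
import random

random.seed(7)
N = 1000
# 30% of agents behave like U-235 (threshold 1), the rest like U-238 (threshold 3)
threshold = [1 if random.random() < 0.3 else 3 for _ in range(N)]
exposures = [0] * N
spreaders = {0}          # one person starts the rumour

for step in range(1, 31):
    newly_convinced = set()
    for s in spreaders:
        for contact in random.sample(range(N), 5):   # each spreader tells five people
            exposures[contact] += 1
            if contact not in spreaders and exposures[contact] >= threshold[contact]:
                newly_convinced.add(contact)
    spreaders |= newly_convinced
    print(f"step {step:2d}: {len(spreaders)} spreaders")
    if len(spreaders) == N:
        break
```

Even in this crude version, the rumour ticks over slowly while most agents are still below their exposure thresholds before taking off – consistent with the authors’ point that the slow early phase is a window in which misinformation could be countered.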

The researchers conclude that their model is better than some infectious disease models at mimicking the real-life spreading of rumours. They also suggest that rumours often spread slowly at first, which could mean that the spread of misinformation could be countered.

Their model is described in AIP Advances.

The post Rumours spread like nuclear fission, say physicists appeared first on Physics World.

]]>
Blog Neutrons are rumours and people are uranium isotopes in new model https://physicsworld.com/wp-content/uploads/2024/08/4-8-24-rumours-as-fission.jpg
Smashing heavier ions creates superheavy livermorium https://physicsworld.com/a/smashing-heavier-ions-creates-superheavy-livermorium/ Sat, 03 Aug 2024 14:13:16 +0000 https://physicsworld.com/?p=115866 New technique brings an island of stability closer

The post Smashing heavier ions creates superheavy livermorium appeared first on Physics World.

]]>
Physicists have used a beam of titanium-50 to create the element livermorium. This is the first time that nuclei heavier than calcium-48 have been used to synthesize a superheavy element. The international team, led by Jacklyn Gates at Lawrence Berkeley National Laboratory (LBNL) in California, hopes that their approach could pave the way for the discovery of entirely new elements.

Superheavy elements are found at the bottom right of the periodic table and have atomic numbers greater than 103. Creating and studying these huge elements pushes our experimental and theoretical capabilities and provides new insights into the forces that hold nuclei together.

Techniques for synthesizing these elements have vastly improved over the decades, and usually involve the irradiation of actinide targets (elements with atomic numbers between 89–102) with beams of transition metal ions.

Earlier in this century, superheavy elements were created by bombarding actinides with beams of calcium-48. “Using this technique, scientists managed to create elements up to oganesson, with an atomic number of 118,” says Gates. Calcium-48 is especially suited for this task because of its highly stable configuration of protons and neutrons, which allows it to fuse effectively with target nuclei.

Short-lived and difficult

Despite these achievements, the discovery of new superheavy elements has stalled. “To create elements beyond oganesson, we would need to use targets made from einsteinium or fermium,” Gates explains. “Unfortunately, these elements are short-lived and difficult to produce in large enough quantities for experiments.”

To try to move forward, physicists have explored alternative approaches. Instead of using heavier and less stable actinide targets, researchers considered how lighter, more stable actinide targets such as plutonium (atomic number 94) would interact with beams of heavier transition metal isotopes.

Several theoretical studies have proposed that new superheavy elements could be produced using specific isotopes of transition metals, such as titanium, vanadium, and chromium. These studies largely agreed that titanium-50 has the highest reaction cross-section with actinide elements, giving it the best chance of producing elements heavier than oganesson.

However, there is significant uncertainty surrounding the nuclear mechanisms involved in these reactions, which have hindered experimental efforts so far.

Theoretical decrease

“Based on theoretical predictions, we expected the production rate of superheavy elements to decrease when beams beyond calcium-48 were used to bombard actinide targets,” Gates explains. “However, we were unsure about the extent of this decrease and what it would mean for producing elements beyond oganesson.”

To address this uncertainty, Gates’ team implemented a reaction that has been explored in several theoretical studies – by firing a titanium-50 beam at a target of plutonium-244. Based on the nuclear mechanisms involved, this reaction has been predicted to produce the superheavy element livermorium, which has an atomic number of 116.
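
In bookkeeping terms this is a fusion–evaporation reaction: charge and mass number must balance, so the two nuclei fuse into an excited compound nucleus of element 116, which then sheds a few neutrons. Schematically (this is the standard picture rather than a detail taken from the preprint),

$$^{50}_{22}\mathrm{Ti} + \,^{244}_{94}\mathrm{Pu} \;\longrightarrow\; ^{294}_{116}\mathrm{Lv}^{*} \;\longrightarrow\; ^{294-x}_{116}\mathrm{Lv} + x\,n,$$

so the livermorium-290 reported below corresponds to the evaporation of four neutrons.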

To create the titanium-50 beam, the researchers used LBNL’s VENUS ion source. This uses a superconducting magnet to contain a plasma of highly ionized titanium-50. They then accelerated the ions using LBNL’s 88-Inch Cyclotron facility. After the reaction, the Berkeley Gas-filled Separator isolated livermorium nuclei from other reaction products. This allowed the team to measure the chain of products created as the nuclei decayed.

Altogether, the team detected two decay paths that could be attributed to livermorium-290. This is especially significant because the isotope is thought to lie tantalizingly close to an “island of stability” in the chart of the nuclides. This comprises a group of superheavy nuclei that physicists predict are highly resistant to decay through spontaneous fission. This gives these nuclei vastly longer half-lives compared with lighter isotopes of the same elements.

If the island is reached, it could be a crucial stepping stone for synthesizing new elements beyond oganesson. For now, Gates’ team is hopeful that its result could pave the way for a new wave of experiments, and plans to use the titanium-50 beam to bombard a heavier target of californium-249. If these experiments see similar levels of success, they could be a crucial next step toward discovering even heavier superheavy elements.

The research is described in a preprint on arXiv.

The post Smashing heavier ions creates superheavy livermorium appeared first on Physics World.

]]>
Research update New technique brings an island of stability closer https://physicsworld.com/wp-content/uploads/2024/08/3-8-24-LBNL-nuclear-physics.jpg
Vera C Rubin Observatory’s secondary mirror successfully installed https://physicsworld.com/a/vera-c-rubin-observatorys-secondary-mirror-successfully-installed/ Fri, 02 Aug 2024 14:45:10 +0000 https://physicsworld.com/?p=115861 The Vera C Rubin Observatory will conduct a decade-long survey of the southern hemisphere sky when operational in 2025

The post Vera C Rubin Observatory’s secondary mirror successfully installed appeared first on Physics World.

]]>
The secondary mirror belonging to the Simonyi Survey Telescope has been installed at the Vera C Rubin Observatory, which is based in Cerro Pachón in the Andes.

At 3.5 m in diameter, the secondary mirror is one of the largest convex mirrors ever made and is the first permanent component of the observatory’s optical system to be installed.

The glass mirror was made by Corning Advanced Optics and then polished by L3Harris Technologies, both based in New York.

The Vera C Rubin Observatory will conduct a decade-long survey of the southern hemisphere sky, which is known as the Legacy Survey of Space and Time (LSST). The main component is the LSST camera – a 3200 megapixel instrument – that has taken almost two decades to build.

Engineers will soon begin re-installing the Commissioning Camera, a smaller version of the LSST Camera that will be used to test the optical systems, including both the primary and secondary mirrors.

The observatory’s 8.4 m primary mirror will be installed later this month before the LSST Camera is added before the end of the year.

Sandrine Thomas, deputy director for Rubin Observatory Construction, says that the installation of the secondary mirror feels like entering “the home stretch” towards completion. “Now we have glass on the telescope [it] brings us a thrilling step closer to revolutionary science with Rubin,” she says.

The observatory is expected to begin observing the universe next year.

The post Vera C Rubin Observatory’s secondary mirror successfully installed appeared first on Physics World.

]]>
Blog The Vera C Rubin Observatory will conduct a decade-long survey of the southern hemisphere sky when operational in 2025 https://physicsworld.com/wp-content/uploads/2024/08/02-08-24-Vera-Rubin-small.jpg
Twisted carbon nanotubes store more energy than lithium-ion batteries https://physicsworld.com/a/twisted-carbon-nanotubes-store-more-energy-than-lithium-ion-batteries/ Fri, 02 Aug 2024 13:00:44 +0000 https://physicsworld.com/?p=115821 Mechanical energy storage could be a safer way of powering some medical devices

The post Twisted carbon nanotubes store more energy than lithium-ion batteries appeared first on Physics World.

]]>
Mechanical watches and clockwork toys might seem like relics of a bygone age, but scientists in the US and Japan are bringing this old-fashioned form of energy storage into the modern era. By making single-walled carbon nanotubes (SWCNTs) into ropes and twisting them like the string on an overworked yo-yo, Katsumi Kaneko, Sanjeev Kumar Ujjain and colleagues showed that they can store twice as much energy per unit mass as the best commercial lithium-ion batteries. The nanotube ropes are also stable at a wide range of temperatures, and the team say they could be safer than batteries for powering devices such as medical sensors.

SWCNTs are made from sheets of pure carbon just one atom thick that have been rolled into a straw-like tube. They are impressively tough – five times stiffer and 100 times stronger than steel – and earlier theoretical studies by team member David Tománek and others suggested that twisting them could be a viable means of storing large amounts of energy in a compact, lightweight system.

Making and measuring nanotube ropes

To confirm this, the team needed to overcome two challenges. The first was finding the best way of making energy-storing ropes from commercially-available SWCNT materials. After testing various methods, the team settled on a yarn-like rope treated with thermoplastic polyurethane, which accelerates the elastic deformation of individual nanotubes and improves their ability to “share the load” with others.

The second challenge was to measure energy stored in ropes which, at only microns in diameter, are much thinner than a human hair. “This small size made it hard to handle and measure them accurately,” says Kumar Ujjain, an assistant research scientist at the University of Maryland-Baltimore County’s Center for Advanced Sensor Technology (UMBC-CAST) who began the project while working with Kaneko at Shinshu University.

The team’s solution was to develop an instrument that combines a motor for twisting the sample with a laser displacement gauge to measure how much torque the strained rope exerts. By adding a microscope and high-speed camera, the scientists could track how much force and twisting the ropes experienced in real time. “This precise measurement was crucial for determining how much energy the ropes could store,” Kumar Ujjain says.

To measure the stored energy, the scientists added a load to the twisted rope and monitored its rotation as the rope unwound. The maximum gravimetric energy density (that is, the energy available per unit mass) they measured was 2.1 MJ/kg (583 Wh/kg). While this is lower than the most advanced lithium-ion batteries, which last year hit a record of 700 Wh/kg, it is much higher than commercial versions, which top out at around 280 Wh/kg. The SWCNT ropes also maintained their performance over at least 450 twist-release cycles, and Kumar Ujjain says they have other advantages, too.
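
The unit conversion behind that comparison is straightforward: one watt-hour is 3600 joules, so

$$2.1\ \mathrm{MJ\,kg^{-1}} = \frac{2.1\times10^{6}\ \mathrm{J\,kg^{-1}}}{3600\ \mathrm{J\,(Wh)^{-1}}} \approx 583\ \mathrm{Wh\,kg^{-1}},$$

which is a little over twice the roughly 280 Wh/kg of today’s commercial lithium-ion cells.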

“Storing energy in mechanically twisted carbon nanotube ropes is generally safer than using chemical energy storage, such as in lithium-ion batteries, which can pose risks like fires or explosions,” he explains. “The energy in these twisted ropes is purely mechanical and doesn’t involve hazardous chemicals.”

Managing and exploiting stored energy

One possible application for a chemically safe, biocompatible energy-storage system would be in medical sensors. The UMBC-CAST team is developing a stretchable, porous CO2 sensor that can be applied directly to a patient’s skin, and Kumar Ujjain says that a micro-generator based on twisted nanotube ropes could be a good way of powering it. Getting to that point will, however, require additional research focused on scaling up the nanotube ropes, integrating them with existing devices, and above all, developing mechanisms for releasing the stored energy in a controlled, predictable way.

“There is a risk if the ropes are twisted too tightly,” Kumar Ujjain explains. “In such cases, the tension could suddenly release, like an over-tightened spring in a clockwork watch, potentially causing damage.” Proper handling and safety measures should, he says, make this risk manageable.

The study is described in Nature Nanotechnology.

The post Twisted carbon nanotubes store more energy than lithium-ion batteries appeared first on Physics World.

]]>
Research update Mechanical energy storage could be a safer way of powering some medical devices https://physicsworld.com/wp-content/uploads/2024/08/02-08-2024-Futuristic-CNT-ropes_Credit-Shigenori-UTSUMI_web.jpg newsletter1
Icy exoplanet found to be potentially habitable https://physicsworld.com/a/icy-exoplanet-found-to-be-potentially-habitable/ Fri, 02 Aug 2024 08:00:50 +0000 https://physicsworld.com/?p=115840 Researchers discover that a temperate exoplanet may have an atmosphere, could be covered in ice, and may even have an ocean of liquid water

The post Icy exoplanet found to be potentially habitable appeared first on Physics World.

]]>
A research team headed up at the University of Montreal has discovered that the temperate exoplanet LHS 1140 b may have an atmosphere, could be covered in ice, and may even have an ocean of liquid water. If confirmed, this would make it only the third known planet in its host star’s habitable zone to have an atmosphere, after Earth and Mars.

Profound implications

LHS 1140 b, discovered in 2017, is a highly studied exoplanet with observations obtained by several telescopes, including the Transiting Exoplanet Survey Satellite (TESS), the Hubble Space Telescope, the Spitzer Space Telescope and the ESPRESSO spectrograph on the ESO/Very Large Telescope.

Earlier this year, the Montreal-led team reanalysed existing observations to update and refine a range of parameters, including the planet’s radius and mass. The researchers found that its density is inconsistent with a purely Earth-like rocky interior, suggesting the presence of a hydrogen envelope or a water layer atop a rocky, metal-rich core.

They then embarked on a further study of the nature of the exoplanet with the NIRISS (near-infrared imager and slitless spectrograph) instrument on the James Webb Space Telescope (JWST), aiming to distinguish between the “mini-Neptune” or “water world” scenarios.

Presenting the results in The Astrophysical Journal Letters, the astronomers describe how they studied the atmosphere of LHS 1140 b using the transmission spectroscopy technique, which involves observing a planet as it transits in front of its host star.

“Our findings indicate that LHS 1140 b’s atmosphere is not dominated by hydrogen, with the most likely scenario being a nitrogen-rich atmosphere consistent with the water world hypothesis,” says lead author Charles Cadieux, a PhD student at the University of Montreal’s Trottier Institute for Research on Exoplanets, supervised by René Doyon.

Charles Cadieux

“By collecting this light at different wavelengths using a spectrograph – in this case, the NIRISS instrument on JWST – we can infer the atmospheric composition. Molecules such as H2O, CH4 [methane], CO, CO2 and NH3 [ammonia] all absorb light at specific wavelengths, allowing us to identify their presence, or absence,” Cadieux explains.
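
To see why such detections are demanding, it helps to put rough numbers on the signals involved. The back-of-envelope sketch below uses approximate, illustrative parameters for a super-Earth around a small M dwarf – not fitted values from the JWST analysis.

```python
# Rough scale of a transmission-spectroscopy signal for a super-Earth
# around a small M dwarf. All input values are approximate and illustrative.
import math

k_B = 1.38e-23                                 # Boltzmann constant, J/K
G = 6.674e-11                                  # gravitational constant
R_earth, M_earth = 6.371e6, 5.972e24           # m, kg
R_sun = 6.957e8                                # m

R_p, M_p = 1.7 * R_earth, 5.6 * M_earth        # super-Earth radius and mass
R_s = 0.21 * R_sun                             # small M-dwarf radius
T, mu = 230.0, 28 * 1.66e-27                   # K; mean molecular mass of an N2-rich air

depth = (R_p / R_s) ** 2                       # white-light transit depth
g = G * M_p / R_p ** 2                         # surface gravity
H = k_B * T / (mu * g)                         # atmospheric scale height
extra = 2 * R_p * H / R_s ** 2                 # extra depth from ~1 scale height of atmosphere

print(f"transit depth ~ {depth * 1e6:.0f} ppm, per-scale-height signal ~ {extra * 1e6:.1f} ppm")
```

The transit itself is a comfortably large dip, but the atmospheric imprint is only a few parts per million per scale height for a heavy, nitrogen-rich atmosphere – which is why no secondary atmosphere on a rocky exoplanet has been confirmed yet.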

“Looking at the whole exoplanet population, no atmosphere on a rocky, terrestrial exoplanet has yet been detected – this is hard. Our tentative result of a nitrogen-rich atmosphere on LHS 1140 b, if firmly confirmed by additional observations, would be the first such detection,” he adds.

Cadieux notes that this discovery would place LHS 1140 b as only the third known planet with an atmosphere in the habitable zone of its host star, alongside Earth and Mars, confirming that the implications for future research are “profound” and “would provide a target for studying the habitability and the potential for life to exist on rocky, water-rich exoplanets around low-mass stars”.

Super-Earth

According to Cadieux, the next step is to repeat the observations with JWST to confirm the tentative detection of a nitrogen-rich atmosphere. While the current observations use the NIRISS instrument, the team also plans to use the NIRSpec (near infrared spectrograph) instrument on JWST, which extends further into the infrared and can probe the CO2 content of the atmosphere.

Understanding the CO2 content is crucial, as CO2’s greenhouse effect controls surface temperature and the potential size of a liquid water ocean on LHS 1140 b. Cadieux notes that clear detection of CO2 will require two to three years of observations with JWST and should provide definitive proof that LHS 1140 b is a super-Earth with a significant water reservoir.

One key challenge, besides securing JWST observation time, will be addressing stellar contamination in the transmission data. “Since LHS 1140 b orbits a smaller and cooler M-type star, stellar spots on the star’s surface can form molecules like water, which can be misinterpreted as a planetary signal,” Cadieux explains. “Even with additional data in the future, we must carefully correct for stellar contamination to ensure accurate results.”

The post Icy exoplanet found to be potentially habitable appeared first on Physics World.

]]>
Research update Researchers discover that a temperate exoplanet may have an atmosphere, could be covered in ice, and may even have an ocean of liquid water https://physicsworld.com/wp-content/uploads/2024/08/2-08-24-exoplanet-LHS-1140-b.jpg newsletter1
From new physics to sustainability, particle physics looks to the future https://physicsworld.com/a/from-new-physics-to-sustainability-particle-physics-looks-to-the-future/ Thu, 01 Aug 2024 16:52:18 +0000 https://physicsworld.com/?p=115837 Katherine Skipper reports on 2024's International Conference on High Energy Physics in Prague

The post From new physics to sustainability, particle physics looks to the future appeared first on Physics World.

]]>
Two weeks ago, I travelled to Prague in the Czech Republic, where more than 1400 scientists had descended on the city for one of the biggest events in the particle-physics calendar. I was attending the International Conference on High Energy Physics (ICHEP), which takes place every two years and attracts scientists from all over the world.

Almost 1000 parallel talks were crammed into the first three days of the week-long conference, and with 13 parallel sessions, I kept myself fit running between floors to catch everything I wanted to see. I tried to see a bit of everything, from Higgs physics to neutrinos and from future colliders to cosmology.

As well as presentations of scientific results, the event featured discussions about equality, diversity and inclusion (EDI), education and outreach – all relatively new additions to the conference programme.

This was also the first ICHEP with a dedicated stream on sustainability. I spoke to Jorgen D’Hondt, a professor at the Vrije Universiteit Brussel, who delivered a presentation about the “Innovate for Sustainable Accelerator Systems” project. Climate change has brought energy consumption to the forefront of all our minds, and particle physics is no exception. D’Hondt argued that particle physicists have a responsibility to make their experiments, many of which will operate decades into the future, as efficient as possible.

“I don’t think people in the 2040s will ask us whether we are using green energy. Because by that time most of the energy will be green,” said D’Hondt. “I believe the question of that generation will be: can we as scientists demonstrate that our facilities have done all they can to reduce our energy footprint.”

After three hectic days of parallel sessions, I was grateful that the second half of the conference gave me a chance to rest my feet. We settled into a huge theatre for the plenary talks, which were longer than the parallel sessions, but each speaker still had the difficult task of condensing several years of research in their field into a 25-minute slot.

When I was asked by attendees what I was doing at ICHEP, my stock answer was that I was there to discover where the next big discovery – on a par with the Higgs boson – will come from. It’s no surprise therefore that my ICHEP highlight was a panel debate on future colliders.

The discussion featured the heads of four of the world’s biggest physics labs: Fabiola Gianotti (CERN), Lia Merminga (Fermilab in the US), Yifang Wang (the Institute of High Energy Physics (IHEP) in China) and Shoji Asai (the High Energy Accelerator Research Organization (KEK) in Japan).

From dark energy to the asymmetry between matter and antimatter, we’ve discovered that the Standard Model has some serious limitations, and the search is on for new physics. The big question for the panel was where, and from whom, this will come.

From what I saw, the frontrunner experiment is undoubtedly a “Higgs factory”. Higgs bosons have been linked to many of the puzzling gaps in the Standard Model of particle physics, and a Higgs factory is a collider that would produce them at a high rate, allowing their properties to be measured with great precision.
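
To put “high rate” in context, here is a back-of-the-envelope sketch of a Higgs factory’s yield: the number of Higgs bosons is simply the production cross-section multiplied by the integrated luminosity. The cross-section and luminosity values below are illustrative assumptions, not figures from any specific collider proposal.

```python
# Back-of-the-envelope Higgs-factory yield: N = cross-section x integrated luminosity.
# The numbers are illustrative assumptions, not taken from any collider proposal.

cross_section_fb = 200.0        # assumed e+e- -> ZH cross-section, in femtobarns
integrated_luminosity_ab = 5.0  # assumed integrated luminosity, in inverse attobarns

# 1 ab^-1 = 1000 fb^-1, so convert before multiplying
n_higgs = cross_section_fb * integrated_luminosity_ab * 1e3
print(f"Roughly {n_higgs:.0e} Higgs bosons produced")  # ~1e6 for these inputs
```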

All the organizations represented on the panel have expressed interest in building such an experiment, and Wang was particularly optimistic about China’s ability to follow through on their proposal, the Circular Electron Positron Collider (CEPC), saying “we have a mission to be one of the world leaders in particle physics”.

He also acknowledged that this could put IHEP in competition with CERN, which has its own proposal for a similar experiment called the Future Circular Collider (FCC). Perhaps the elephant in the room during this discussion was the impact that future diplomatic relations, particularly between the US and China, could have on these collider projects – something that came up frequently in other discussions during the conference.

Not everyone I spoke to was convinced that the Higgs factory should be prioritized by the field, and I saw many innovative proposals for smaller experiments. What makes particle physics so exciting is that while we’re pretty sure new physics is out there, no-one knows for sure where we should be looking for it.

While some are impatient for another big break, it’s only been 12 years since the discovery of the Higgs boson – a relatively short time in a field that is always looking several decades into the future. I’ll be keeping a watchful eye for big decisions in the next few years, as high-energy physics finds its next steps.

The post From new physics to sustainability, particle physics looks to the future appeared first on Physics World.

]]>
Blog Katherine Skipper reports on 2024's International Conference on High Energy Physics in Prague https://physicsworld.com/wp-content/uploads/2024/08/20240723_145224-scaled.jpg newsletter
Non-physicists find opportunity in the quantum industry, improving the university experience https://physicsworld.com/a/non-physicists-find-opportunity-in-the-quantum-industry-improving-the-university-experience/ Thu, 01 Aug 2024 13:47:56 +0000 https://physicsworld.com/?p=115824 We also chat about the 42nd International Conference on High Energy Physics

The post Non-physicists find opportunity in the quantum industry, improving the university experience appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features an interview with Margaret Arakawa. She is chief marketing officer at IonQ – which makes trapped ion quantum computers. An economist by training, Arakawa spent 25 years in the (classical) computing industry before joining IonQ. We chat about why she made the move to the quantum sector and about the wide range of opportunities for non-physicists in the quantum-technology industry.

Arakawa also talks about the challenges of marketing quantum technology to customers who might not understand the underlying physics and explains why the quantum industry must avoid hype.

Our second guest is Nat Mendelsohn, who represents the English Midlands on the Institute of Physics’ Student Community Panel. He talks to Physics World’s Katherine Skipper about the student experience – what is good and what can be improved. He also explains how the COVID-19 pandemic continues to have a profound impact on higher education.

Finally, I chat with Skipper about her trip to Prague for the 42nd International Conference on High Energy Physics. High on the agenda was which future collider will succeed the Large Hadron Collider.

The post Non-physicists find opportunity in the quantum industry, improving the university experience appeared first on Physics World.

]]>
Podcasts We also chat about the 42nd International Conference on High Energy Physics https://physicsworld.com/wp-content/uploads/2024/08/Margaret-and-Nat-list.jpg newsletter
Spins hop between quantum dots in new quantum processor https://physicsworld.com/a/spins-hop-between-quantum-dots-in-new-quantum-processor/ Wed, 31 Jul 2024 15:54:25 +0000 https://physicsworld.com/?p=115808 Hopping-based logic achieved at high fidelity

The post Spins hop between quantum dots in new quantum processor appeared first on Physics World.

]]>
A quantum computing system that uses the spins of holes in germanium quantum dots has been unveiled by researchers in the Netherlands. Their platform is based on a 1998 proposal by two pioneers of quantum computation theory and could offer significant advantages over today’s technologies.

Today, the leading platforms for quantum bits (qubits) are superconducting quantum circuits and trapped ions, with neutral atoms trailing slightly behind. However, all of these qubits are difficult to scale up and integrate to create practical quantum computers. For example, IBM’s Condor – perhaps today’s most powerful quantum computer – uses 1121 superconducting qubits. These must all be kept at millikelvin temperatures, and as the number of qubits grows towards the tens of thousands or even millions, the energy cost and engineering challenges of keeping processors cold become daunting. Other platforms face scaling challenges that are just as significant.

Quantum dot qubits were proposed in 1998 by Daniel Loss of the University of Basel and David DiVincenzo, then at IBM. The qubit state is defined by the quantum state of a single charge on a semiconductor quantum dot, and shuttling the charge between quantum dots allows quantum gate operations to be performed. These systems would need less cooling, and they could potentially be fabricated in semiconductor foundries.

Quantum well

Menno Veldhorst of QuTech in the Netherlands, who co-led this latest research, describes the architecture of the quantum dots. “First, we have a semiconductor heterostructure,” he explains. “This is layers of silicon, silicon–germanium and germanium in our case. And then we have a 2D sheet of germanium, which is the quantum well that we’re interested in. We can confine charges in that 2D sheet, which is maybe 60 nm thick. And then we have electric gates on top so that, if we apply a voltage to those gates, we can define a potential well in which single charges can be trapped.”

In principle, this is a very attractive design, creating a transistor that is a qubit. In practice, however, it has been very difficult to achieve. Loss and DiVincenzo originally envisaged a series of adjacent quantum dots, all subject to different magnetic fields.

“If you have two quantum dots and you put a charge in one of them it will go to its ground state,” says Veldhorst. “If I want to do a qubit operation, say rotating the spin, I can flip it to another quantum dot. But in the other quantum dot, the ground state will be pointing in a different direction. What happens as a result is that the qubit starts to make oscillations as it precesses around this new quantization axis. If I wait a certain amount of time and then pop back to the original dot, I may have flipped the spin state completely.” This procedure would allow arbitrary qubit rotations to be performed with simple applied voltages.
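
As a rough illustration of this hopping-based rotation (a toy calculation with made-up parameters, not the team’s actual model), the sketch below evolves a spin-1/2 state that hops onto a dot whose quantization axis is tilted, precesses there for a dwell time, and is then returned and measured.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def precession_unitary(axis, larmor_freq_hz, dwell_time_s):
    """Rotation of a spin-1/2 precessing about `axis` for `dwell_time_s`."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    angle = 2 * np.pi * larmor_freq_hz * dwell_time_s  # total precession angle
    n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * n_dot_sigma

# Spin prepared along +z, the quantization axis of the first dot
psi = np.array([1, 0], dtype=complex)

# The second dot's quantization axis is tilted by 0.3 rad (illustrative value)
tilt = 0.3
axis_dot2 = [np.sin(tilt), 0.0, np.cos(tilt)]

# Hop to dot 2, wait 5 ns at an assumed 100 MHz precession frequency, hop back
U = precession_unitary(axis_dot2, larmor_freq_hz=100e6, dwell_time_s=5e-9)
psi_after = U @ psi

print(f"Spin-flip probability: {abs(psi_after[1])**2:.3f}")
```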

The challenge is creating different magnetic fields on adjacent quantum dots. For the past two decades groups such as Veldhorst’s have applied high-frequency oscillating external magnetic fields to manipulate the spins of electrons, usually in silicon, as they move between quantum dots. These fields increase the power requirements and add noise.

Spin-orbit interaction

In the new work Veldhorst and colleagues looked at Loss’s idea of using holes – quasiparticles created by the absence of electrons – instead of electrons. Like electrons, holes have an energetic coupling to an external magnetic field, giving rise to two distinct energy levels that can create a qubit. However, holes have a much stronger spin-orbit interaction, in which the momentum of the hole is coupled to its spin. Furthermore, this interaction varies between quantum dots.

The team used holes in their germanium quantum dots. By simply applying a static magnetic field of 40 mT and megahertz-frequency electric fields, the researchers were able to execute quantum logic with single-qubit gate fidelities of 99.97% and two-qubit gate fidelities of 99.3%. Their principal problem today is that they cannot control the anisotropy of the quantum dots. Veldhorst explains that researchers normally try to make quantum dots perfect circles, and their next step will be to make them slightly elliptical to introduce this anisotropy. “This is another thing that, on paper, sounds nice, but whether it will be the case we don’t know,” he says. “But I think we’ve opened a new direction of research with a lot of possibilities.”

Theorist Edwin Barnes of Virginia Tech in the US believes the work is “a pretty major advance in this area of quantum computing. In the past 20 years people have focused on applying microwaves to perform operations on the qubits, rather than doing this hopping which was in Loss and DiVincenzo’s original paper. In this recent [work] they’re showing that this hopping idea works, and that it seems to work extremely well.” He believes the stochastic variation of the magnetization between quantum dots should not prove insurmountable, as “it just means you have to spend longer characterizing your device before you start using it”. The next step, he believes, is scaling up.

The research is described in Science and Nature Communications.

The post Spins hop between quantum dots in new quantum processor appeared first on Physics World.

]]>
Research update Hopping-based logic achieved at high fidelity https://physicsworld.com/wp-content/uploads/2024/07/31-07-24-hopping-spins.jpg
Tuberculosis-specific PET tracer could enable more effective treatment https://physicsworld.com/a/tuberculosis-specific-pet-tracer-could-enable-more-effective-treatment/ Wed, 31 Jul 2024 08:45:38 +0000 https://physicsworld.com/?p=115781 FDT-PET scans in animals show potential for tuberculosis diagnosis and treatment monitoring

The post Tuberculosis-specific PET tracer could enable more effective treatment appeared first on Physics World.

]]>
In 2022, more than 10.5 million people fell ill with tuberculosis (TB) and an estimated 1.3 million people died from this curable and preventable disease. TB is currently diagnosed using clinical evaluations and lab tests. While spit (sputum) tests and chest X-rays are common, a physician may also recommend a PET scan.

Current PET scans use radiotracers that could indicate TB or other diseases. A PET scan that targets TB bacteria specifically could enable more effective treatment, say researchers at the Universities of Oxford and Pittsburgh, the Rosalind Franklin Institute and the NIH.

“TB normally requires quite long treatment regimens of many months, and it can be tricky for patients to stay enrolled in these,” explains Benjamin Davis, science director for next generation chemistry at the Rosalind Franklin Institute and a professor of chemical biology at the University of Oxford. “The ability now to readily track the disease and convince a patient to stay engaged and/or for the physician to know that treatment is complete we think could prove very valuable.”

Davis and his team have developed a radiotracer, called FDT (2-[18F]fluoro-2-deoxytrehalose), that is taken up by live tuberculosis bacteria. FDT-PET scans show signal where tuberculosis bacteria are active in a patient’s lungs and measure the metabolic activity of the bacteria, which should decrease as a patient receives treatment.

“We came up with a way of designing a selective sugar for tuberculosis by understanding the enzymology of the cell wall of TB,” Davis says. “Specifically, we target enzymes in the cell wall that modify FDT, so that it effectively embeds itself selectively into the bacterium. If you like, this is a sort of self-painting mechanism where we get the pathogen to use its own enzymes to modify FDT and so ‘paint itself’ with our radiotracer.”

The researchers have characterized FDT and tested the radiotracer in preclinical trials in rabbits and non-human primates. Phase I clinical trials in humans are set to begin in the next year or so. “We are in the process of identifying partners and sites as well as addressing the differing modes of trial registration in corresponding countries,” says Davis.

Another benefit of FDT is that it can be produced from FDG – a common PET radiotracer – without specialist expertise. FDT could thus be a viable option in low- and middle-income countries with less developed healthcare systems, the researchers say, though it does require that a hospital have a PET scanner.

Writing in a press release, Clifton Barry III from the National Institute of Allergy and Infectious Diseases at the NIH, says: “FDT will enable us to assess in real time whether the TB bacteria remain viable in patients who are receiving treatment, rather than having to wait to see whether or not they relapse with active disease. This means FDT could add significant value to clinical trials of new drugs, transforming the way they are tested for use in the clinic.”

The research is published in Nature Communications.

The post Tuberculosis-specific PET tracer could enable more effective treatment appeared first on Physics World.

]]>
Research update FDT-PET scans in animals show potential for tuberculosis diagnosis and treatment monitoring https://physicsworld.com/wp-content/uploads/2024/07/31-07-24-TB-PET.jpg newsletter1
Hannah Stern: how new materials are driving the quantum revolution https://physicsworld.com/a/hannah-stern-how-new-materials-are-driving-the-quantum-revolution/ Tue, 30 Jul 2024 10:00:06 +0000 https://physicsworld.com/?p=115601 Hannah Stern explains why the quantum revolution needs a new toolkit of materials

The post Hannah Stern: how new materials are driving the quantum revolution appeared first on Physics World.

]]>
Every year one early-career researcher who’s made “exceptional contributions to experimental physics” is awarded the Henry Moseley medal and prize from the Institute of Physics (IOP). In 2023 the medal was given to Hannah Stern for her work on understanding the photophysics of semiconductors and 2D materials. As of January, Stern is an assistant professor at the Photon Science Institute at the University of Manchester, UK. She was previously a research fellow at the University of Cambridge, where she also obtained her PhD in 2017.

Stern’s group is developing new platforms for quantum technologies, with a focus on materials with quantum states that can be controlled with light. She recently led research with colleagues in the UK and Australia which showed that lattice point defects in hexagonal boron nitride (hBN) have promising properties for quantum technologies, including sensing and optical networking (Nature Materials 10.1038/s41563-024-01887-z).

Katherine Skipper caught up with Stern to discover what motivates her research, what the big challenges facing the quantum sector are, and what the ethos of her new group will be.

What first sparked your interest in quantum technology?

I’ve always been intrigued by quantum mechanics. When I was an undergraduate at Otago University in New Zealand, I majored in physical chemistry, and I gained experience with spectroscopic techniques. In these experiments, we would shine light at materials and use the data we collected to understand the electronic structure of those materials on the atomic and molecular level. I found this exciting because it felt like I could really see quantum mechanics at work.

I continued with different forms of spectroscopy for my PhD and learnt a lot more about electronic spin states in materials that could be generated via light. I didn’t start thinking about the implications of these systems for quantum technologies until later. As a postdoc, I became interested in states in materials that emit single photons at a time. That was when I realized that the physics I had been studying had applications within quantum technology.

Your research is on light–matter interfaces for quantum technologies. Why is the interaction between light and matter important for quantum applications?

Understanding the interaction between light and matter has always been central to building our understanding of quantum mechanics. Today, light–matter interactions on the single particle level are one of the most useful ways to control and use quantum phenomena for new technologies.

In the first instance, quantum technologies require qubits, which are the quantum equivalent of the classical bit. In simple terms, a qubit is a two-level system that can exist in a quantum superposition state. Qubits can be formed not only from electronic and magnetic transitions within materials but also from single photons. Both forms of qubit are being actively developed for different applications.

While light and matter qubits can be used separately for quantum technologies, we can get new functionality by harnessing the interaction between them

For example, photons are useful for applications that require transportation of quantum states (i.e. for communication technologies) because they can be sent long distances without losing quantum information, which may be encoded in the photon polarization, for example. Qubits in matter, however, such as electronic or nuclear spins, are often better at storing information for longer periods of time. Typically, they can also be controlled more easily than photons so they are more suitable for performing logical operations on the device level, for example in computing or simulation.

What this means is that while light and matter qubits can be used separately for quantum technologies, we can get new functionality by harnessing the interaction between them. A good example is “quantum optical networks”. Long-distance optical quantum networks don’t exist yet, but they could enable a quantum version of the internet, in which quantum information is distributed across a series of globally positioned nodes.

These nodes could be material qubits, which would send and receive quantum information from photons and store it locally as a quantum memory. As it turns out, a promising material system for this is atomic-scale point defects in materials, which is what I work on.

What kind of qubits in matter do you study? And how does the interaction with light enable these systems?

Qubits in matter often come in the form of electronic or nuclear spin transitions of atoms or molecules – what are called “spin qubits”. Spin is one of the non-intuitive concepts of quantum mechanics that doesn’t have a classical analogue – it refers to an intrinsic property (angular momentum) of a particle. Importantly, spin is quantized, meaning particles can only exist in a limited number of possible spin states.

What’s interesting about the spin qubits I work with is that the state of the qubit can be controlled with light

The simplest example is the spin state of a single electron, which can be spin up, spin down or a superposition of those two states. Often the spin state of an atom, defect or molecule is slightly more complex – in fact, the materials I work with are an example of this. But it’s typically still possible to think about the spin state in this way.

What’s interesting about the spin qubits I work with is that the state of the qubit can be controlled with light. We first use a laser pulse to place the qubit into a useful spin state before manipulating it with a second electromagnetic pulse and using a second laser pulse to “read-out” the new state. Known as optically detected magnetic resonance, it forms the basis of how we use the spin qubit to store quantum information or sense the environment.

The systems I work with are formed from atomic-scale point defects in an extended solid, in this case a 2D material. You could think of these defects as a missing atom in the lattice. The disruption of the lattice creates a “trapped molecule” in the crystal, which is electronically isolated and has well defined electronic transitions between optical and spin states, forming a qubit.

One appealing feature of defects in solids for quantum applications is that they work at ambient conditions, without needing low temperatures or high magnetic fields. The spin control pulse in our experiments is generally in the microwave or radio range, which is good because we can easily implement microwave or radio frequency control on a chip, and at room temperature. It’s quite hard to find other isolated quantum objects that can be manipulated at ambient conditions.

What are the applications of spin qubits?

In addition to optical networking applications, one important application is in sensing. Most sensors measure a physical quantity over a bulk, or spatially averaged, sample. An exciting feature of sensors based on single electronic spins is that the sensor itself is now atomic scale and can measure physical quantities with very high (near atomic-scale) spatial resolution.

One particularly active area of quantum sensing right now is quantum nanoscale magnetometry

One particularly active area of quantum sensing right now is quantum nanoscale magnetometry, which exploits the fact that an external magnetic field will shift the energy of the electronic spin states of a spin qubit.

By measuring the energy shift of the spin states (via optically detected magnetic resonance), we can work out the magnitude and direction of the field at a precise point in space. So by scanning the qubit over a sample, the nanoscale magnetism of materials can be visualized in a way that has not been possible before.
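
As a simple illustration of that last step (not the hBN-specific analysis), an electron spin’s transition frequency shifts by roughly 28 GHz for every tesla of applied field, so a measured frequency shift converts directly into a field magnitude. The sketch below assumes a bare electron spin and ignores zero-field splitting, field orientation and hyperfine effects.

```python
# Toy conversion from a measured ODMR frequency shift to a magnetic field magnitude.
# Assumes a bare electron spin (gyromagnetic ratio ~28 GHz/T) and ignores zero-field
# splitting, field orientation and hyperfine structure.

GAMMA_E_HZ_PER_T = 28.0e9  # electron gyromagnetic ratio, roughly 28 GHz per tesla

def field_from_shift(frequency_shift_hz: float) -> float:
    """Infer the field magnitude in tesla from a Zeeman frequency shift in hertz."""
    return frequency_shift_hz / GAMMA_E_HZ_PER_T

# A 2.8 MHz shift in the spin transition corresponds to a field of about 0.1 mT
print(f"{field_from_shift(2.8e6) * 1e3:.2f} mT")
```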

What kinds of materials have spin properties that can be used in quantum technologies?

Several factors determine if a material will be useful for quantum technologies, and they also depend on the technology you are talking about. Typically, if a material that hosts spin qubits is going to be useful for a quantum application, it needs to have a significant spin coherence time, which is how long a spin can hold on to quantum information.

For sensing, generally the longer the coherence time the more sensitive the sensor. For optical networking, the spin coherence time limits how far the photon can be sent in the network. For both applications, typically a coherence time of at least milliseconds is desirable.
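
A crude way to see why milliseconds matter for networking (a rough bound, not a figure from the interview): the matter qubit must stay coherent at least as long as the photon takes to traverse the fibre link, so the coherence time multiplied by the signal speed caps the usable distance.

```python
# Rough upper bound on a quantum-network link length set by the spin coherence time.
# Assumes the memory only needs to stay coherent for a single one-way photon transit;
# real protocols (heralding, round trips, losses) are considerably more demanding.

SPEED_IN_FIBRE_M_PER_S = 2.0e8  # light travels at roughly two-thirds of c in fibre

def max_link_length_km(coherence_time_s: float) -> float:
    """Distance in km a photon can cover in fibre within one coherence time."""
    return coherence_time_s * SPEED_IN_FIBRE_M_PER_S / 1e3

print(f"{max_link_length_km(1e-3):.0f} km")  # a 1 ms coherence time gives ~200 km
```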

The coherence time of a spin qubit can be limited by a range of environmental factors. For solid-state spin qubits based on defects, the most common limitation is magnetic noise from neighbouring nuclear spins and electronic impurities. This means that the spin needs to be isolated, so we typically use materials with a low abundance of spin-active nuclei. Diamond is well known for this.

You’ve recently investigated the spin properties of defects in hBN: what makes it attractive for quantum applications?

hBN is a 2D material with a wide band gap. Because it’s an insulator, we wouldn’t expect it to interact with visible wavelengths of light, but in about 2016 localized emission of visible light was observed in a hBN lattice (Nature Nanotechnology 11 37).

The idea was put forward that this emission comes from atomic-scale defects in the lattice. By the time I got interested, quite a lot of work had been done to optically characterize the defects, but we didn’t know whether this optical transition could interact with an electronic spin state of the defect.

Lattice of hexagonal boron nitride

At that point, there was the nitrogen-vacancy centre in diamond and a couple of other defects in silicon carbide, but not many other defects in materials that offered optically addressable electronic spin qubits.

In our most recent paper, we’ve shown that there are spin states of these defects that we can initialize, control and then read out using light. And these defects can store quantum information for microseconds at room temperature.

In addition, many spin qubits require a magnetic field to operate, either to initialize the qubit or to separate the spin qubit transitions from other electronic transitions. Here, however, we can control the spin without any magnetic field. While the coherence times are still relatively short compared to other systems, this is all being performed under ambient conditions and in a brand new material platform.

This is also the first time spin defects with these properties have been studied in a layered material as opposed to a bulk crystal. 2D materials offer natural advantages for fabrication because they can be easily moved from surface to surface and incorporated with other materials and optical components, such as waveguides or cavities, which we may like to use in future to improve the collection of light from the defects. It can be extremely challenging to couple photons coming from the middle of a crystal to such components.

All of these things are quite exciting because they speak to this type of physics being available in a device that might operate in real-world conditions.

What further work is needed to take hBN out of the lab and into real-world devices?

We have a lot to do. We’re interested in technologies that use both the spins and the photons of the hBN defects, and while we’ve made some strides in understanding the spin physics, we’ve got a lot more to understand on the optical properties. A big challenge before us is identifying the chemical structure of the defect. We believe it’s related to carbon inserted in the lattice but we don’t know the exact arrangement of the carbon atoms in the hBN lattice and that’s what we’re working hard to determine in collaboration with theorists.

Defects in the 2D lattice structure of hexagonal boron nitride

The next thing will also be to control the growth of the hBN better and we’re working with collaborators on that. We want to create a defect exactly where we want it in the 2D material. This is a massive challenge but we’re hopeful that the 2D material will enable this.

Further work will also involve working with companies who are interested in this space, eventually moving towards implementing this material in a full device.

The quantum sector has been growing rapidly but what are the biggest challenges in commercializing the technology?

We are at a point now as a community where a lot has been demonstrated in laboratories and many of the existing challenges are about scaling. For quantum computation with solid-state spins, this means scaling to more and more qubits, while for optical networking, it’s about scaling to more and more nodes.

To achieve scalability, we need to interface different materials with each other and interface quantum systems with classical electronics and optical components

There are also challenges related to materials development. To achieve scalability, we need to interface different materials with each other and interface quantum systems with classical electronics and optical components. One part of achieving this that I’m particularly interested in is identifying new quantum systems in new materials that offer advantages or alternative possibilities alongside the existing ones.

We’re seeing that this is an exciting new direction in the field – to look at other material systems and explore these to build a broader material toolkit.

You’ve recently set up your own group in Manchester. What do you want its ethos to be?

The group is multidisciplinary and interdisciplinary and I think that’s important. We cover physics, chemistry, materials science and electronic engineering. Having a lot of different views in the room and scientific backgrounds encourages creativity, and it also contributes to scientific risk-taking. In some respects, the work on defects in hBN was risk-taking at the beginning because we didn’t know what we would find.

I’m also hoping to build a group where anyone from any background, nationality or minority is welcome. I think it’s important the group be broadly inclusive and representative of society. I’ll work hard to make sure it stays that way.

The post Hannah Stern: how new materials are driving the quantum revolution appeared first on Physics World.

]]>
Interview Hannah Stern explains why the quantum revolution needs a new toolkit of materials https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Stern-Hannah.jpg newsletter
Graphene switch combines logic and memory functions in a single device https://physicsworld.com/a/graphene-switch-combines-logic-and-memory-functions-in-a-single-device/ Tue, 30 Jul 2024 08:11:23 +0000 https://physicsworld.com/?p=115765 New device exploits the carbon material’s ability to conduct both electrons and protons

The post Graphene switch combines logic and memory functions in a single device appeared first on Physics World.

]]>
Researchers at Manchester University in the UK have used graphene to make a new electrically-controlled switching device that supports both memory and logic functions. The device, which exploits graphene’s ability to conduct protons as well as electrons, might also be used in applications that involve an electrode-electrolyte interface, such as reducing carbon dioxide to its component chemical species.

Graphene is a two-dimensional sheet of carbon atoms arranged in a honeycomb-like hexagonal lattice. One unique property of graphene is that electrons move through the plane of this sheet almost ballistically, making it a better conductor than metals. Another fascinating property is that when an electric field is applied between the top and the bottom of the sheet, protons from an adjacent polymer or electrolyte will flow through it in a perpendicular direction.

These flows of particles are not independent, however. Some of the protons bind to the electrons, and this binding process, known as hydrogenation, produces defects that scatter and slow the remaining unbound electrons. When the number of bound electrons reaches a threshold, the material turns into an insulator. The material’s conductivity can then be restored by applying an electric field in the plane of the graphene sheet, injecting more electrons into it.

Controlling the movement of the electrons and protons independently

Researchers led by Marcelo Lozada-Hidalgo have now exploited these proton and electron flows to perform logic and memory operations in a single device for the first time. The device consists of a micron-scale graphene layer sandwiched between proton-conducting electrolytes that are connected to gate electrodes on the top and bottom of the device. Additional electrodes placed at the device’s edges induce electrons to flow through the graphene sheet. This arrangement allows the researchers to simultaneously measure the graphene sheet’s in-plane electrical conductivity and the degree to which protons permeate it in the out-of-plane direction.

Experimental set-up, alongside measurements of electron and proton transport as a function of the charge density n, the electric field E, and the sum and difference of the top and bottom gate voltages.

Crucially (and unexpectedly, Lozada-Hidalgo says), the two-gate set-up also gave the researchers independent control over proton transport and the electron-proton binding that determines whether graphene is a conductor or an insulator. “We can drive proton transport without hydrogenating graphene or hydrogenate graphene without driving proton transport, or both,” he tells Physics World.

The source of this independent control, he explains, is that both hydrogenation and proton transport depend on the electric field E and the charge density n of the graphene. Using a non-aqueous electrolyte allows both E and n to be extremely high, which effectively distorts the energy profile for these processes. And while E depends on the difference between the top and bottom gate voltages, n depends on their sum, making it possible to tune E and n independently simply by altering the voltages. “Such control is impossible otherwise and is so robust and reproducible that we can exploit it to perform proton-based logic-and-memory operations in graphene,” Lozada-Hidalgo says.

A very different computing platform

In the latest study, which is published in Nature, the researchers demonstrated this capability by using the graphene layer’s conducting or insulating status as a “memory” state. At the same time, they used the proton current to perform a logic operation called exclusive OR (XOR), which outputs a “1” when the number of inputs with a value of 1 is odd, and a “0” otherwise. Hence, when the top and bottom electrode voltages differed, the XOR operation yielded a 1, and a strong proton current flowed – without changing the state of the memory.
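
The toy model below is only a sketch of the behaviour described above, not the team’s device physics: the XOR output (the proton current) is computed from the two gate inputs on every step, while the memory state (conducting or insulating graphene) is rewritten only by a separate, assumed combination of the same inputs.

```python
# Toy logic-and-memory element inspired by the reported behaviour. Purely illustrative:
# in the real device the memory (conductor vs insulator) and the proton current are set
# by the charge density n and field E, tuned via the sum and difference of gate voltages.

def device_step(top_gate: int, bottom_gate: int, memory_state: str):
    """Return (xor_output, new_memory_state) for binary gate inputs."""
    xor_output = top_gate ^ bottom_gate  # proton current flows when the inputs differ
    # Assumed write rule for this sketch: driving both gates together toggles the memory,
    # mimicking control via the voltage sum; XOR read-out leaves the memory untouched.
    if top_gate == 1 and bottom_gate == 1:
        memory_state = "insulating" if memory_state == "conducting" else "conducting"
    return xor_output, memory_state

state = "conducting"
for inputs in [(0, 1), (1, 0), (1, 1), (0, 0)]:
    out, state = device_step(*inputs, state)
    print(inputs, "-> XOR:", out, "| memory:", state)
```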

The fact that both logic and memory operations occur in the same device is significant, Lozada-Hidalgo says, because these functions are usually performed by separate circuit elements that are physically isolated from each other within a computer. This can lead to long data-transfer times and high power consumption. “Our work could perhaps enable low-cost analogue computing structures that operate on protons,” he says. “At the very least, it is a very different computing platform that does not require silicon and can be implemented in very simple and potentially cheap devices.”

The fact that it uses protons, rather than electrons as in conventional circuits, could also make it possible to couple these devices with biological systems or electrochemical interfaces, he adds.

“A host of application areas”

While the Manchester researchers have so far only demonstrated these processes in graphene, they say that any 2D crystal could be studied in this way. “This represents a great opportunity for investigating electrode-electrolyte interfaces in a large group of materials and over a parameter space that is inaccessible in classical interfaces,” Lozada-Hidalgo says.

As well as memory-logic devices, he adds, the effect could be of interest in “a host of application areas”, including nanofluidics, catalysis, electrochemistry and surface science. “It’s a new technological capability in our discipline,” he says. “The electrochemical processes at play can be linked to the electronic properties of the 2D crystals because they can induce conductor-insulator phase transitions or strongly dope the materials and their heterostructures. We are looking into these possibilities now.”

The post Graphene switch combines logic and memory functions in a single device appeared first on Physics World.

]]>
Research update New device exploits the carbon material’s ability to conduct both electrons and protons https://physicsworld.com/wp-content/uploads/2024/07/graphene-104491904-shutterstock_billdayone-compressed_small.jpg newsletter1
Shifting day-night cloud patterns may be making climate change worse https://physicsworld.com/a/shifting-day-night-cloud-patterns-may-be-making-climate-change-worse/ Mon, 29 Jul 2024 08:00:44 +0000 https://physicsworld.com/?p=115733 Asymmetric effects of surface warming on cloud cover may amplify it, say researchers

The post Shifting day-night cloud patterns may be making climate change worse appeared first on Physics World.

]]>
It is a curious fact of climate science that clouds can both cool the Earth’s surface and keep it warm. This apparent paradox occurs because during the day, clouds reflect shortwave sunlight back into space thanks to the albedo effect, whereas at night they act like blankets, trapping longwave radiation close to the surface.

Researchers from Sun Yat-sen University in China and Leipzig University in Germany have now found that as the climate changes, cloud cover – especially in the lower atmosphere – decreases more during the day than at night. This asymmetry, they say, could contribute to a feedback spiral that makes the planet warmer still.

“Our findings show that there is an even greater need to reduce greenhouse gases,” explains Johannes Quaas, the meteorologist who led this study, “because not only does cloud cover respond to warming, it also amplifies warming through this new effect.”

Daily variations

Other studies have previously shown that nighttime temperatures are increasing faster than daytime ones. The reason for this is not yet clear, however, as several feedback processes (including changes in cloud cover, atmospheric humidity, soil moisture and aerosol emissions) may be contributing to it.

In their work, which they detail in Science Advances, Quaas and colleagues studied how daily variations in cloud cover affect climate. The diurnal asymmetry in cloud cover they identified could stem from several factors, but the main one is that rising concentrations of greenhouse gases have made the lowest layer of the atmosphere (known as the lower troposphere) more stable. “This enhanced stability is having a negative effect overall,” says Quaas. “Fewer clouds are forming during the day (so reducing their sunlight-reflecting effect), but they are remaining more stable at night (so increasing their blanket effect, in relative terms).”

The researchers obtained their results by analysing satellite observations and data from the sixth phase of the Coupled Model Intercomparison Project (CMIP6), which incorporates both historical data collected between 1970 and 2014 and projections up to the year 2100. “We found some day-night differences and started to wonder whether there might be a systematic effect in a warming climate,” Quaas says.

The fact that these differences had not been studied before, he adds, may be due to the way climate researchers tend to analyse data. “When looking at global climate, one typically uses one time of day, or averages over time and analyses the geographical distribution,” Quaas explains. “Climate models for their output, and satellite observations, thus choose one time zone for the entire globe – typically UTC (Greenwich time). It took us the extra step to convert this to local time to see our result. Not a very elaborate idea, admittedly, but one that turned out to be revealing.”

While the asymmetry effect is not huge, it is systematic, he tells Physics World. “Effects such as those on solar energy are not going to be overwhelming, but are an important supplement to global warming,” he says.

The researchers now want to explore other factors – beyond the changes in atmospheric stability that they have identified – that may also be driving changes to cloud cover. “One such reason could be deforestation, for example,” Quaas suggests.

The post Shifting day-night cloud patterns may be making climate change worse appeared first on Physics World.

]]>
Research update Asymmetric effects of surface warming on cloud cover may amplify it, say researchers https://physicsworld.com/wp-content/uploads/2024/07/blue-sky-and-sun-9506774-iStock_ELyrae.jpg
X(3960) is a tetraquark, theoretical analysis suggests https://physicsworld.com/a/x3960-is-a-tetraquark-theoretical-analysis-suggests/ Sun, 28 Jul 2024 14:03:01 +0000 https://physicsworld.com/?p=115738 Calculations reproduce particle’s mass and lifetime

The post X(3960) is a tetraquark, theoretical analysis suggests appeared first on Physics World.

]]>
A theoretical study suggests that a particle observed at CERN’s LHCb experiment in 2022 is indeed a tetraquark – supporting earlier hypotheses that were based on the analysis of its observed decay products. Tetraquarks comprise four quarks and do not fit into the conventional classification of hadrons, which defines only mesons (a quark and an antiquark) and baryons (three quarks). Tetraquarks are of great interest to particle physicists because their exotic nature provides opportunities to deepen our understanding of the intricate physics of the strong interactions that bind quarks together in hadrons.

“X(3960) is a new hadron discovered at the Large Hadron Collider (LHC),” Bing-Dong Wan of Liaoning Normal University and Hangzhou Institute for Advanced Study, and the author of the study, tells Physics World. “Since 2003, many new hadrons have been discovered in experiments, and some of them appear to be tetraquarks, while only a few can be confirmed as such.”

Named for its mass of 3.96 GeV – about four times that of a proton – X(3960) stands out, even amongst exotic hadrons. Its decay into D mesons containing heavy charm quarks implies that X(3960) should itself contain charm quarks. The details of the interaction of charm quarks with other strongly interacting particles are rather poorly understood, making X(3960) interesting to study. Additionally, by the standards of unstable strongly interacting particles, X(3960) has a long lifetime – around 10⁻²³ s – indicating unique underlying quark dynamics.

These intriguing properties of X(3960) led Wan to investigate its structure theoretically to determine whether or not it is a tetraquark. In a recent paper in Nuclear Physics B, he describes how he used Shifman-Vainshtein-Zakharov sum rules in his calculations. This approach examines strongly interacting particles by relating their properties to those of their constituent quarks and the gluons that bind them together. The dynamics of these constituents can be accurately described by the fundamental theory of strong interactions known as quantum chromodynamics (QCD).

Wan assumed that the X(3960) is composed of a strange quark, a charm quark and their antiparticles. Using the sum rules, he derived its mass and the lifetime to compare these parameters with the observed values.

Mathematical machinery

Using the mathematical machinery of QCD and extensive numerical simulations, he found that the mass of the tetraquark he formulated is 3.98 ± 0.06 GeV. This is a close match to the measured mass of X(3960) of 3.956 ± 0.005 GeV, supporting the identification of X(3960) as comprising a strange quark, a charm quark and their antiparticles. Furthermore, Wan was able to compute the lifetime of his model particle to be (1.389 ± 0.889) × 10⁻²³ s, which aligns well with the observed value of 1.53 (+0.41/−0.26) × 10⁻²³ s, further validating his identification.
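
As a quick consistency check on the numbers quoted above (simple propagation of the stated uncertainties, not part of Wan’s analysis), the predicted and measured masses can be compared in units of their combined error:

```python
from math import hypot

# Compare the predicted X(3960) mass with the measured value, expressing the
# difference in units of the combined (quadrature-summed) uncertainty.
predicted_mass, predicted_err = 3.98, 0.06    # GeV
measured_mass, measured_err = 3.956, 0.005    # GeV

combined_err = hypot(predicted_err, measured_err)
discrepancy = abs(predicted_mass - measured_mass) / combined_err
print(f"Masses agree to within {discrepancy:.1f} standard deviations")  # about 0.4
```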

While Wan’s work strongly supports the hypothesis that X(3960) is a charm–strange tetraquark, he acknowledges that it is not conclusive proof. In the subatomic world, particles can transform into one another, so matching the quark content of his model tetraquark to the decay products of X(3960) is not enough. Indeed, in principle, X(3960) could be even better described by some other quark composition.

“There are many possible structures for tetraquarks, and my work finds that one possible structure can explain the properties of X(3960),” says Wan. “But some other researchers may be able to explain the properties of X(3960) using different quark structures.”

To further validate his approach, Wan applied the sum rule technique to a particle similar to X(3960), called X(4140), previously discovered at the Tevatron collider. His calculations yielded mass and lifetime values very close to the measured ones, further confirming his method’s accuracy.

However, to definitively determine the structure of X(3960), further theoretical and experimental studies are needed. Analysing a larger number of decay events will help reduce measurement errors. On the theoretical side, using the sum rules or other QCD techniques to more accurately analyse these parameters will help reduce computational uncertainties.

“Studying new hadrons may greatly enrich the hadron family and our knowledge of the nature of strong interactions,” Wan concludes. “It is highly expected that we are now at the dawn of enormous discoveries of novel hadronic structures, implying a renaissance in hadron physics.”

The post X(3960) is a tetraquark, theoretical analysis suggests appeared first on Physics World.

]]>
Research update Calculations reproduce particle’s mass and lifetime https://physicsworld.com/wp-content/uploads/2024/07/28-07-24-tetraquark.jpg newsletter1
Physicist Rosemary Fowler honoured 75 years after discovering the kaon particle https://physicsworld.com/a/physicist-rosemary-fowler-honoured-75-years-after-discovering-the-kaon-particle/ Sat, 27 Jul 2024 09:00:16 +0000 https://physicsworld.com/?p=115741 Fowler left academia during her PhD but has now been awarded an honorary doctorate from the University of Bristol

The post Physicist Rosemary Fowler honoured 75 years after discovering the kaon particle appeared first on Physics World.

]]>
The physicist Rosemary Fowler has had to wait three quarters of a century to be honoured for her role in discovering a subatomic particle.

Fowler was doing a PhD at the University of Bristol in 1948 under the supervision of physicist Cecil Powell when she stumbled upon the particle.

The then 22-year-old physicist spotted unusual particle tracks in photographic emulsions that had been exposed to cosmic rays at high altitude in Switzerland.

She discovered a particle that decayed into three pions and labelled the track ‘k’, with the particle now known as the K-meson or “kaon”.

“I knew at once that it was new and would be very important,” Fowler noted. “We were seeing things that hadn’t been seen before – that’s what research in particle physics was. It was very exciting.”

The results were published in two papers in Nature with Fowler (née Brown) as first author. She then decided to leave university and married fellow Bristol physicist Peter Fowler – the grandson of Ernest Rutherford – in 1949. They had three children, all of whom went on to study science. Peter died in 1996.

This week Fowler, who is 98, was finally honoured for her work. She received an honorary doctorate from Bristol University in a private graduation ceremony held near her Cambridge home.

Fowler said she felt “very honoured” by the doctorate, but added humbly that she hadn’t “done anything since to deserve special respect”.

The post Physicist Rosemary Fowler honoured 75 years after discovering the kaon particle appeared first on Physics World.

]]>
Blog Fowler left academia during her PhD but has now been awarded an honorary doctorate from the University of Bristol https://physicsworld.com/wp-content/uploads/2024/07/Rosemary-Fowler-with-daughter-Mary-Fowler-2.jpg newsletter
How do electromagnetic waves carry information about objects they interact with? https://physicsworld.com/a/how-do-electromagnetic-waves-carry-information-about-objects-they-interact-with/ Fri, 26 Jul 2024 16:00:12 +0000 https://physicsworld.com/?p=115731 Theorists develop new continuity equation to help them visualize the process

The post How do electromagnetic waves carry information about objects they interact with? appeared first on Physics World.

]]>
As electromagnetic waves travel, they collect information about their environment. This property is widely exploited in a host of applications that rely on waves being deflected, scattered or reflected off their surroundings, but it comes with a challenge: how can we extract as much of this information as possible?

Researchers in Austria and France have now developed a new mathematical formalism that may help answer this question. The information the wave carries about its environment is known as Fisher information, and the new formalism makes it possible to visualize how waves collect this information from objects they interact with as they travel.

“The basic idea is quite simple: you send a wave at an object and the part of the wave that is scattered back from the object is measured by a detector,” explains theoretical physicist and study lead Stefan Rotter from TU Wien. “The data can then be used to learn something about the object – for example, its precise position, speed or size.”

Continuity effects

Rotter and colleagues found that a wave’s Fisher information obeys a “continuity equation”, meaning that the information contained in the wave is preserved as it propagates and follows laws not dissimilar to those of energy conservation. This continuity equation allowed the researchers to calculate precisely where in the wave the information is actually located. They also discovered that different types of information on the properties of an object (such as its position, speed and size) are carried in different portions of the wave and that this information depends, with high precision, on how strongly the wave is affected by specific object properties.
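
For readers who want the quantity being tracked: the classical Fisher information for a parameter θ, estimated from measurement outcomes x with likelihood p(x|θ), has the standard definition shown below. The continuity equation is written here only schematically – a local information density transported by a flux, with a source where the wave meets the object – and is not the exact expression from the team’s paper.

```latex
% Standard definition of the Fisher information for a single parameter \theta
I(\theta) = \int p(x \mid \theta)
            \left( \frac{\partial \ln p(x \mid \theta)}{\partial \theta} \right)^{2} \mathrm{d}x

% Schematic continuity equation: an information density \rho_F is transported by a
% flux \mathbf{J}_F, with a source term \Sigma_F localized on the scattering object
\frac{\partial \rho_F}{\partial t} + \nabla \cdot \mathbf{J}_F = \Sigma_F
```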

“For example, if we want to measure whether an object is a little further to the left or a little further to the right, then the Fisher information is carried precisely by the part of the wave that comes into contact with the right and left edges of the object,” says TU Wien team member Jakob Hüpfl. “This information then spreads out, and the more of this information reaches the detector, the more precisely the position of the object can be read from it.”

Experimental tests in microwaves

The researchers tested their theory experimentally with help from collaborators in Ulrich Kuhl’s group at the University of Côte d’Azur in Nice. “We suggested that Felix Russo, a new master’s student in our group, would be interested in carrying out the experimental part of the work and Kuhl agreed to host and supervise him,” says Rotter.

Russo began by sending microwaves through a structure made up of several randomly-positioned Teflon objects and a single metallic rectangle. The aim of the experiment was to determine the position of this rectangle by analysing data received at a detector on the other side of the structure.

“By precisely measuring the microwave field, it was possible to show exactly how the information about the horizontal and vertical position of the rectangle spreads: it emanates from the respective edges of the rectangle and then moves along with the wave – without any information being lost,” Russo says.

Applications in different fields

Rotter says the team’s formalism opens up a new way of thinking about how waves retrieve information from their environment. “Since this is such a widely used concept – ranging from seismology to biomedical imaging and from radar technology to quantum sensing – I expect our results to be useful in quite a broad range of different fields,” he tells Physics World.

The Vienna-Nice researchers are now working on extending their theory to multi-parameter sensing protocols and applying it to specific experimental settings. “Our goal is to demonstrate the advantages that can be gained in terms of designing an experiment and in improving the resulting measurement precision by using our technique,” Rotter says.

The present study is detailed in Nature Physics.

The post How do electromagnetic waves carry information about objects they interact with? appeared first on Physics World.

]]>
Research update Theorists develop new continuity equation to help them visualize the process https://physicsworld.com/wp-content/uploads/2024/07/Low-Res_Fisherinformation-Graphik-Rotter.jpeg newsletter1
Ask me anything: Andrew Weld: ‘You’re probably going to be working for over 40 years, so make sure you enjoy what you do’ https://physicsworld.com/a/ask-me-anything-andrew-weld-youre-probably-going-to-be-working-for-over-40-years-so-make-sure-you-enjoy-what-you-do/ Fri, 26 Jul 2024 13:00:44 +0000 https://physicsworld.com/?p=115630 Andrew Weld is the head of research and development at QLM technologies

The post Ask me anything: Andrew Weld: ‘You’re probably going to be working for over 40 years, so make sure you enjoy what you do’ appeared first on Physics World.

]]>
Andrew Weld

What skills do you use every day in your job?

In terms of physics, I manage most of our research and development projects, so I have to be able to give a technical overview of our work and explain how everything fits within the company’s goals.

In a small start-up, those goals and priorities change rapidly. This can be based on feedback from customers, but when you’re trying to get a new product to market there are always going to be fires to put out and technical issues to solve, whether that’s hardware, software or component supply.

Also, while the technical aspects of my job are important, I need to have a good feel for time and budget, and be adaptable and flexible as circumstances change.

For example, we recently had an issue where some of our components came back from the supplier and weren’t working correctly. In the long term, you can consider other suppliers or you can work with your current supplier to find a solution, but you also need to fix the immediate problem.

We found that we could change the way we were driving the component, which meant the performance wasn’t optimal but we could still sell the products at a concession or use them for demonstrations. Managing those kinds of trade-offs is a big part of bringing a product to market.

The other important thing is people skills – I’ve found that just a little bit of diplomacy goes a long way. Working out how to handle people is quite a skill, and it’s something that not everyone has.

What do you like least and best about your job?

My favourite aspect of working at QLM is that I believe we are on a worthwhile mission. We want our technology to be part of the solution to climate change, and that provides an extra bit of motivation.

I also get to work with highly skilled scientists and engineers to develop our products and push the boundaries of our technology. It’s also great to be part of the industry community and represent QLM at industry events.

The other thing I enjoy is problem-solving. This is still a fundamental part of my job, but in my current role I will often delegate rather than tackling the problem myself. This can be less satisfying but I have realized that sometimes it’s more effective to use your experience to steer things in the right direction, rather than doing the frontline work.

QLM’s lidar systems have a lot of components that need to function well together. So I’ll often use my experience to guide junior staff, who will perform labour-intensive stages of assembling, programming and testing of our hardware.

What do you know today you wish you knew when you were starting your career?

You have to decide whether you’re happy in your job. Sometimes the best thing to do is to recognize that if something’s not working out for you, it’s time to change.

It’s easy to let yourself stagnate if you keep thinking “I’m sure this is going to change soon”. That’s something that we’re trying very hard to not let happen at QLM but it’s an issue I’ve seen elsewhere. We try to support our staff to develop new skills and to grow with the company, and we seek feedback from them to make sure they are happy in their work.

I look back and think I should have been a bit more ambitious, instead of staying in the lab doing the technical work. I would probably go back and tell myself: “You know what? You recognize that you’ve hit the limit of what you can achieve in that role”. At QLM I enjoy being part of the senior leadership team, where I have responsibility both for developing the hardware and shaping the evolution of the company.

So my advice would be, don’t be afraid to push for personal development with your line managers. If that doesn’t work, you might need to look elsewhere to put your career first. You’re probably going to be working for over 40 years, so make sure you enjoy what you do.

The post Ask me anything: Andrew Weld: ‘You’re probably going to be working for over 40 years, so make sure you enjoy what you do’ appeared first on Physics World.

]]>
Interview Andrew Weld is the head of research and development at QLM technologies https://physicsworld.com/wp-content/uploads/2024/07/2024-07-AMA-Andrew-Weld-listings.png newsletter
US plasma physicists propose construction of a ‘flexible’ stellarator facility https://physicsworld.com/a/us-plasma-physicists-propose-construction-of-a-flexible-stellarator-facility/ Fri, 26 Jul 2024 07:39:01 +0000 https://physicsworld.com/?p=115668 The Flexible Stellarator Physics Facility would test different approaches to stellarator magnetic confinement

The post US plasma physicists propose construction of a ‘flexible’ stellarator facility appeared first on Physics World.

]]>
A group of 24 plasma physicists has called for the construction of a stellarator fusion facility in the US. The so-called Flexible Stellarator Physics Facility would test different approaches to stellarator confinement and whether some of the designs could be scaled up to a fusion plant.

Tokamak and stellarator fusion devices both emerged in the early 1950s. Both use magnetic confinement to hold the plasma in place, but they differ in the geometry of their confining fields. Tokamaks use toroidal and poloidal magnetic fields generated by external magnets and by the electric current flowing through the plasma itself, while stellarators apply a helical magnetic field produced by external coils alone.

Those different geometries give each approach a specific advantage. Tokamaks maintain the plasma temperature more effectively while stellarators do a better job of ensuring the plasma’s stability.

The ITER fusion reactor, currently being built in Cadarache, France, is the largest and most ambitious of the roughly 60 tokamak experiments worldwide. Yet there are only a handful of stellarators operational, the most notable being Germany’s Wendelstein 7-X device, which switched on in 2015 and has since achieved significant experimental advances.

The authors of the white paper write that delivering the “ambitious” US decadal strategy for commercial fusion energy, which was released in 2022, will require “a persuasive” stellarator programme in addition to supporting tokamak advances.

Tokamaks and stellarators “are very close relatives, with many aspects in common,” says Felix Parra Diaz, the lead author of the white paper. “Physics discoveries that benefit one are usually of interest to the other.”

Yet Parra Diaz, who is head of theory at the Princeton Plasma Physics Laboratory and carries out research on both tokamaks and stellarators, told Physics World that recent advances, especially at Wendelstein 7-X, are strengthening the case for the stellarator as the best route to a fusion power plant.

“Stellarators were widely considered to be difficult to build due to their complex magnets,” says Parra Diaz. “We now think that it is possible to design stellarators with similar or even better confinement than tokamaks. We also believe that it is possible to construct these devices at a reasonable cost due to new magnet designs.”

Multi-stage process

The white paper calls on the US to build a “flexible facility” that would test the validity of theoretical models that suggest where stellarator confinement can be improved and also where it fails.

The design will focus on “scientific gaps” on the path to stellarator fusion. One particular target is the demonstration of “quasi-symmetry” magnetic configurations, which the paper describes as “the most promising strategy to minimize both neoclassical losses and energetic particle transport.”

The authors of the white paper propose a two-stage approach to the new facility. The first stage would involve exploring a range of flexible magnetic configurations while the second would involve upgrading the heating and power systems to further investigate some of the promising configurations from the first stage.

“It will also serve as a testbed for methods to control how the hot fusion plasma interacts with the walls of stellarator pilot plants,” adds Parra Diaz, who says that designing and building such a device could take between six and nine years depending on “the level of funding”.

The move by the group comes as significant delays push back the international ITER fusion reactor’s switch-on to 2034, almost a decade later than the previous “baseline”.

At the same time alternative tokamak technologies continue to emerge from commercial fusion firms. Tokamak Energy of Abingdon, Oxfordshire, for example, is developing a spherical tokamak design that, the company claims, “is more efficient than the traditional ring donut shape.”

The post US plasma physicists propose construction of a ‘flexible’ stellarator facility appeared first on Physics World.

]]>
News The Flexible Stellarator Physics Facility would test different approaches to stellarator magnetic confinement https://physicsworld.com/wp-content/uploads/2024/07/torus_w7x_2021-small.jpg newsletter
Zap Energy targets fusion power without magnets, Claudia de Rham on the beauty of gravity https://physicsworld.com/a/zap-energy-targets-fusion-power-without-magnets-claudia-de-rham-on-the-beauty-of-gravity/ Thu, 25 Jul 2024 15:09:35 +0000 https://physicsworld.com/?p=115727 In this podcast we chat about small-scale fusion and a familiar yet mysterious force

The post Zap Energy targets fusion power without magnets, Claudia de Rham on the beauty of gravity appeared first on Physics World.

]]>
Our first guest in this episode of the Physics World Weekly podcast is Derek Sutherland, who is head of FuZE-Q physics at the US-based company Zap Energy. He explains how the firm is designing a fusion system that does not rely on magnets, cryogenics or high-powered lasers to generate energy. We also chat about the small-scale fusion industry in general, and about career opportunities for physicists in the sector.

This episode also features an interview with theoretical physicist and author Claudia de Rham. She talks to Physics World’s Matin Durrani about her new popular-science book The Beauty of Falling. They also chat about her research, which addresses a range of fundamental problems associated with gravity – from quantum to cosmological scales.

Sponsor logo

 

This episode is supported by Pfeiffer Vacuum. The company provides all types of vacuum equipment, including hybrid and magnetically-levitated turbopumps, leak detectors and analysis equipment, as well as vacuum chambers and systems. You can explore all of its products on the Pfeiffer Vacuum website.

The post Zap Energy targets fusion power without magnets, Claudia de Rham on the beauty of gravity appeared first on Physics World.

]]>
Podcasts In this podcast we chat about small-scale fusion and a familiar yet mysterious force https://physicsworld.com/wp-content/uploads/2024/07/25-07-25-Claudia-and-Derek.jpg newsletter
Gianluigi Botton: maintaining the Diamond synchrotron’s cutting edge https://physicsworld.com/a/gianluigi-botton-maintaining-the-diamond-synchrotrons-cutting-edge/ Thu, 25 Jul 2024 09:00:46 +0000 https://physicsworld.com/?p=115647 Gianluigi Botton, head of the Diamond Light Source in the UK, on the £519m Diamond-II upgrade and what it means for the national synchrotron science facility

The post Gianluigi Botton: maintaining the Diamond synchrotron’s cutting edge appeared first on Physics World.

]]>
What is the Diamond Light Source?

The Diamond Light Source, which opened in 2007, is a 3 GeV synchrotron that provides intense beams of light that are used by researchers to study a wide range of materials. Diamond serves a user community of around 14,000 scientists working across all manner of fundamental and applied disciplines – from clean-energy technologies to pharma and healthcare; from food science to structural biology and cultural heritage.

And now you are planning a major upgrade, Diamond-II – what does that involve?

Diamond-II will consolidate our position as a world-leading facility, ensuring that we continue to attract the best researchers working in the physical sciences, life sciences and industrial R&D. At £519m, it’s an ambitious programme that will add three new beamlines – taking the total to some 35 – along with a comprehensive series of upgrades to the optics, detectors, sample environments, sample-delivery systems and computing resources across Diamond’s existing beamlines. Users will also benefit from new automation tools to enhance our beamline instrumentation and downstream data analysis.

What is the current status of Diamond-II?

Right now, we are in the planning and design phase of the project, although initial calls for proposals and specifications for core platform technologies have been put out to tender with industry suppliers. We will shut down the synchrotron in December 2027, with the bulk of the upgrade activity completed in summer 2029. From there, we will slowly ramp back up to fully operational by mid-2030, although some beamlines will be back online sooner.

What roles are other advanced light sources playing in Diamond-II?

Even though synchrotron facilities are effectively in competition with each other – to host the best scientists and to enable the best science – what always impresses me is that collaboration and partnership are hard-wired into our community model. At a very basic level, this enables phased scheduling of the Diamond-II upgrade in co-ordination with other large-scale facilities – mainly to avoid several light sources going dark simultaneously.

How is this achieved?

Diamond’s  Machine Advisory Committee – which comprises technical experts from other synchrotron facilities – plays an important networking role in this regard, while also providing external challenge, sense-check and guidance when it comes to review and iteration of our technical ambitions. In the same way, we have engaged extensively with our user community – comprising some 14,000 scientists – over the past decade to ensure that the users’ priorities underpin the Diamond-II science case and, ultimately, that we deliver a next-generation facility with analytical instruments that meet their future research needs.

You’ve been chief executive officer of Diamond since October 2023. What does your typical day look like?

Every day is different at Diamond – always exciting, sometimes exhausting but never dull. At the outset, my number-one priority was to engage broadly with our key stakeholders – staff teams, the user community and the funding agencies – to build a picture of how things work and fit together. That engagement is ongoing, with a large part of the working day spent in meetings with division directors, senior managers and project leaders from across the organization. The task is to figure out what’s working, what isn’t and then quickly address any blockers.

I want to hear what our people really think; not what they think I want to hear

How do you approach this?

Alongside those formal meetings, I try to be visible and available whenever possible. Ad hoc visits to the beamlines and the control room, for example, mean that I can meet our scientists, technicians and support staff. It’s important for staff to have the opportunity to talk candidly and unfiltered with the chief executive officer so that I can understand the everyday issues arising. I want to hear what our people really think; not what they think I want to hear.

How do you attract and retain a diverse workforce?

Diamond is a highly competitive research facility and, by extension, we are able to recruit on a global basis. Diversity is our strength: the best talent, a range of backgrounds, plus in-depth scientific, technical and engineering experience. Ultimately, what excites many of our scientists and engineers is the opportunity to work at the cutting edge of synchrotron science, collaborating with external users and translating their research objectives into realistic experiments that will deliver results on the Diamond beamlines. One of my priorities as chief executive officer is to nurture and tap into that excitement, creating a research environment where all our people feel valued and can see how their individual contribution is making a difference.

How is Diamond optimizing its engagement with industry?

Industry users account for around 5% of beamtime at Diamond – and we co-ordinate that effort on multiple levels. To provide strategic direction, there’s the Diamond Industrial Science Committee, with senior scientists drawn from a range of industries advising on long-term applied research requirements. At an operational level, we have the Industrial Liaison Office, a multidisciplinary team of in-house scientists who work closely with industrial researchers to address R&D problems across diverse applications – from drug discovery and catalysis to aerospace and automotive.

The Diamond Light Source

What about equipment manufacturers?

Our scientists and engineers also maintain ongoing collaborations with equipment manufacturers – in many cases, co-developing custom technologies and instrumentation to support our infrastructure and research capability. Those relationships are a win-win, with Diamond’s leading-edge requirements often shaping manufacturers’ broader product development roadmaps.   

Has Brexit had any impact on Diamond?

While Diamond’s relationship with Europe’s big-science community took a hit in the aftermath of Brexit, we are proactively rebuilding those bridges. Front-and-centre in this effort is our engagement in the League of European Accelerator-based Photon Sources (LEAPS), a strategic consortium initiated by the directors of Europe’s synchrotron and free-electron laser facilities. Working together, LEAPS provides a unified voice and advocacy for big science – engaging with funders at national and European level – to ensure that our scientists feel more valued, not only in terms of career pathways and progression, but also financial remuneration.

Is the future bright for synchrotron science?

We need big science to tackle humanity’s biggest challenges in areas such as health, medicine, energy, agriculture and sustainability. These grand challenges are a team effort, so the future is all about collaboration and co-ordination – not just between Europe’s advanced light sources, but other large-scale research facilities as well. To this end, Diamond has been, and remains, a catalyst in bringing together the global light sources community through the work of Lightsources.org.

The post Gianluigi Botton: maintaining the Diamond synchrotron’s cutting edge appeared first on Physics World.

]]>
Interview Gianluigi Botton, head of the Diamond Light Source in the UK, on the £519m Diamond-II upgrade and what it means for the national synchrotron science facility https://physicsworld.com/wp-content/uploads/2024/07/2024-07-22-Gianluigi-Botton.jpg newsletter
Sun-like stars seen orbiting hidden neutron stars https://physicsworld.com/a/sun-like-stars-seen-orbiting-hidden-neutron-stars/ Wed, 24 Jul 2024 14:05:38 +0000 https://physicsworld.com/?p=115699 Astronomers are puzzled how wide orbits survived supernovae

The post Sun-like stars seen orbiting hidden neutron stars appeared first on Physics World.

]]>
Astronomers have found strong evidence that 21 Sun-like stars orbit neutron stars without losing any mass to their binary companions.  Led by Kareem El-Badry at the California Institute of Technology, the international team spotted the binary systems in data taken by ESA’s Gaia satellite. The research offers new insights into how binary systems evolve after massive stars explode as supernovae. And like many scientific discoveries, the observations raise new questions for astronomers.

Neutron stars are created when massive stars reach the end of their lives and explode in dramatic supernovae, leaving behind dark and dense cores. So far, over 99% of the neutron stars discovered in the Milky Way have been solitary – but in some rare cases, they do exist in binary systems with Sun-like companion stars.

In every one of these previously discovered systems, the neutron star’s powerful gravitational field is ripping gas from its companion star. The gas is heated to extreme temperatures as it accretes onto the neutron star, causing it to shine with X-rays or other radiation.

No accretion

However, as El-Badry explains, “it has long been expected that there should be similar binaries of neutron stars and normal stars in which there is no accretion. Such binaries are harder to find because they produce no X-rays.”

Seeking these more elusive binaries, El-Badry’s team scoured data from ESA’s Gaia space observatory, which measures the positions, distances and motions of stars with high precision.

The astronomers looked for Sun-like stars that “wobbled” in the sky as they orbited invisible companions. By measuring the wobble, they could then calculate the size and period of the orbit as well as the masses of both objects in the binary system.
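
The final step of such an analysis can be illustrated with Kepler’s third law: once a period and orbital separation are in hand, the total mass of the system follows, and subtracting the visible star’s mass leaves the mass of the hidden companion. The sketch below is a simplified illustration with placeholder numbers, not the team’s actual pipeline (which fits the photocentre orbit and a mass function).

```python
# A simplified illustration: given an orbital period, the semi-major axis of the
# relative orbit and the luminous star's mass, Kepler's third law gives the mass
# of the unseen companion. The numbers are illustrative placeholders, not values
# from the Gaia study.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
YEAR = 3.156e7       # year, s

def companion_mass_msun(a_au, period_yr, m_star_msun):
    """Companion mass from Kepler's third law: M_tot = 4*pi^2*a^3 / (G*P^2)."""
    a, p = a_au * AU, period_yr * YEAR
    m_total = 4 * math.pi**2 * a**3 / (G * p**2)
    return m_total / M_SUN - m_star_msun

# A Sun-like (1 M_sun) star on a ~2 AU, two-year relative orbit around an unseen companion
print(f"Companion mass ~ {companion_mass_msun(2.1, 2.0, 1.0):.1f} M_sun")
# ~1.3 M_sun: too heavy for most white dwarfs, so a neutron star becomes a natural candidate
```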

El-Badry explains that Gaia is best at discovering binaries with widely separated orbits. “Gaia monitors more than a billion stars, giving us good chances of finding even very rare objects,” he adds.

Gravitational influence

In total, Gaia’s data revealed 21 cases where Sun-like, main sequence stars appear to be orbiting around unseen neutron star companions, without losing any material. If this interpretation is correct, it would be the first time that neutron stars have been discovered purely as a result of their gravitational influence.

The researchers predict that these systems are likely to evolve in the future when their Sun-like stars approach the end of their lives. “When the [Sun-like] stars evolve and become red giants, they will expand and begin transferring mass to the neutron stars, so these systems are progenitors of X-ray binaries,” El-Badry says.

On top of this, the sizes of the orbits observed by the team could provide clues about the magnitudes of the supernovae that formed the neutron stars. “The wide orbits of the binaries in our sample would get unbound if the neutron stars had received significant kicks during the supernovae from which they are born,” El-Badry explains. “These objects imply that some neutron stars form with weak kicks.”

Could be white dwarfs

The team’s results throw up some important questions about the nature of these binaries and how they formed. For now, it remains possible that the unseen companions could be white dwarfs. These are the remnants of relatively small stars like the Sun – stars that have exhausted their nuclear fuel and then fade out, rather than exploding to form neutron stars.

If a companion is indeed a neutron star, its progenitor star would have experienced a red supergiant phase before going supernova. This would have created an envelope of gas large enough to affect the binary system. It is not clear why the two stars would not have drawn much closer together or even merged during this phase. Later, when the larger star exploded, it is not clear why the two objects did not go their separate ways.

El-Badry’s team hope that future studies of Gaia data could answer these challenging questions and explain how these curious binary systems form and evolve.

The observations are described in The Open Journal of Astrophysics.

The post Sun-like stars seen orbiting hidden neutron stars appeared first on Physics World.

]]>
Research update Astronomers are puzzled how wide orbits survived supernovae https://physicsworld.com/wp-content/uploads/2024/07/24-7-24-Sun-like-star-and-neutron-star.jpg
Could humans run on water? https://physicsworld.com/a/could-humans-run-on-water/ Wed, 24 Jul 2024 09:00:02 +0000 https://physicsworld.com/?p=115684 Scientists have investigated whether we could mimic basilisk lizards, on Earth or elsewhere

The post Could humans run on water? appeared first on Physics World.

]]>
With the 2024 Paris Olympics just days away, sports fans are braced to see who will run, jump, row, fight and dance themselves into the history books. One of the most exciting moments will be the 100 m sprint finals, when athletes compete to become the fastest man or woman on Earth.

Over the years we have seen jaw-dropping performances from the likes of Usain Bolt and Florence Griffith-Joyner. Scientists have been captivated by top sprinters – trying to understand how physique, technique and nutritional intake can help athletes push the limits of human ability. In this episode of the Physics World Stories podcast, we tackle the more speculative question: could an Olympic-level athlete ever run on water?

Grappling with this question is our guest Nicole Sharp, engineer and science communicator specializing in fluid dynamics. She runs the fluid dynamics blog FYFD and authored the recent Physics World feature “Could athletes mimic basilisk lizards and turn water-running into an Olympic sport?“. Basilisk lizards are famed for their ability to skitter across water surfaces, usually to escape predators.

It won’t surprise you to know that scientists have already grappled with this question. For instance, a team in Italy studied whether it was possible in reduced gravity conditions equivalent to the Moon. Sadly, a water race on the Moon is unlikely due to the absence of pools of liquid on the lunar surface.

One place that could provide the setting for a liquid sprint is Saturn’s moon Titan, home to lakes of ethane and methane – the only large, stable bodies of surface liquid in our solar system found beyond Earth. If such an event were to happen tomorrow, perhaps the gold-medal favourite would be US sprinter Sha’Carri Richardson – the current 100 m world champion, who weighs just 45 kg.
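
For a rough feel of the numbers, the sketch below makes a crude drag-only estimate – not the basilisk-style slap-and-stroke analysis discussed in the feature – of how fast a foot would have to be driven downwards for fluid drag to support a 45 kg sprinter on Earth, on a hypothetical lunar pool and in a Titan lake of liquid methane. Every parameter is an assumption.

```python
# A crude, drag-only estimate: how fast must a foot be driven downwards for fluid
# drag to support a 45 kg sprinter? This is a back-of-envelope sketch, not the
# analysis in the feature; every parameter is an assumption.
import math

C_D = 1.0         # drag coefficient of a flat foot pushed through liquid (assumed)
FOOT_AREA = 0.02  # m^2, sole area of one foot (assumed)
DUTY = 0.1        # fraction of each stride the foot spends pushing down (assumed)
MASS = 45.0       # kg, as quoted in the text

def required_foot_speed(g, rho):
    """Downward foot speed at which drag 0.5*rho*Cd*A*v^2 supports the body weight."""
    peak_force = MASS * g / DUTY            # force needed while the foot is in contact
    return math.sqrt(2 * peak_force / (rho * C_D * FOOT_AREA))

cases = [("Earth (water)", 9.81, 1000.0),
         ("Moon (hypothetical pool)", 1.62, 1000.0),
         ("Titan (liquid methane)", 1.35, 450.0)]
for place, g, rho in cases:
    print(f"{place}: foot speed ~ {required_foot_speed(g, rho):.0f} m/s")
# Roughly 20 m/s on Earth -- far beyond what a human leg can manage -- but nearer
# 10 m/s in the weak gravity of the Moon or Titan.
```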

Listen to the podcast to discover whether Richardson would sprint or sink at the inaugural Titan Olympics.

The post Could humans run on water? appeared first on Physics World.

]]>
Podcasts Scientists have investigated whether we could mimic basilisk lizards, on Earth or elsewhere https://physicsworld.com/wp-content/uploads/2024/07/Water-running-541984452-iStock_Choreograph_home.jpg newsletter
Portable camera expands the applications of gamma imaging https://physicsworld.com/a/portable-camera-expands-the-applications-of-gamma-imaging/ Wed, 24 Jul 2024 08:00:24 +0000 https://physicsworld.com/?p=115646 The Seracam portable gamma camera could extend the use of nuclear medicine investigations beyond the constraints of larger fixed gamma camera systems

The post Portable camera expands the applications of gamma imaging appeared first on Physics World.

]]>
Gamma imaging is a nuclear medicine technique employed in over 100 different diagnostic procedures. Also known as scintigraphy, the approach uses gamma cameras to image the distribution of gamma-emitting radiopharmaceuticals administered to the body, with applications including thyroid imaging, tumour imaging, and lung and renal studies.

Most clinical gamma cameras are large devices designed for whole-body scanning and located in their own dedicated room. While such systems offer high sensitivity and a large field-of-view (FOV), they are not ideal for patients who cannot attend the nuclear medicine department. A small gamma camera, on the other hand, could enable scanning of more patients in far more scenarios.

Made to meet this challenge, Seracam is a new portable gamma camera developed and designed by UK medical imaging company Serac Imaging Systems. Just 15 cm in diameter, 24 cm long and weighing 5 kg, the hybrid optical–gamma camera is designed for small-organ imaging within outpatient clinics, intensive care units, or even in operating theatres during surgery.

“Gamma imaging is no longer confined to the nuclear medicine department,” explains Sarah Bugby from Loughborough University. “The system has a much smaller footprint, it could be stored in a cupboard and brought out only when needed.”

Bugby and colleagues have now performed a detailed assessment of Seracam’s clinical potential, reporting their findings in EJNMMI Physics.

System evaluation

Seracam uses a CsI(Tl) crystal scintillator to convert incoming gamma photons to optical photons. This light is then captured by a 25.5 x 25.5 mm detector, divided into 245 x 245 pixels, and analysed in real time to create an image of the gamma counts.

The compact Seracam

The camera integrates four pinhole collimators, with pinhole diameters of roughly 1, 2, 3 and 5 mm. The physics of collimation means that smaller pinhole diameters provide better spatial resolution but lower sensitivity, while larger pinholes provide higher sensitivity but with a trade-off in image resolution.
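
The trade-off can be made concrete with the standard textbook pinhole formulas, in which geometric resolution scales linearly with pinhole diameter while sensitivity scales with its square. The distances in the sketch below are assumed for illustration and are not Seracam’s actual geometry.

```python
# Sketch of the textbook pinhole trade-off: geometric resolution R ~ d*(1 + a/b)
# and on-axis sensitivity g ~ d^2/(16*a^2), with d the (effective) pinhole diameter,
# a the source-to-pinhole distance and b the pinhole-to-detector distance.
# Distances are assumed for illustration, not Seracam's actual geometry, and
# edge-penetration corrections to d are ignored.

PINHOLES_MM = [1.0, 2.0, 3.0, 5.0]   # the four pinhole diameters quoted above
A_MM = 100.0                         # source-to-pinhole distance (assumed)
B_MM = 30.0                          # pinhole-to-detector distance (assumed)

for d in PINHOLES_MM:
    resolution = d * (1 + A_MM / B_MM)            # mm, referred back to the object plane
    rel_sensitivity = (d / PINHOLES_MM[0]) ** 2   # sensitivity scales as d^2
    print(f"{d:3.0f} mm pinhole: resolution ~ {resolution:4.1f} mm, "
          f"relative sensitivity {rel_sensitivity:4.0f}x")
# Doubling the pinhole diameter doubles the blur but quadruples the count rate.
```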

“With Seracam, you can change the collimator at the press of a button in just a second or so,” Bugby explains. “In traditional gamma cameras, collimators must be changed manually, which is time consuming as they’re about 60 cm square and made of lead. This means that, when imaging a patient, you’re locked into the initial collimator choice. Seracam offers the flexibility to adjust these trade-offs on the fly.”

Another novel feature is Seracam’s hybrid gamma and optical imaging. A gamma image simply comprises bright spots on a dark background, with no anatomical landmarks for context. Seracam overlays the optical and gamma images to show both gamma and anatomical information.

Bugby and colleagues evaluated Seracam in a series of performance tests using 99mTc – the most common isotope used in nuclear medicine. Measurements of parameters including spatial resolution, sensitivity and image uniformity demonstrated that the device is suitable for clinical use.

Clinical scenarios

Next, the team performed experimental simulations of two clinical scenarios: thyroid imaging, used to assess the function of thyroid tissue, nodules and tumours; and a gastric emptying study, used to time stomach emptying after ingestion of a radiolabelled meal.

For thyroid imaging, the researchers examined a Picker phantom, an acrylic block with a thyroid-shaped well filled with 99mTc. They also imaged a head-and-neck phantom with fillable head and thyroid volumes, simulating hyperthyroidism and a normal thyroid with a hot nodule.

Seracam produced good quality images for both phantoms, showing its suitability for thyroid scintigraphy. The researchers note that the Picker phantom image quality was similar to that achieved with a traditional large-FOV camera, but using significantly lower counts and a larger imaging distance than employed in a clinical setting.

Hyperthyroidism gamma imaging simulation

For gastric emptying, the team simulated a human stomach using a 500 ml flask filled with 99mTc and gradually emptied via syringes. At each emptying step, Seracam acquired a 120 s image using the 5.00 mm pinhole. The simulation showed that Seracam could produce a gastric emptying curve with the expected linearity at clinically relevant activities.

“We intentionally chose challenging rather than best-case scenarios, so the fact that we saw good performance is a really strong indicator,” says Bugby. “Gastric emptying isn’t a small-organ scenario, so in this case Seracam outperformed our expectations. Of course, this all needs to be validated in the clinic.”

The team concludes that Seracam can provide effective small-FOV gamma imaging within a clinical setting with excellent spatial resolution, although with reduced sensitivity compared with large-FOV devices. “Our results show that Seracam is well suited for the kinds of clinical tests it was designed for,” Bugby points out.

Seracam’s small camera head can be positioned in places that larger camera heads on conventional systems cannot reach. This flexibility in positioning could itself improve image quality: moving the camera closer to the patient helps compensate for the sensitivity that’s sacrificed by making such a small device.

“Combining this manoeuvrability with other beneficial features unavailable in large FOV systems, such as hybrid gamma–optical imaging and instant collimator changes, opens up new approaches to imaging,” says Bugby. “It’s easy to imagine a scenario that begins in a high-sensitivity ‘survey mode’ before switching to a high-resolution ‘imaging mode’ to investigate identified uptake sites. We’re excited to see how experienced clinicians will take advantage of these novel features.”

The researchers are now simulating other clinical applications, such as sentinel lymph node biopsy, and Seracam is being trialled at clinical sites in the US and Malaysia. Co-funded by the UK’s innovation agency, Innovate UK, Loughborough University researchers are also working on new image analysis and display techniques to enable Seracam’s use in radioguided surgery.

“We’re hopeful that these new innovations will be trialled by a team at the University of Malaya Medical Centre and others very soon,” Bugby tells Physics World.

The post Portable camera expands the applications of gamma imaging appeared first on Physics World.

]]>
Research update The Seracam portable gamma camera could extend the use of nuclear medicine investigations beyond the constraints of larger fixed gamma camera systems https://physicsworld.com/wp-content/uploads/2024/07/24-07-24-gamma-camera.jpg newsletter1
Cosmic-ray physics: detector advances open up the ultrahigh-energy frontier https://physicsworld.com/a/cosmic-ray-physics-detector-advances-open-up-the-ultrahigh-energy-frontier/ Tue, 23 Jul 2024 15:32:53 +0000 https://physicsworld.com/?p=115638 Gearing up in the search for ultrahigh-energy cosmic rays

The post Cosmic-ray physics: detector advances open up the ultrahigh-energy frontier appeared first on Physics World.

]]>
Physics at the extremes provides the raison d’être for the JEM-EUSO research collaboration – or, in long form, the Joint Exploratory Missions for Extreme Universe Space Observatory. Over the past 20 years or so, more than 300 scientists from 16 countries have been working collectively towards the JEM-EUSO end-game: the realization of a space-based, super-wide-field telescope that will scan the night sky and enable astrophysicists to understand the origin and nature of ultrahigh-energy cosmic rays (upwards of 5 × 10¹⁹ eV). In other words, JEM-EUSO promises to open a unique window into the Universe at energy regimes far beyond the current generation of man-made particle accelerators.

Looking at the sky from above

For context, cosmic rays are extraterrestrial particles comprising hydrogen nuclei (around 90% of the total) and helium nuclei (roughly 9%), with the remainder made up of heavier nuclei and electrons. Their energy range varies from about 10⁹ to 10²⁰ eV and beyond, while their flux is similarly spread across many orders of magnitude – ranging from 1 particle/m² per second at low energies (around 10¹¹ eV) out to roughly 1 particle/km² per century at extreme energies (around 10²⁰ eV).
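
The steepness of that fall-off is what drives the need for enormous – and ultimately space-based – detection areas. The sketch below extrapolates the integrated flux with a single, simplified power law; the reference energy and spectral index are assumptions rather than JEM-EUSO values, but they reproduce the order of magnitude quoted above.

```python
# Illustrative scaling of the flux figures quoted above, assuming a single power-law
# integral spectrum. The reference energy and spectral index are simplifying
# assumptions, not JEM-EUSO numbers.
E_REF = 1e11     # eV: energy at which the integrated flux is ~1 particle/m^2/s (assumed)
FLUX_REF = 1.0   # particles per m^2 per second above E_REF
INDEX = 1.7      # integral spectral index (differential index ~2.7, assumed)

def integral_flux_per_m2_s(energy_ev):
    """Particles per m^2 per second above the given energy, single power law."""
    return FLUX_REF * (energy_ev / E_REF) ** (-INDEX)

rate = integral_flux_per_m2_s(1e20)                 # per m^2 per second
per_km2_per_century = rate * 1e6 * 3.156e7 * 100    # m^2 -> km^2, s -> century
print(f"Above 10^20 eV: ~{per_km2_per_century:.1f} particle per km^2 per century")
```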

While it’s possible to detect cosmic rays directly at low-to-intermediate energies (up to 10¹⁵ eV), the flux of particles is so low at higher energies that indirect detection is necessary. In short, that means observing the interaction of cosmic rays (as well as neutrino decays) with the outer layers of the atmosphere, where they produce cascades of subatomic particles known as “extensive air showers” (or secondary cosmic rays).

With this in mind, the JEM-EUSO collaboration has rolled out an ambitious R&D programme over the past decade, with a series of pathfinder experiments geared towards technology validation ahead of future space-based missions (aboard orbiting satellites) to observe cosmic rays at their highest energies. Projects to date include ground-based installations like the EUSO-TA (deployed at the Telescope Array site in Utah, US); various stratospheric balloons (the most recent of which is EUSO-SPB2); and MINI-EUSO (Multiwavelength Imaging New Instrument for the Extreme Universe Space Observatory), a telescope that’s been observing the Earth from inside the International Space Station (ISS) since 2019.

Marco Casolino, co-principal investigator on JEM-EUSO, and members of the Mini-EUSO development team with their assembled detector module

All of these experiments operate at night and require clear weather conditions, surveying regions of the sky with low-artificial-light backgrounds. In each case, the instruments in question monitor Earth’s atmosphere by measuring the fluorescence emissions and Cherenkov light produced by extensive air showers. The fluorescence originates from the relaxation of nitrogen molecules excited by their interaction with charged particles in the air showers, while charged particles in the shower travelling faster than the speed of light in air create a blue flash of Cherenkov light (like the sonic boom created by an aircraft exceeding the speed of sound).

Operationally, because those two light components exhibit different durations – of the order of microseconds for fluorescence light; a few nanoseconds for Cherenkov light – they require dedicated detectors and acquisition electronics: multi-anode photomultiplier tubes (MAPMTs) for fluorescence detection and silicon photomultipliers (SiPMs) for the Cherenkov detectors.

The win-win of technology partnership

So where do things stand with JEM-EUSO’s implementation of current- and next-generation detectors? Among the programme’s core technology partners in this regard is Hamamatsu Photonics, a Japanese optoelectronics manufacturer that operates across diverse industrial, scientific, and medical markets. It’s a long-standing collaboration, with Hamamatsu engineers co-developing and supplying MAPMT and SiPM solutions for various JEM-EUSO experiments.

“We have a close working relationship with Hamamatsu’s technical staff in Italy and, through them, a direct line to the product development team in Japan,” explains Marco Casolino, a research director at the Rome “Tor Vergata” section of Italy’s National Institute for Nuclear Physics (INFN) and the co-principal investigator on JEM-EUSO (as well as project leader for Mini-EUSO).

The use of MAPMTs is well established within JEM-EUSO for indirect detection of ultrahigh-energy cosmic rays via fluorescence (with the focal surface of JEM-EUSO fluorescence telescopes fabricated from MAPMTs). “Yet although MAPMTs are a volume product line for Hamamatsu,” Casolino adds, “the solutions we employ [for JEM-EUSO experiments] are tailored by its design engineers to our exacting specifications, with an almost artisanal level of craftsmanship at times to ensure we get the best product possible for our applications.”

That same approach and attention to detail regarding JEM-EUSO’s evolving requirements also guide the R&D partnership around SiPM technology. Hamamatsu engineers are working to maximize the advantages of the SiPM platform, including significantly lower operating voltage (versus MAPMTs), lightweight and durable structure, and compatibility with magnetic fields. Another plus is that SiPMs are immune to excessive levels of incident light, although the cumulative advantages are offset to a degree by the strong influence of temperature on SiPM detection efficiency (and the consequent need for active compensation schemes).
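
A typical compensation scheme tracks the temperature-dependent breakdown voltage and trims the bias supply so that the overvoltage – which sets the gain and detection efficiency – stays constant. The sketch below uses representative coefficients, not Hamamatsu specifications.

```python
# Minimal sketch of active bias compensation for a SiPM: the breakdown voltage
# V_bd drifts with temperature, so the supply is trimmed to hold the overvoltage
# (and hence the gain and detection efficiency) constant. The coefficients are
# typical order-of-magnitude values, not Hamamatsu specifications.

V_BD_25C = 53.0     # breakdown voltage at 25 degC, volts (assumed)
DVBD_DT = 0.054     # breakdown-voltage temperature coefficient, V per degC (assumed)
OVERVOLTAGE = 3.0   # target overvoltage, volts (assumed)

def bias_setpoint(temp_c):
    """Bias voltage that keeps the overvoltage constant at the given temperature."""
    v_bd = V_BD_25C + DVBD_DT * (temp_c - 25.0)
    return v_bd + OVERVOLTAGE

for t in (-20, 0, 25, 40):   # an illustrative orbital day/night temperature swing
    print(f"{t:+4d} degC -> bias setpoint {bias_setpoint(t):6.2f} V")
# Left uncorrected, a 20 degC drift shifts the overvoltage by ~1 V -- tens of
# per cent in gain -- which is why active compensation is needed in open space.
```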

Massimo Aversa

Currently, JEM-EUSO scientists are focused on an exhaustive programme of test, measurement and calibration to optimize their large-scale SiPM detector designs – considering geometry, weight, packaging and robustness – for mass-critical applications in satellite-based instrumentation. “We are being very cautious with the next steps because SiPM technology has never been tested in space using very large detector arrays [e.g. 1024 x 1024 pixels],” explains Casolino. “Ultimately, it’s all about technical readiness level – ensuring that SiPM modules can handle the harsh environment in open space and the daily swings in temperature and radiation levels for the duration of a three- or four-year mission.”

Those priorities for continuous improvement are echoed by Massimo Aversa, senior product manager for MAPMT and SiPM product lines in Hamamatsu’s Rome division. “Our collaboration with JEM-EUSO is a win-win,” he concludes. “On the one hand, we are working to develop higher-resolution SiPM detector arrays with enhanced radiation-hardness – products that can be deployed for observations in space over extended timeframes. By extension, the lessons we learn here are transferable and will inform Hamamatsu’s SiPM development roadmap for diverse applications in high-energy physics.”

Further reading

Simon Bacholle et al. 2021 Mini-EUSO mission to study Earth UV emissions on board the ISS ApJS 253 36

Francesca Bisconti 2023 Use of silicon photomultipliers in the detectors of the JEM-EUSO program Instruments 7 55

SiPMs: versatile by design

The SiPM – also known as a Multi-Pixel Photon Counter (MPPC) – is a solid-state photomultiplier comprising a high-density matrix of avalanche photodiodes operating in Geiger mode (such that a single electron–hole pair generated by absorption of a photon can trigger a strong “avalanche” effect). In this way, the technology provides the basis of an optical sensing platform that’s ideally suited to single-photon counting and other ultralow-light applications at wavelengths ranging from the vacuum-ultraviolet through the visible to the near-infrared.

Hamamatsu, for its part, currently supplies commercial SiPM solutions to a range of established and emerging applications spanning academic research (e.g. quantum computing and quantum communication experiments); nuclear medicine (e.g. positron emission tomography); hygiene monitoring in food production facilities; as well as light detection and ranging (LiDAR) systems for autonomous vehicles. Other customers include instrumentation OEMs specializing in areas such as fluorescence microscopy and scanning laser ophthalmoscopy.

Near term, Hamamatsu is also focused on emerging applications in astroparticle physics and gamma-ray astronomy, while further down the line there’s the promise of at-scale SiPM deployment within particle accelerator facilities like CERN, KEK and Fermilab.

Taken together, what underpins these diverse use-cases is the SiPM’s unique specification sheet, combining high photon detection efficiency with ruggedness, resistance to excess light and immunity to magnetic fields.

 

The post Cosmic-ray physics: detector advances open up the ultrahigh-energy frontier appeared first on Physics World.

]]>
Analysis Gearing up in the search for ultrahigh-energy cosmic rays https://physicsworld.com/wp-content/uploads/2024/07/web-mini-euso-asi.jpg newsletter
Primordial black holes contain very little dark matter, say astronomers https://physicsworld.com/a/primordial-black-holes-contain-very-little-dark-matter-say-astronomers/ Tue, 23 Jul 2024 14:00:20 +0000 https://physicsworld.com/?p=115654 Theories of the early universe may have to be revised following new analyses of data from the Optical Gravitational Lensing Experiment

The post Primordial black holes contain very little dark matter, say astronomers appeared first on Physics World.

]]>
When the gravitational wave detectors LIGO and VIRGO observed signals from merging black holes with masses much higher than those of black holes that form from the collapse of stars, scientists were intrigued. Had these unusually massive black holes formed when the universe was very young? And might they contain large amounts of dark matter?

According to new analyses of 20 years of data from the Optical Gravitational Lensing Experiment (OGLE) survey, the answer to the second question is a firm “no”. Members of the survey say that these black holes account for, at most, only a few percent of the universe’s dark matter – the mysterious substance that emits no light and can only be detected thanks to its gravitational pull, yet is thought to make up around 85% of all the matter in the universe. Indeed, the survey results cast doubt on the very existence of early-origin black holes, sending researchers back to the drawing board for explanations.

A different origin for some black holes?

Since the first detection of gravitational waves from a pair of merging black holes in 2015, LIGO and VIRGO have spotted more than 90 such events. These black holes are 20 to 100 times more massive than our Sun, making them four to five times more massive than any black hole previously detected within our Milky Way galaxy.

One possible explanation for why the universe might contain different masses of black holes was put forward by the Soviet physicists Yakov Zeldovich and Igor Novikov in 1966, and independently by the British physicist Stephen Hawking in 1971. They proposed that some black holes could have formed in the very early universe, before the first stars appeared. The mechanisms that created these so-called “primordial” black holes would be different from those that produce black holes via stellar collapse, which would remove some constraints on their masses.

Gravitational microlensing

In the OGLE survey, a team led by Andrzej Udalski of the Astronomical Observatory of the University of Warsaw, Poland, analysed light from nearly 80 million stars in the Large Magellanic Cloud, a nearby satellite of our own Milky Way. Their goal was to find characteristic brightenings of stars due to an effect known as microlensing. This effect occurs when a massive object (either dark matter or normal matter) passes between an observer on Earth and a source of light in such a way that the three objects line up almost perfectly.

At that point, Udalski explains, Einstein’s general theory of relativity means the intermediate object “can act as a lens – it gravitationally bends the light so more rays arrive to the observer and the source star is brighter,” he says. “The changes of brightness are very characteristic and very rare, but they can be detected.”

The duration of this brightening depends on the mass of the lensing object: the heavier it is, the longer the event. Microlensing events involving solar-mass objects typically last several weeks, whereas those that feature black holes 100 times more massive than the Sun would last a few years.
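
Those timescales follow from the standard point-lens relation, in which the Einstein radius – and hence the event duration – grows as the square root of the lens mass. The sketch below uses rough assumed values for the lens distance and relative velocity towards the Large Magellanic Cloud; they are illustrative, not the values used in the OGLE analysis.

```python
# Scaling of the microlensing timescale with lens mass: the Einstein radius grows
# as sqrt(M), so the event duration t_E = R_E / v does too. Distances and velocity
# below are rough assumed values for lensing towards the LMC.
import math

G, C = 6.674e-11, 2.998e8   # SI units
M_SUN = 1.989e30            # kg
KPC = 3.086e19              # m
D_S = 50 * KPC              # distance to the Large Magellanic Cloud (assumed)
D_L = 25 * KPC              # lens placed halfway along the line of sight (assumed)
V_REL = 220e3               # relative transverse velocity, m/s (assumed)

def einstein_time_days(mass_msun):
    """Einstein-radius crossing time for a point lens of the given mass."""
    r_e = math.sqrt(4 * G * mass_msun * M_SUN / C**2 * D_L * (D_S - D_L) / D_S)
    return r_e / V_REL / 86400.0

for m in (1, 10, 100, 1000):
    print(f"{m:5d} solar-mass lens: t_E ~ {einstein_time_days(m):6.0f} days")
# ~80 days for a solar-mass lens, a couple of years at 100 solar masses and nearly
# seven years at 1000 -- hence the value of a 20-year survey.
```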

Only 13 events detected

Previous gravitational microlensing dark-matter surveys (including the US-led MACHO and French EROS as well as OGLE) indicated that black holes lighter than one solar mass comprise less than 10% of dark matter. However, these observations were not directly sensitive to extremely long-timescale microlensing events, so they were not sensitive to massive black holes like those recently observed via gravitational waves.

To address this gap, Udalski and colleagues re-analysed their data. If black holes of 10 solar masses made up all the dark matter in our cosmic neighbourhood, they calculated that OGLE should have detected 258 microlensing events. If dark matter was instead composed of 100- or 1000-solar-mass black holes, the survey should have yielded 99 or 27 microlensing events, respectively. “During the 20 years of our experiment we should have registered over 500 microlensing events,” Udalski tells Physics World. “In reality, however, we detected only 13.”

The explanation, he says, is simple: the dark matter in the galactic “halo” between the Milky Way and the Large Magellanic Cloud cannot contain the kind of primordial black holes that would cause microlensing events. “All of the 13 events we registered can be nicely explained as events caused by nearby galactic stars or stars in the Large Magellanic Cloud located somewhat in front of that galaxy,” Udalski says. “This indicates that massive black holes can compose at most a few percent of dark matter.”

More specifically, the team’s calculations revealed that black holes of 10 solar masses may comprise at most 1.2% of dark matter. For 100-solar-mass black holes, the number increases to 3%; for 1000-solar-mass black holes, it is 11%.

An “important impact” on astrophysics and cosmology

Udalski thinks the team’s findings will have an “important impact” on astrophysics and cosmology. “The first conclusion is that there is no empirical proof that primordial black holes ever existed,” he explains. “Because it is supposed that they were formed in the very early universe, our picture of the beginnings must be revised. Secondly, as dark matter does not contain classical bodies that can cause microlensing events, it still remains a mystery what this matter is.”

While OGLE in its present form is nearly finished, the team still plan to investigate putative black holes with very low, planet-scale masses. “This will allow us to fully exploit the potential of the microlensing observations of the Large Magellanic Cloud,” Udalski says.

The study’s results are detailed in Astrophysical Journal Supplement Series and Nature.

The post Primordial black holes contain very little dark matter, say astronomers appeared first on Physics World.

]]>
Research update Theories of the early universe may have to be revised following new analyses of data from the Optical Gravitational Lensing Experiment https://physicsworld.com/wp-content/uploads/2024/07/Low-Res_lmc-microlensing-scheme-star-3500px.jpg newsletter1
Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world https://physicsworld.com/a/angels-demons-tom-hanks-and-peter-higgs-how-cern-sold-its-story-to-the-world/ Tue, 23 Jul 2024 10:00:58 +0000 https://physicsworld.com/?p=115417 James Gillies recalls the drama of his time as head of communications at CERN, which turns 70 this year

The post <em>Angels & Demons</em>, Tom Hanks and Peter Higgs: how CERN sold its story to the world appeared first on Physics World.

]]>
“Read this,” said my boss as he dropped a book on my desk sometime in the middle of the year 2000. As a dutiful staff writer at CERN, I ploughed my way through the chunky novel, which was about someone stealing a quarter of a gram of antimatter from CERN to blow up the Vatican. It seemed a preposterous story but my gut told me it might put the lab in a bad light. So when the book’s sales failed to take off, all of us in CERN’s communications group breathed a sigh of relief.

Little did I know that Dan Brown’s Angels & Demons would set the tone for much of my subsequent career. Soon after I finished the book, my boss left CERN and I became head of communications. I was now in charge of managing public relations for the Geneva-based lab and ensuring that CERN’s activities and functions were understood across the world.

I was to remain in the role for 13 eventful years that saw Angels & Demons return with a vengeance; killer black holes maraud the tabloids; apparently superluminal neutrinos have the brakes applied; and the start-up, breakdown and restart of the Large Hadron Collider (LHC). Oh, and the small business of a major discovery and the award of the Nobel Prize for Physics to François Englert and Peter Higgs in 2013.

Fear, black holes and social media

Back in 2000 the Large Electron-Positron collider, which had been CERN’s flagship facility since 1989, was reaching the end of its life. Fermilab was gearing up to give its mighty Tevatron one more crack at discovering the Higgs boson, and social media was just over the horizon. Communications teams everywhere struggled to work out how to adapt to this new-fangled phenomenon, which was giving a new platform to an old emotion.

Fear of the new is as old as humanity, so it’s not surprising that some people were nervous about big machines like the Tevatron, the Relativistic Heavy Ion Collider and the LHC. One individual had long been claiming that such devices would create “strangelets”, mini-black holes and other supposedly dangerous phenomena that, they said, would engulf the world. Before the Web, and certainly before social media, theirs was a voice in the wilderness. But social media gave them a platform and the tabloid media could not resist.

James Gillies

For the CERN comms team, it became almost a full-time job pointing out that the LHC was a minnow compared to the energies generated by the cosmos. All we were doing was bringing natural phenomena into the laboratory where they could be easily studied, as I wrote in Physics World at the time. Perhaps the Nobel-prize-winning physicist Sam Ting was right to switch his efforts from the terrestrial cacophony to the quiet of space, where his Alpha Magnetic Spectrometer on the International Space Station observes the colossal energies of the universe at first hand.

Despite our best efforts, the black-hole myth steadily grew. At CERN open days, we arranged public discussions on the subject for those who did not know quite what to make of it. Most people seemed to realize that it was no more than a myth. The British tabloid newspaper the Sun, for example, playfully reminded readers to cancel their subscriptions before LHC switch-on day.

There were lawsuits, death threats and calls for CERN to be shut down

But some still took it seriously. There were lawsuits, death threats and calls for CERN to be shut down. There were reports of schools being closed on start-up day so that children could be with their parents if the world really did end. Worse still, in 2005 the BBC made a drama documentary End Day, seemingly inspired by Martin Rees’s book Our Final Century. The film played out a number of calamitous scenarios for humankind, culminating with humanity taking on Pascal’s wager and losing. I have read the book. That is not what Rees was saying.

We were now faced with another worry. Brown’s follow-up book, The Da Vinci Code, had become a blockbuster and it was clear that Angels & Demons, after its slow start, would follow suit. I therefore found myself in a somewhat surreal meeting with CERN’s then director-general (DG) Robert Aymar mulling over how CERN should respond. I suggested that the book’s success was a great opportunity for us to talk about the real physics of antimatter, which is anyway far more interesting than the novel.

To my relief, Aymar agreed – and in 2005 visitors to CERN’s website were greeted with a picture of our top-secret space plane that the DG uses to hop around the world in minutes. Or does he? Anyone clicking on the picture would discover that CERN doesn’t actually have a space plane, but we do make antimatter. We could even make a quarter of a gram of it, given 250 million years.
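
That figure is easy to sanity-check with a back-of-the-envelope estimate; the antiproton production rate assumed below is a round illustrative number, not an official CERN figure.

```python
# Back-of-envelope check of the "quarter of a gram in 250 million years" figure.
# The production rate is an assumed round number, not an official CERN value.
M_ANTIPROTON = 1.67e-27   # kg
TARGET_MASS = 0.25e-3     # kg, a quarter of a gram
RATE = 2e7                # antiprotons produced per second (assumed)
SECONDS_PER_YEAR = 3.156e7

n_antiprotons = TARGET_MASS / M_ANTIPROTON
years = n_antiprotons / RATE / SECONDS_PER_YEAR
print(f"{n_antiprotons:.1e} antiprotons -> about {years/1e6:.0f} million years")
# ~240 million years at this rate: the same order as the figure quoted to visitors.
```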

More importantly, we hoped that visitors to the website would learn that the really interesting thing about antimatter is that nature seems to favour matter and we still don’t know why. They’d also discover that antimatter plays an important role in medicine, in the form of positron-emission tomography (PET) scanners, and that CERN has long played an important part in their development.

Thanks to our playful, interactive approach, many people did click through. In fact, CERN’s Web traffic jumped by a factor of 10 almost overnight. The lab was on its way to becoming a household name and, in time, a synonym for excellence. In 2005, however, that was yet to come. We still had several years of black-hole myth-busting ahead.

Collider countdown

A couple of years later, an unexpected ally appeared in the form of Hollywood, which came knocking to ask if we’d be comfortable working with them on a film version of Angels & Demons. Again, the DG agreed and in 2009 the film appeared, starring Tom Hanks, along with Ayelet Zurer as a brilliant female physicist who saves the day. Fortunately, much of the book’s dodgy science and misrepresentation of CERN didn’t make it onto the screen (see box below).

Of course, the angels, the demons and the black holes were all a distraction from CERN’s main thrust – launching the LHC. By 2008 Fermilab’s Tevatron was well into its second run, but the elusive Higgs boson remained undiscovered. The mass range available for it was increasingly constrained and particle physicists knew that if the Tevatron didn’t find it, the LHC would (assuming the Higgs existed). The stakes were high, and a date was set to thread the first beams around the LHC. First Beam Day would be 10 September 2008.

Angels & Demons: when Hollywood came to CERN

Tom Hanks, Ayelet Zurer and Ron Howard in front of the Globe at CERN

Dan Brown’s 2000 mystery thriller Angels & Demons is a race against the clock to stop antimatter stolen from CERN from blowing up the Vatican. Despite initial slow sales, the book eventually proved so successful that it was turned into a 2009 movie of the same name, directed by Ron Howard. He visited CERN more than once and I was impressed by his wish to avoid the book’s shaky science.

In the movie version, which stars Tom Hanks and Ayelet Zurer, CERN is confined to the pre-opening title sequence, with the ATLAS cavern reconstructed in CGI. Howard’s team even gave me a watermarked script and asked for feedback on the science. Howard also made a short film about CERN for the movie’s Blu-ray release. Ahead of that event, we found ourselves fielding calls from Howard’s office at all times of day and night about the science.

The movie was officially launched at CERN to the entertainment press, with Howard, Hanks and Zurer in attendance, all of whom gushed about what an amazing place the lab is. Handled by Sony Pictures, the event proved much more tightly controlled than typical CERN gatherings, with Sony closely vetting which science journalists we’d invited. My colleague Rolf Landua and I ended up having dinner with Hanks, Zurer and Howard – something I could never have imagined happening when Angels & Demons first came out.

Any big new particle accelerator is its own prototype. Switching such a machine on is best done in peace and quiet, away from the media glare. But CERN’s new standing on the world’s stage, coupled with the still-present black-hole myth, dictated otherwise. Media outlets started contacting us – not to ask if they could come for the switch-on, but to tell us they would be there. Outside the CERN fence if necessary.

Another surreal conversation with the DG ensued. Media were coming, I told him, whether we liked it or not. Lots of them. We could either make plans to invite them in and allow them to follow the attempts to get beams around the LHC, or we could have them outside the lab reporting that CERN was starting the doomsday machine in secrecy behind the fence.

Around 1000 media professionals representing some 350 outlets descended on the lab

The DG agreed that it might be better to let them in, and so we did. Around 1000 media professionals representing some 350 outlets descended on the lab. Among them was a team from BBC Radio 4. Some months earlier, a producer called Sasha Feachem had rung CERN to say she’d been trying to persuade her boss, Mark Damazer, to do a full day’s outside broadcast from CERN, and would I come to London to convince him.

I tried, and in an oak-panelled room at Broadcasting House, failed completely to do so. But Damazer did accept an invitation to visit CERN. After hitting it off with the DG, Radio 4’s Big Bang Day was approved and an up-and-coming science presenter by the name of Brian Cox was chosen to anchor the BBC’s coverage. It was the first time a media team had ever broadcast wall-to-wall from a science lab and I don’t think Radio 4 has done anything like it since.

Journalists were accredited. A media centre was set up. Late-coming reporters were found places in the CERN main auditorium where they could watch a live feed from the control room, along with the physicists. We even installed openable windows in the conference room overlooking the control room so that TV crews could get clean shots of the action below.

The CERN Control Centre packed with staff and press

A time was set early that September morning for the first attempt at beam injection into the LHC, and the journalists were all in place. Then there was a glitch, and the timing was put back a couple of hours. Project leader Lyn Evans had agreed to give a countdown, and when the conditions for injection were back, he began. A dot appeared on a screen indicating that a proton beam had been injected.

After an agonising wait, a second dot appeared, indicating that the beam had gone round the 27 km-long machine once. There were tears and laughter, and the journalists who were parked in the auditorium with the physicists later said they’d had the best seats in the house. They were able to witness the magnitude of that moment alongside those whose lives it was about to change.

It was an exhausting but brilliant day. On my way home, I ran into Evans as he was driving out of the lab. He rolled down his window and said: “Just another day at the office, eh James!” Everyone was on top of the world. Media coverage was massive and positive, with many of those present telling us how refreshing it was to take part in something so clearly genuine in a world where much is hidden.

From joy to disaster

The joy proved short lived. The LHC has something like 10,000 high-current superconducting interconnects. One was not perfect, so it had a bit of resistance, which led to an electrical arc that released helium into the cryostat with enough force to knock several magnets off their stands. Nine days after switch-on, CERN suddenly had a huge and unexpected repair job on its hands.

The Higgs boson was still nowhere in sight. The Tevatron was still running and the painstaking task began of working out what had gone wrong at the LHC. CERN not only had to repair the damaged section, but also understand why it had happened and ensure it wouldn’t happen again. Other potentially imperfect interconnects had to be identified and remade. The machine also had to be equipped with systems that would release pressure should helium gas build up inside the cryostat.

Close-up of damage to superconducting magnet

My mantra throughout this period was that CERN had to be honest, open, trustworthy and timely in all communications – an approach that, I think, paid dividends. The media were kind to us, capturing the pioneering nature of our research and admiring the culture of an organization that sought not to attribute blame, but to learn and move on.

When beams were back in the LHC in November 2009, the media cheered us on. By the end of the year, the first data had been recorded. LHC running began in earnest in 2010, and with the world clearly still in place, the black-hole myth gave way to excitement about a potential major discovery. The Tevatron collided its last beams in September 2011, leaving the field clear for the LHC.

As time progressed, hints of something began to appear in the data, and by 2012 there was a palpable sense of expectation

As time progressed, hints of something began to appear in the data, and by 2012 there was a palpable sense of expectation. A Higgs Update Seminar was arranged at CERN for 4 July – the last day possible for the spokespeople of the LHC’s ATLAS and CMS experiments to be at CERN before heading to Melbourne for the 2012 International Conference on High-Energy Physics, which is always a highlight in particle physicists’ calendars.

Gerry Guralnik and Carl Hagen – early pioneers of spontaneous symmetry breaking – asked whether they could attend the CERN seminar, so we thought we’d better invite Peter Higgs and François Englert too. (Robert Brout, who had been co-author on Englert’s 1964 paper in Physical Review Letters (13 321) predicting what we now call the Brout–Englert–Higgs mechanism, had died in 2011.) Right up to the last minute, we didn’t know if we’d be making a discovery announcement, or just saying “Watch this space.” One person, however, did decide that he’d be able to say, “I think we have it.”

As DG since 2009, Rolf-Dieter Heuer had seen the results of both experiments, and was convinced that even if neither could announce the discovery individually, the combined data were sufficient. On the evening of 3 July 2012, as I left my office, which was next to the CERN main auditorium, I had to step over people laying out sleeping bags in the corridor to guarantee their places in the room the next day.

CERN auditorium full of people clapping and cheering

As it turned out, both experiments had strong enough measurements to make a positive statement on the day, though the language was still cautious. The physicists talked simply about “the discovery of a new particle with features consistent with those of the Higgs boson predicted by the Standard Model of particle physics”. Higgs and Englert heard the news seated side by side, Higgs famously wiping a tear from his eye and saying that it was remarkable that the discovery had been made in his lifetime.

The media were present in force, and everyone wanted to talk to the theorists. It’s a sign of the kind of person Higgs was that he told them they’d have plenty of opportunity to talk to him later, but that today was a day to celebrate the experimentalists.

Nature versus nature

The Higgs discovery was undoubtedly the highlight of my career in communications at CERN, but the Higgs boson is just one aspect of CERN’s research programme. I could tell you about the incredible precision achieved by the LHCb experiment, seeking deviations from the Standard Model in very rare decays. I could talk about the discovery of a range of composite particles predicted by theory. Or about the insights brought by a mind-boggling range of research at low energies, from antimatter to climate change.

Then there is CERN’s neutrino programme. It’s now focused on the US long baseline project, but it brought its own headaches to the communications team when muon neutrinos from CERN’s Super Proton Synchrotron appeared to be arriving at the Gran Sasso Laboratory in Italy faster than the speed of light.

“Have you checked all the cables?” said one of our directors to the scientists involved, in a meeting in the DG’s office. “Of course,” they insisted. As it turned out, there had been a false reading – not strictly speaking from a poorly chosen cable, but a faulty fibre-optic connection. The laws of physics were safe. Unfortunately, the fault only came to light after a seminar presenting the anomalous result had been held in the CERN Main Auditorium in September 2011.

Had they held the seminar at Gran Sasso, I’m sure they’d have got less coverage. Our approach was to say: “This is how science works – you get a measurement that you don’t understand, and you put yourself up to scrutiny from your peers.” It led to a memorable editorial in Nature (484 287) entitled “No shame”, which concluded that “Scientists are not afraid to question the big ideas. They are not afraid to open themselves to public scrutiny. And they should not be afraid to be wrong.”

Nature caught us off guard, not once but twice, when animals brought low the world’s mightiest machine

That remark in Nature was a positive outcome for CERN from a potentially embarrassing episode, but nature of another kind caught us off guard, not once but twice, when animals brought low the world’s mightiest machine. First, breadcrumbs and feathers led us to believe that a bird had had a lucky escape when it tripped an electrical substation. Later, a pine marten, which also caused a power outage after gnawing through a live cable, was not so lucky. It has now joined the gallery of animals that have met unusual ends in the Rotterdam Museum of Natural History.

A coiled wire sculpture hanging from a ceiling

There were also visitors. Endless visitors, from school children to politicians and from pop stars to artists. When I made a return visit to Antony Gormley’s London studio, after having given him a tour of CERN, he spontaneously presented me with one of his pieces. Feeling Material XXXIV – a metal sculpture that’s part of a series giving an impression of the artist’s body – now hangs proudly in CERN’s main building.

There was an incredible moment at one of the TEDxCERN events we organized when Will.i.am joined two local children’s choirs for a rendition of his song “Reach for the Stars”. And there were many visits from the late landscape architect Charles Jencks and Lily Jencks, who produced a marvellously intelligent design for a new visitor centre in the form of a cosmic Ouroboros – like a snake biting its own tail, it took the shape of two mirror-image question marks forming a circle. One of my only regrets is that we were unable to fund its construction.

For a physicist-turned-science-communicator such as myself, there was no better place to be than at my desk through the opening years of the 21st century. CERN is a unique and remarkable institution that shows what humanity is capable of when differences are cast aside, and we focus on what we have in common. To paraphrase Charles Jencks, to whom I’m leaving the last word, CERN is perhaps the last bastion of the enlightenment.

The post <em>Angels & Demons</em>, Tom Hanks and Peter Higgs: how CERN sold its story to the world appeared first on Physics World.

]]>
Feature James Gillies recalls the drama of his time as head of communications at CERN, which turns 70 this year https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Gillies-Hanks-0902030_03.jpg newsletter
Fluorescent dye helps reveal the secrets of ocean circulation https://physicsworld.com/a/fluorescent-dye-helps-reveal-the-secrets-of-ocean-circulation/ Mon, 22 Jul 2024 12:45:37 +0000 https://physicsworld.com/?p=115634 For the first time, researchers have directly measured the upwelling of cold, deep water towards the ocean surface

The post Fluorescent dye helps reveal the secrets of ocean circulation appeared first on Physics World.

]]>
Seawater located more than 2 km below the ocean’s surface drives the oceanic circulation that helps regulate the Earth’s climate. At these depths, turbulent mixing drives water towards the surface in a process called upwelling. How quickly this upwelling happens dictates how carbon and heat from the ocean are exchanged with the atmosphere.

It has been difficult to directly test how carbon storage in the ocean is controlled by deep-sea mixing processes. But with the help of a non-toxic fluorescein dye, a research team headed up at UC San Diego’s Scripps Institution of Oceanography has now directly measured cold, deep-water upwelling along the slope of a submarine canyon in the Atlantic Ocean.

Oceanic circulation a key phenomenon

The Earth relies on large-scale ocean circulation – known as conveyor belt circulation – to maintain balance. In this process, seawater becomes cold and dense near the poles and sinks into deep oceans, eventually rising back up elsewhere and becoming warm again. The cycle is then repeated. This natural mechanism helps to maintain a regular turnover of heat, nutrients and carbon, which underpins marine ecosystems and the ocean’s ability to mitigate human-driven climate change. However, the return of cold water from the deep ocean to the surface via upwelling has been difficult to measure.

Back in the 1960s, oceanographer Walter Munk predicted that the average speed of upwelling was 1 cm/day. But while upwelling at this speed would transport large volumes of water, directly measuring this rate across entire oceans is not feasible. Munk also suggested that upwelling was caused by turbulent mixing from internal waves breaking under the ocean’s surface; but more modern measurements have shown that turbulence is highest near the seafloor.

This created a paradox: if turbulence is highest at the seafloor, it would push cold water down instead of up, making the bottom waters colder and denser. But field observations confirm that the deep ocean is not simply filling up with cold, dense water from the poles.

Direct evidence of diapycnal upwelling

In the last few years, a new theory has surfaced. Namely, that it is the steep slopes on the ocean’s seafloor (such as the walls of underwater canyons) that are responsible for the turbulent mixing that causes upwelling, because they provide the right type of turbulence to move water upwards.

Fluorescent dye in a barrel

To investigate this theory, first author Bethan Wynne-Cattanach and colleagues used a fluorescein dye to investigate the upwelling across isopycnals (layers of constant density). The research was conducted off the coast of Ireland at a 2000 m-deep canyon in the Rockall Trough.

The researchers released over 200 l of fluorescein dye 10 m above the canyon floor, where the local water temperature was 3.53 °C. They used a fastCTD (FCTD) rapid profiler housing a fluorometer (with a sensitivity down to the parts-per-billion range) to investigate how the dye moved at depths down to 2200 m. The FCTD also carried a micro-conductivity probe to assess the dissipation rate of temperature variance – a key metric for determining turbulent mixing.

The team tracked the dye for 2.5 days. During this time, the dye’s movements showed a turbulence-driven bottom-focused diapycnal (across the isopycnals) upwelling along the slope of the canyon. They found that the flow was much faster than the original estimates, with measurements showing that the upwelling occurred at a rate of around 100 m per day. The researchers note that this first direct measurement of upwelling and its rapid speed, combined with measurements of downwelling in other parts of the oceans, suggests that there are upwelling hotspots.
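
To get a rough sense of scale, the measured rate can be compared directly with Munk’s basin-average estimate using only the two figures quoted above – a minimal sketch in Python:

```python
# Rough scale comparison using the rates quoted in the article:
# Munk's 1960s basin-average estimate (~1 cm/day) versus the ~100 m/day
# measured along the canyon slope in the dye experiment.
munk_rate_m_per_day = 0.01       # 1 cm/day expressed in metres per day
measured_rate_m_per_day = 100.0  # observed along the canyon slope

ratio = measured_rate_m_per_day / munk_rate_m_per_day
print(f"Slope upwelling is roughly {ratio:,.0f} times faster than the basin average")
# ~10,000x faster -- consistent with upwelling being concentrated in hotspots
# rather than spread evenly across the ocean interior.
```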

The overarching conclusion of the study is that mixing of ocean waters at topographic features – such as canyons – leads to a globally significant upwelling, and that upwelling within canyons could have a more significant role in overturning deep water than previously thought. Given the number of submarine canyons across the globe, previous global estimates, which were based on much weaker upwelling velocities, may also be too low.

This research is published in Nature.

The post Fluorescent dye helps reveal the secrets of ocean circulation appeared first on Physics World.

]]>
Research update For the first time, researchers have directly measured the upwelling of cold, deep water towards the ocean surface https://physicsworld.com/wp-content/uploads/2024/07/22-07-24-Scripps-Oceanography-team.jpg newsletter1
Why we need gender equality in big science https://physicsworld.com/a/why-we-need-gender-equality-in-big-science/ Mon, 22 Jul 2024 10:00:34 +0000 https://physicsworld.com/?p=115446 Elizabeth Pollitzer says measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures

The post Why we need gender equality in big science appeared first on Physics World.

]]>
Investments in “big-science” projects, whether it’s the next particle collider or a new synchrotron, are often justified in terms of the benefits for science, such as the discovery of a new particle or the opening of a new vista on the cosmos. The positive impact of large facilities on society and the economy are often cited too, such as spin-off technologies in medical physics. Gender equality, however, is rarely acknowledged as a necessary objective when building these multi-billion-euro facilities or investing in the research required to develop them. That lack of focus on gender equality is something that I believe must change.

The lack of gender-based targets for big science is laid bare in a tool created as part of the European Union’s Horizon 2020 funding programme. Produced by the Research Infrastructure Impact Assessment Pathways project, it assesses the impact of research infrastructures on the economy and society via 122 “impact indicators” in four areas: human resources; policy; society; and economy and innovation. But only one indicator – contribution to gender balance in society – gives any mention to gender equality.

Yet improvements can be made when it comes to supporting female scientists in big science. Take the EU-wide ATTRACT project, which funds the development of breakthrough technologies via proof-of-concept projects. It is led by large research organizations such as the CERN particle-physics laboratory near Geneva, the X-ray Free Electron Laser in Hamburg, Germany, and the European Southern Observatory. Between 2018 and 2020, ATTRACT supported 170 projects, half of which focused on health, environment and biological and related sciences. However, only 11% of the funded ATTRACT projects had a woman as the principal investigator, even though women receive almost half of doctoral degrees in those areas.

Such numbers tell us we have a long way to go. After all, big-science facilities receive significant amounts of public money and employ thousands of people in different professional roles. We need to promote big science as a career destination not only for science graduates but also those in law, management and policy. Monitoring gender balance among the staff and users of research infrastructures and the members of big-science projects is crucial to ensuring that women graduates, who outnumber men in Europe, see big science as a place where they can thrive professionally.

In that regard, there have been some positive developments. The EU’s €96bn Horizon Europe programme – the successor to Horizon 2020 – now requires that all benefiting organizations, many of which participate in big-science projects, have a gender equality plan. Several industry sectors are also doing lots to integrate equality, diversity and inclusion into human resources practices to attract talent.

Tapping into the talent pool

But more needs to be done. That’s why since 2020 the Women in Big Science Business Forum (WBSBF) has been promoting gender equality as part of the Big Science Business Forum (BSBF). The WBSBF was set up by a group of people at Fusion for Energy, which manages the EU’s contribution to the ITER fusion experiment being built in France. The BSBF itself is trying to advance gender equality across research infrastructures, universities and supplier companies. For instance, since research infrastructures distribute billions of euros of public money in procurement and investment, they can adopt procurement processes that question the supplier’s compliance with gender-equality legislation and ask for examples of efforts they have made to recruit and retain women.

“Gender budgeting” is a tool that big-science projects can also use to assess how their budget decisions impact gender equality. That could mean eliminating the gender pay gap, making provisions for equal parental leave or ensuring that research grants are the same for projects whether led by women or men. Budgets could also be earmarked to help staff achieve a work–life balance. I think it’s important as well that we improve training in gender equality and that we “gender proof” recruitment by identifying and removing potential biases to assessment criteria that could favour men. Big-science projects can also make use of the European Charter & Code for Researchers, which includes a dozen gender-equality indicators as part of the EU initiative “human resources strategy for researchers”.

At the BSBF meeting in Granada in 2022, the WBSBF launched a recognition award to acknowledge, celebrate and promote successful measures taken by big-science organizations to increase the proportion of women among their staff and users of research infrastructures. There are three categories: “advances in organizational culture”; “collaborative partnerships”; and “societal impact”. Some 13 organizations applied for an award in 2022, with organizations such as XFEL and CERN being recognized.

The WBSBF is building on that progress at this year’s BSBF event in Trieste, Italy, in October with activities on socially responsible procurement, gender balance in work policies, and the socioeconomic impact of investment in big science. There will also be a live-streamed round-table session with leaders from big science. At Trieste, we’ll also be introducing a WBSBF trainee scheme, which will place three to five students or recent graduates on in-house trainee programmes run by labs, companies or intergovernmental bodies taking part in BSBF. Those roles don’t have to be scientific or technical, but could also be in, say, legal, communication or human resources.

Big science needs more women and I hope these initiatives will help to turn the tide. The talent pool for women is already there and big science must get better at tapping into it, not only for the discoveries that lie ahead but also for building a better relationship with society.

  • The WBSBF group comprises Francesca Fantini, Aris Apollonatos, Romina Bemelmans, Silvia Bernal Blanco, Carmen Casteras Roman, Ana Belen Del Cerro Gordo, Pilar Rosado, Maria Cristina Sordilli and Nikolaj Zangenberg
  • To find out more about the recognition award and the WBSBF trainee scheme, e-mail wbsbf@f4e.europa.eu

The post Why we need gender equality in big science appeared first on Physics World.

]]>
Opinion and reviews Elizabeth Pollitzer says measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Forum-Pollitzer_SML0681.jpg newsletter
The eyes have it: how to spot the difference between a deepfake portrait and a real picture https://physicsworld.com/a/the-eyes-have-it-how-to-spot-the-difference-between-a-deepfake-portrait-and-a-real-picture/ Sat, 20 Jul 2024 09:00:52 +0000 https://physicsworld.com/?p=115626 How do you spot a deepfake image of a person? The answer might be to look into their eyes.

The post The eyes have it: how to spot the difference between a deepfake portrait and a real picture appeared first on Physics World.

]]>
How do you spot a deepfake image of a person? The answer might be to look into their eyes.

That is according to astronomers at the University of Hull in the UK, who say that AI-generated pictures can be unmasked by analysing human eyes in the same way that astronomers study images of galaxies.

The team analysed reflections of light on the eyeballs of people in real and AI-generated images.

They then employed methods typically used in astronomy to quantify the morphological features of the reflections in both eyes.

“To measure the shapes of galaxies, we analyse whether they’re centrally compact, whether they’re symmetric, and how smooth they are,” notes Hull astrophysicist Kevin Pimbblet. “We analyse the light distribution.”

They found that fake images often lacked consistency in the reflections between each eye, whereas real images generally show the same reflections in both eyes.
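
The Hull team’s exact pipeline isn’t described in detail here, but the basic idea – quantify the pattern of reflections in each eye and check whether the two agree – can be sketched in a few lines. The function names and the simple thresholding metric below are illustrative assumptions, not the researchers’ code:

```python
import numpy as np

def reflection_map(eye_patch, threshold=0.8):
    """Binary map of bright (specular) pixels in a grayscale eye crop."""
    patch = eye_patch.astype(float) / eye_patch.max()
    return patch > threshold

def consistency_score(left_eye, right_eye):
    """Fraction of pixels on which the two eyes' reflection maps agree (0 to 1)."""
    left, right = reflection_map(left_eye), reflection_map(right_eye)
    return float(np.mean(left == right))

# Hypothetical usage with two same-sized grayscale crops (NumPy arrays):
#   score = consistency_score(left_crop, right_crop)
# A genuine photo, lit by the same sources in both eyes, should score close to 1;
# a markedly lower score would flag the image for closer inspection.
```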

Yet Pimbblet warns that the technique is “not a silver bullet” when it comes to detecting fake images.

“There are false positives and false negatives [so] it’s not going to get everything,” he adds. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

The work was presented this week at the Royal Astronomical Society’s National Astronomy Meeting in Hull.

The post The eyes have it: how to spot the difference between a deepfake portrait and a real picture appeared first on Physics World.

]]>
Blog How do you spot a deepfake image of a person? The answer might be to look into their eyes. https://physicsworld.com/wp-content/uploads/2024/07/DeepFakes_19-07-2024.jpg
Diamond dust for MRI, 4D printing creates advanced devices https://physicsworld.com/a/diamond-dust-for-mri-4d-printing-creates-advanced-devices/ Fri, 19 Jul 2024 13:20:37 +0000 https://physicsworld.com/?p=115613 Mahdi Bodaghi and Jelena Lazovic Zinnanti are our podcast guests

The post Diamond dust for MRI, 4D printing creates advanced devices appeared first on Physics World.

]]>
New and exciting technologies feature in this episode of the Physics World Weekly podcast.

Our first guest is the neuroscientist and physicist Jelena Lazovic Zinnanti, who recalls how she discovered (by accident) that nanometre-sized diamond particles shine brightly in magnetic resonance imaging (MRI) experiments. Based at the Max Planck Institute for Intelligent Systems, she explains how this diamond dust could someday replace gadolinium as a contrast agent in MRI medical scans.

This episode also features an interview with Mahdi Bodaghi of Nottingham Trent University, who is an expert in 4D and 3D printing. He talks about the engineering principles that guide 4D printing and how the technique can be used in a wide range of applications including the treatment of coronary heart disease and the design of flatpack furniture. Bodaghi also explains how 3D printing can be used to create self-healing asphalt.

  • Mahdi Bodaghi is on the editorial board of the journal Smart Materials and Structures. It is published by IOP Publishing, which also brings you Physics World.

The post Diamond dust for MRI, 4D printing creates advanced devices appeared first on Physics World.

]]>
Podcasts Mahdi Bodaghi and Jelena Lazovic Zinnanti are our podcast guests https://physicsworld.com/wp-content/uploads/2024/07/18-7-24-Mahdi-and-Jelena.jpg newsletter
Robotic radiotherapy could ease treatment for eye disease https://physicsworld.com/a/robotic-radiotherapy-could-ease-treatment-for-eye-disease/ Fri, 19 Jul 2024 08:30:04 +0000 https://physicsworld.com/?p=115586 A single dose of stereotactic radiotherapy could reduce the number of eye injections needed to effectively control age-related macular degeneration

The post Robotic radiotherapy could ease treatment for eye disease appeared first on Physics World.

]]>
A single dose of radiation can reduce the number of eye injections needed to treat patients with neovascular age-related macular degeneration (AMD). That’s the conclusion of a UK-based clinical trial of more than 400 patients with the debilitating eye disease.

AMD affects 8% of adults globally, and is a leading cause of central blindness in people over 60 in developed nations. Neovascular (or wet) AMD, the most advanced and aggressive form of the disease, causes new blood vessels to grow into the macula, the central region of the retina – the light-sensing layer of cells at the back of the eye. Leakage of blood and fluid from these abnormal vessels can lead to a rapid, permanent and severe loss of sight.

The condition is treated with injections of drugs into the eye, with most people requiring an injection every 1–3 months to effectively control the disease. The drugs inhibit vascular endothelial growth factor (VEGF), a key driver of vascular leakage and proliferation. Reporting their findings in The Lancet, the investigators suggest that stereotactic radiotherapy (SRT) could eliminate 1.8 million anti-VEGF injections per year across high-income countries.

STAR treatment

The STAR (stereotactic radiotherapy for wet AMD) study, led by Timothy Jackson of King’s College London, is a double-blinded trial that enrolled patients with previously treated chronic active neovascular AMD from 30 hospitals in the UK. All participants received the robotic treatment, with or without delivery of 16 Gy of radiation, at one of three UK treatment centres.

The team used a robotically controlled SRT system that delivers three highly collimated 5.33 Gy radiation beams, targeted to avoid lens irradiation and overlapping at the macula. To stabilize the eye being treated, a suction-coupled contact lens was secured to the cornea and connected to a positioning gimbal with infrared reflectors. The SRT device tracked the reflectors, stopping the treatment if the eye moved out of position.
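
A quick, illustrative bit of dose bookkeeping (not taken from the STAR protocol itself) shows why several crossing beams are used rather than one: tissue along any single entry path receives only that beam’s dose, while the macula, where all three overlap, receives the full prescription.

```python
# Illustrative dose bookkeeping for the three-beam geometry described above.
beam_dose_gy = 5.33   # dose delivered by each collimated beam
n_beams = 3

dose_at_macula = n_beams * beam_dose_gy   # all beams overlap at the target
dose_along_each_path = beam_dose_gy       # tissue on any single beam path

print(f"Target dose: {dose_at_macula:.0f} Gy")           # ~16 Gy at the macula
print(f"En-route dose: {dose_along_each_path:.2f} Gy")   # per entry path
```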

The researchers randomly allocated 274 patients to receive the 16 Gy SRT treatment and 137 to receive identical sham treatment without radiation. Immediately afterwards, all patients received a standard dose of the anti-VEGF drug ranibizumab injected into the eye.

After radiotherapy, participants visited their recruiting hospital for follow-up exams every four weeks up to the 96-week primary endpoint. During these review sessions, patients received an intraocular injection of ranibizumab each time they needed retreatment. The researchers are continuing assessments at three and four years to determine the safety and long-term efficacy of this approach.

The final study analysis included 241 participants in the 16 Gy SRT group and 118 in the sham group, while the 409 patients who received treatment formed the safety population. The findings are encouraging: patients who received SRT required a mean of 10.7 injections after 96 weeks, compared with 13.3 injections for the conventional drug-only group.

Reducing the burden of anti-VEGF treatment would be highly beneficial for both patients and hospitals. Preliminary analyses suggest that the cost of the SRT treatment may be more than offset by the reduction in injections. The authors plan to prepare a detailed cost evaluation.

Visual outcomes for the two cohorts were comparable. While the sham group had slightly less worsening of best-corrected visual acuity at two years, there was no statistically significant difference between the two. Systemic safety was also comparable, with similar rates of adverse events in both groups. Evaluation using multimodal imaging determined that 35% of the SRT-treated participants and 12% of the sham group had retinal microvascular abnormalities.

The study outcome supports the findings of a similar phase II clinical trial, INTREPID, whose results were published in 2020. The INTREPID study of 230 randomized patients showed that a single radiation dose of 16 or 24 Gy administered by SRT reduced injections by 29% for the ensuing 12 months, compared with the control group.

Jackson tells Physics World that the researchers are currently analysing data from patients reporting for their three- and four-year anniversary examinations. The data suggest increasing benefit with respect to injection frequency over time. The investigators also note that the benefits of SRT may be eroded by the introduction of newer intravitreal drugs such as faricimab, or higher doses of existing anti-VEGF drugs, which have longer dosing intervals than ranibizumab.

Writing in an accompanying commentary in The Lancet, Gui-shuang Ying and Brian VanderBeek of the University of Pennsylvania Perelman School of Medicine state: “The STAR study has indicated a potential alternative treatment paradigm that appears to significantly reduce treatment burden without impacting visual acuity outcomes over two years, but additional gaps in knowledge need to be addressed before the widespread adoption of this therapy.”

They add: “If the reduction in anti-VEGF injection rate, non-inferior visual acuity results and acceptable safety profile of SRT remain through future studies, the STAR study will be a foundational piece in advancing a promising adjunctive therapy forward. Patients eagerly await the day when the injection burden is reduced, and SRT might well be a path to getting there.”

The post Robotic radiotherapy could ease treatment for eye disease appeared first on Physics World.

]]>
Research update A single dose of stereotactic radiotherapy could reduce the number of eye injections needed to effectively control age-related macular degeneration https://physicsworld.com/wp-content/uploads/2024/07/19-07-24-eye-exam-862563678-iStock_Rawpixel.jpg newsletter1
Speedy stars point to intermediate-mass black hole in globular cluster https://physicsworld.com/a/speedy-stars-point-to-intermediate-mass-black-hole-in-globular-cluster/ Thu, 18 Jul 2024 13:20:46 +0000 https://physicsworld.com/?p=115608 Hubble observation is best evidence yet for an elusive class of black holes

The post Speedy stars point to intermediate-mass black hole in globular cluster appeared first on Physics World.

]]>
Omega Centauri

The best evidence yet for an intermediate-mass black hole has been claimed by an international team of astronomers. Maximilian Häberle at the Max Planck Institute for Astronomy in Heidelberg and colleagues saw the gravitational effects of the black hole in long-term observations of the stellar cluster Omega Centauri. They predict that similarly sized black holes could exist at the centres of other large, dense stellar clusters – which could explain why so few of them have been discovered so far.

Black holes are small and extraordinarily dense regions of space with gravitational fields so strong that not even light can escape. Astronomers know of many stellar-mass black holes, which weigh in at less than about 100 solar masses. They are also aware of supermassive black holes, which have hundreds of thousands to billions of solar masses and reside at the centres of galaxies.

However, researchers know very little about the existence (or otherwise) of intermediate-mass black holes (IMBHs) in the 100–100,000 solar mass range. While candidate IMBHs have been spotted, no definite discoveries have been made. This raises questions about how supermassive black holes were able to form early in the history of the universe.

Seeding supermassive growth

“One potential pathway for the formation of these early supermassive black holes is by the merger of intermediate mass ‘seed’ black holes,” explains Häberle. “However, the exact mass and frequency of these seeds is still unknown. If we study IMBHs in the present-day, local universe, we will be able to differentiate between different seeding mechanisms.”

Häberle’s team examined the motions of stars within the globular cluster Omega Centauri, located around 17,000 light-years from Earth. Containing roughly 10 million stars, the cluster is widely believed to be the core of an ancient dwarf galaxy that was swallowed by the Milky Way. This would make it a prime target in the ongoing hunt for an IMBH within our own galaxy.

Häberle’s team analysed a series of images of Omega Centauri taken by the Hubble Space Telescope over a period of 20 years. By comparing the relative positions of the cluster’s stars in successive images, they identified stars that were moving faster than expected. Such fast-moving stars would be strong evidence that an IMBH is lurking somewhere in the cluster.

“This approach is not new, but we combined improved data reduction techniques with a much larger dataset, containing more than 500 individual images taken with the Hubble Space Telescope,” Häberle explains. “Therefore, our new catalogue is several times larger and more precise than all previous efforts.”

While some previous studies have presented evidence of an IMBH at the centre of Omega Centauri, the gravitational influence of unseen stellar-mass black holes could not be ruled out.

Seven speedy stars

Häberle’s team identified a total of seven stars at the very centre of Omega Centauri that appear to be moving much faster than the cluster’s escape velocity. The researchers calculated that, without some immense gravitational intervention, each of these stars would have left the centre of the cluster in less than 1000 years – a small blip on astronomical timescales – before escaping the cluster entirely.

“The best explanation why these stars are still around in the centre of the cluster is that a massive object is gravitationally pulling on them and preventing their escape,” Häberle claims. “The only object that can be massive enough is an intermediate-mass black hole with at least 8200 solar masses.”
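
The logic behind such a lower bound can be illustrated with a rough escape-velocity estimate: a star moving at speed v a distance r from the centre stays bound only if the enclosed mass satisfies M ≥ v²r/2G. The speed and radius in the sketch below are assumed, illustrative values, not figures from the study:

```python
# Back-of-the-envelope bound from the escape-velocity condition
# v <= v_esc = sqrt(2*G*M/r), rearranged to M >= v**2 * r / (2*G).
# The stellar speed and radius are illustrative assumptions only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres

v = 40e3             # assumed stellar speed: 40 km/s
r = 0.05 * PARSEC    # assumed distance from the cluster centre

M_min = v**2 * r / (2 * G)
print(f"Minimum enclosed mass: about {M_min / M_SUN:,.0f} solar masses")
# Of order 10^4 solar masses -- comparable to the ~8200 solar-mass lower limit
# quoted above (the exact number depends on the assumed speed and radius).
```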

The study makes Omega Centauri the best candidate in the Milky Way for having an IMBH. If confirmed, the IMBH would be the most massive black hole in the Milky Way after Sagittarius A* – the supermassive black hole residing at our galaxy’s centre.

“To draw further conclusions and gain a statistical sample, we will now need to extend this research to other massive star clusters, where there might be still some hidden black holes,” Häberle says. The astronomers now hope that similar observations could soon be made using instruments including the Multi Unit Spectroscopic Explorer at the Very Large Telescope, and the James Webb Space Telescope’s Near-IR Spectrograph.

The research is described in Nature.

The post Speedy stars point to intermediate-mass black hole in globular cluster appeared first on Physics World.

]]>
Research update Hubble observation is best evidence yet for an elusive class of black holes https://physicsworld.com/wp-content/uploads/2024/07/18-07-24-intermediate-mass-back-hole-list.jpg newsletter1
NASA cancels delay-hit $450m VIPER lunar prospector https://physicsworld.com/a/nasa-cancels-delay-hit-450m-viper-lunar-prospector/ Thu, 18 Jul 2024 10:58:56 +0000 https://physicsworld.com/?p=115598 The mission will now be disassembled with components used for future Moon missions

The post NASA cancels delay-hit $450m VIPER lunar prospector appeared first on Physics World.

]]>
NASA has cancelled a major Moon mission despite spending almost half a billion dollars on it. The Volatiles Investigating Polar Exploration Rover (VIPER) project was originally planned to launch late last year, but in 2022 NASA delayed it until late 2024 with further issues putting the launch date back until 2025. NASA now plans to disassemble VIPER and reuse the craft’s instruments and components on future Moon missions.

VIPER, about the size of a golf cart, would have prospected the lunar south pole for water ice in the soil with the aim of creating resource maps for future missions to the Moon. The craft would have spent 100 days roaming tens of kilometres, using a neutron spectrometer to detect water molecules below the lunar surface. Another component of the mission was to use a drill to dig up the soil and determine the composition and concentration of the material via two other spectrometers.

NASA had already spent $450m on VIPER and the craft was undergoing testing when the decision was made. NASA says it will save about $85m by cancelling the mission, while continuing with it would have threatened the “cancellation or disruption” of other Commercial Lunar Payload Services (CLPS) missions. CLPS involves NASA working with US companies to build and launch lunar missions.

NASA will now “pursue alternative methods” to accomplish some of VIPER’s goals. The Polar Resources Ice Mining Experiment-1 (PRIME-1), for example, is scheduled to land at the south pole later this year aboard the lunar lander IM-2 built by Intuitive Machines as part of the CLPS programme. PRIME-1 will drill into the Moon’s surface where it lands and use a mass spectrometer to measure ice samples.

“We are committed to studying and exploring the Moon for the benefit of humanity through the CLPS program,” notes Nicola Fox, associate administrator for NASA’s Science Mission Directorate. “The agency has an array of missions planned to look for ice and other resources on the Moon over the next five years. Our path forward will make maximum use of the technology and work that went into VIPER, while preserving critical funds to support our robust lunar portfolio.”

Phil Metzger, director of the Stephen W. Hawking Center for Microgravity Research and Education at the University of Central Florida, said on X that the cancellation is a “bad mistake” and the mission would have been “revolutionary”. “VIPER was going to be an important step towards answering the question ‘are we alone in the cosmos?’” he says. “Other missions don’t replace what is lost here.”

Fox says NASA has already notified Congress of the decision, but Metzger now wants Congress to find the money to continue the mission. “[The cancellation] will be harmful to sustainability in space exploration, to geopolitical challenges in space, and to the most important, science,” he adds.

The post NASA cancels delay-hit $450m VIPER lunar prospector appeared first on Physics World.

]]>
News The mission will now be disassembled with components used for future Moon missions https://physicsworld.com/wp-content/uploads/2024/07/18_07_24-VIper-2-small.jpg
Industrial cryogenics and nanopositioning: into the fast-lane for quantum innovation https://physicsworld.com/a/industrial-cryogenics-and-nanopositioning-into-the-fast-lane-for-quantum-innovation/ Thu, 18 Jul 2024 09:21:53 +0000 https://physicsworld.com/?p=115555 Germany’s attocube is leveraging its deep domain knowledge in quantum science and technology to support industrial and academic customers

The post Industrial cryogenics and nanopositioning: into the fast-lane for quantum innovation appeared first on Physics World.

]]>
The nascent quantum technology supply chain has reached an inflection point as companies large and small – among them household names like Google, Microsoft and IBM as well as a new wave of ambitious start-up ventures – shift gears to translate their applied research endeavours into at-scale commercial opportunities in quantum computing, quantum communications and quantum metrology. At the heart of this emerging quantum ecosystem is attocube, a German manufacturer of specialist nanotechnology solutions for research and industry, which is aligning its product development roadmap to deliver the R&D and manufacturing tools needed to support the scale-up and commercialization of next-generation quantum technologies.

“We are facilitators of cutting-edge quantum R&D and technology innovation,” explains Khaled Karraï, co-founder and scientific director at attocube. That starts and ends, of course, with a granular understanding of the customers’ evolving requirements – not least when it comes to navigating the complex transition from research lab to manufacturing and, ultimately, long-term commercial impact. “Quantum is in our DNA at attocube, so we are extremely well positioned to service the needs of the quantum supply chain,” argues Karraï. “After all,” he adds, “we have worked hand-in-hand with quantum scientists in the academic world for the past 30 years. Many of those pioneering researchers are now in senior R&D and engineering positions in industry – and they’re coming to us for the enabling technologies they’ll need for the next stage on the quantum roadmap.”

Multiphysics innovation

By extension, the supporting product portfolio at attocube covers a lot of bases, including compact and low-vibration closed-cycle cryostats (with low-heat-generation compressors) and precision-motion components (such as nanopositioners and displacement-measuring interferometers) to align, operate and test advanced quantum components/subsystems. Downstream, those attocube products are put to work across a range of operating conditions – from ambient to ultralow temperatures, from low to ultrahigh vacuum, as well as within tightly constrained magnetic fields – to maintain the delicate quantum states and processes – think single-photon sources, trapped ions or superconducting or photonic qubits – within the core building blocks of quantum computing systems and quantum communication networks.

“Low-temperature nanopositioning and low-vibration cryogenics are among our core competencies at attocube,” notes Karraï. Equally important is what he calls a “multiphysics and multiengineering mindset” to ensure an integrated approach to product design and engineering. “Take position sensing at the nanoscale,” says Karraï. “This is a tricky enough proposition at room temperature, but it requires all sorts of innovative thinking when you throw in additional operating constraints like cryogenic temperatures, high vacuum and the need for miniaturization.”

Putting the focus on translation

Meanwhile, as quantum technology companies eye sustainable commercial opportunities over the near and medium term, the focus must necessarily shift to “productization” and hard-and-fast industrial metrics like scalability, reliability, manufacturability, robustness and cost:performance. Along that same coordinate lies a consideration of the operational running costs associated with first-generation quantum systems. A case in point: the energy-efficiency of the quantum repeaters that will be required every 100 km or so to boost optical signals within the quantum communication network – and, ultimately, across the quantum Internet.

Khaled Karraï, Attocube

“Take a scenario where you want to cool an optical detector in a quantum repeater to, say, 2 K,” explains Karraï. “Today, you have to put 3 kW of energy in to generate something like 20 mW of cooling power. Now imagine that imbalance inside every quantum repeater within the long-haul fibre-optic network – the planet will be glowing.”
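
A quick back-of-the-envelope check, using only the figures Karraï quotes, shows just how stark that imbalance is:

```python
# Ratio of cooling power delivered at 2 K to electrical power drawn,
# using the figures quoted by Karraï above.
input_power_w = 3000.0     # 3 kW of electrical input
cooling_power_w = 0.020    # ~20 mW of cooling at 2 K

efficiency = cooling_power_w / input_power_w
print(f"Cooling efficiency: {efficiency:.1e}")              # ~6.7e-06
print(f"Input per watt of cooling: {1/efficiency:,.0f} W")  # ~150,000 W
```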

The answer, he believes, is the recently launched attoCMC, a compact and rack-mountable cryostat for in-field deployment (see “Working together to realize quantum advantage”, below). “With the attoCMC, we are delivering an order-of-magnitude reduction in energy consumption for the cryogenic subsystem,” claims Karraï. “This represents a step-change in energy-efficiency for quantum technology companies, giving them access to enhanced cooling capabilities for their distributed quantum computing and networking systems.”

Better together

Notwithstanding an over-arching emphasis on platform technologies that “unlock the creativity, ingenuity and imagination of our end-users”, Karraï highlights another key differentiator of the attocube working model – specifically, a vendor–customer relationship that moves beyond the transactional into the realm of collaborative R&D and co-development.

attoCMC cryostat

It helps, in this regard, that many quantum technology companies have spun out from academia, subsequently recruiting specialists in industrialization and scale-up from other, more established tech sectors. “As a specialist equipment provider,” notes Karraï, “we need to be able to talk on a couple of levels with these customers – engaging their scientists and engineers on the one hand as well as the new breed of manufacturing specialists who want ready-made cryogenic or nanopositioning solutions built to their custom specification.”

The secret of success, argues Karraï, lies in an open, honest dialogue upfront between equipment supplier and customer. “It’s that informed conversation around technical requirements that’s so valuable at the initiation point,” he concludes. “Many of our product engineers have PhDs in quantum science and engineering, so are the best anchor-point for that dialogue around requirements-gathering. From here, our industrial customers quickly realize they can learn an awful lot by tapping into our collective domain knowledge in quantum technologies.”

Working together to realize quantum advantage

Quandela is a European start-up company that aims to accelerate the industrial and commercial roll-out of photonic quantum computing technologies. The Quandela development programme spans on-premises quantum computing systems (for deployment in data centres and supercomputing facilities); Quandela Cloud, a “quantum computing as a service” offering; and co-development (with industry partners) of quantum software for diverse use-cases in sectors like logistics, automotive, pharmaceuticals and finance. Here Niccolo Somaschi, CEO of Quandela, tells Physics World about his team’s strategic technology relationship with attocube.

What makes attocube a preferred supplier for Quandela?

Niccolo Somaschi, Quandela

It’s very simple: attocube prioritizes engineering excellence across a specialist product offering that guarantees bulletproof reliability for academic and industrial customers alike. Equally important, the attocube product development team listens to the market and, by extension, is ideally positioned to deliver industry-ready cryogenic and nanopositioning solutions that will enable quantum computing and quantum networking technologies to be deployed at-scale. It’s a visionary approach to product innovation.

How important is attocube’s long track-record of working with quantum researchers in academia?

Like attocube, Quandela was originally formed on the back of university research – in our case, translating academic quantum science and proof-of-concept R&D into industrial outputs and commercial growth. Quandela scientists and engineers are also long-time collaborators with attocube, relying on the company’s closed-cycle cryostats – specifically, the attoDRY800 and the attoDRY1000 – to support our previous academic research efforts. Today, seven years after its launch, Quandela is still a research-intensive company. As such, attocube remains a core technology partner, delivering the advanced research tools we need in the R&D lab, while working with us to navigate the transition to industry-grade solutions for the quantum supply chain.

How are attocube products supporting the Quandela development programme?

A case in point is the attoCMC, a new compact and rack-mountable cryostat system (with a base temperature of 2.3 K). Quandela saw the need for a cryogenic solution like the attoCMC even before its conception, so we were pleased to be among the first customers to evaluate early-stage prototypes. The attoCMC is now integrated as a core building block in Prometheus, our stand-alone single-photon source that’s designed to take quantum computing, quantum communications and quantum metrology applications out of the lab and into the field. Put simply, Prometheus delivers high-quality photonic qubits – on demand, deterministic, indistinguishable – at unprecedented rates, giving academic and industry users access to single photons at the push of a button.

The post Industrial cryogenics and nanopositioning: into the fast-lane for quantum innovation appeared first on Physics World.

]]>
Analysis Germany’s attocube is leveraging its deep domain knowledge in quantum science and technology to support industrial and academic customers https://physicsworld.com/wp-content/uploads/2024/07/web-2024-03-Public-Address-x-OVHCloud-Inauguration-Ordinateur-Quantique-1571.jpg newsletter
Why North America has a ‘tornado alley’ and South America doesn’t https://physicsworld.com/a/why-north-america-has-a-tornado-alley-and-south-america-doesnt/ Thu, 18 Jul 2024 08:00:12 +0000 https://physicsworld.com/?p=115592 There’s a scientific reason why Twisters is set in the US Great Plains rather than Argentina, and it has to do with the Gulf of Mexico

The post Why North America has a ‘tornado alley’ and South America doesn’t appeared first on Physics World.

]]>
My home state of Kansas is famous for being very flat and having lots of tornadoes, so when I read that scientists in nearby Indiana have found a connection between flatness and tornado risk, I was intrigued.

Turns out, it’s not Kansas’ own flatness that’s to blame. Instead, scientists at Purdue University say that its exciting weather is due to the flat surface of the Gulf of Mexico. Together with other geographic factors, they argue, the ocean’s smoothness is what turns Kansas and neighbouring states into an ideal setting for films like the 1996 summer blockbuster Twister and its just-released sequel Twisters.

The scientists’ argument begins with a piece of conventional wisdom. The “tornado alley” of the North American Great Plains is commonly attributed to two geographic features: the Rocky Mountains to the west and the Gulf of Mexico and the Caribbean Sea to the south. When trade winds hit the east slope of this north–south mountain range, they turn northward and increase in speed while developing what meteorologists call “anticyclonic shear vorticity” – a fancy way of saying that the air starts to rotate clockwise. At the same time, southerly winds from the tropical Gulf pump warm, moist air into the lowest layer of the atmosphere. Together, these phenomena create conditions that favour severe thunderstorms and the tornadoes they spawn.

There’s just one problem with this story. The central region of South America (Uruguay and parts of Argentina, Paraguay and southern Brazil) is also next to a prominent north–south mountain range: the Andes. It also has a ready source of warm, moist air: the Amazon basin. And it also experiences a lot of severe thunderstorms – more thunderstorms, in fact, than central North America, with thunderclouds that extend further into the atmosphere. But tornadoes are much less common there, and the conventional wisdom can’t explain why.

An extra factor

In their study, which is published in PNAS, Purdue’s Dan Chavas and his then-PhD student Funing Li, together with colleagues at the US National Center for Atmospheric Research, Stony Brook University, and Colorado State University, sought an explanation in a previously overlooked difference between North and South America. While the surface of the Gulf of Mexico and Caribbean Sea is smooth, they noted, the similarly warm-and-moist Amazon basin is heavily forested and contains terrain such as plateaus and highlands. Might this roughness explain the absence of a South American tornado alley?

Photo of Dan Chavas in front of trees and flowers

To test this hypothesis, the scientists performed experiments using a global climate model. In the first experiment, they flattened a computerized version of the Amazon basin to ocean-like smoothness and modelled the resulting tornado potential in central South America. In the second experiment, they did the opposite, filling in the digital Gulf of Mexico and observing how this affected tornado potential in central North America.

The results were striking. The smoothed-out version of South America experienced around twice as many tornadoes as the real-world version. Northeastern Argentina was particularly hard-hit. Conversely, a filled-in Gulf in the model version of North America reduced the number of tornadoes by up to 41%, with the biggest drops seen in the Great Plains and the southeastern US.

As a solution to Kansas’ tornado problem, this finding isn’t terribly useful. Human geoengineers are not going to start filling in the Gulf of Mexico any time soon, and the natural processes that might do it would be cataclysmic. (Let’s just say that a few extra twisters would be the least of our problems.) But the research does have some practical implications. Rampant deforestation is making the Amazon basin smoother. Forest regrowth is making the eastern US slightly rougher. According to the researchers, such changes could affect tornado frequency, though the exact nature of the effect is hard to predict.

“An important question is how terrain and land cover may alter the response of tornadoes in the future, as climate change may shift the large-scale atmospheric circulation and the geographic patterns of severe thunderstorm and tornado activity that it produces,” they write. “We hope our study motivates future research exploring those additional factors.”

The post Why North America has a ‘tornado alley’ and South America doesn’t appeared first on Physics World.

]]>
Blog There’s a scientific reason why Twisters is set in the US Great Plains rather than Argentina, and it has to do with the Gulf of Mexico https://physicsworld.com/wp-content/uploads/2024/07/tornado-149065043-Shutterstock_solarseven.jpg newsletter
Aperiodicity: the dance event bringing non-repeating patterns to life https://physicsworld.com/a/aperiodicity-the-dance-event-bringing-non-repeating-patterns-to-life/ Wed, 17 Jul 2024 15:40:24 +0000 https://physicsworld.com/?p=115584 Matin Durrani reviews Aperiodic – an art-science performance from South West Dance Theatre

The post Aperiodicity: the dance event bringing non-repeating patterns to life appeared first on Physics World.

]]>
We all like order in our lives (well, I do at least) but things rarely operate that way. In fact, the world is full of “aperiodic” order – patterns that aren’t totally repetitive but not completely random either.

Aperiodic patterns can be found in Islamic art, such as the Tomb of Hafez and the Nasr ol Molk mosque in Iran. They were studied in the 17th century by Johannes Kepler and, more recently, by Roger Penrose, famous for his aperiodic “Penrose tilings”.

You can also see aperiodicity in quasicrystals, such as aluminium palladium manganese. In fact, Dan Shechtman won the 2011 Nobel Prize for Chemistry for discovering these materials.

Aperiodicity is now the theme of a festival of art, science, music and performance that’s been taking place this summer in Bristol, UK. It’s included an academic meeting, an exhibition and a wonderfully entertaining dance performance that I attended last week.

Anna Demming and three other dancers stood back to back

Organized by my former Physics World colleague Anna Demming, a physicist and founding director of the South West Dance Theatre, the event was held at the city’s Trinity Centre. It began with a fun lecture on the science of aperiodicity by Sean Dewar, a mathematician from the University of Bristol, followed by 10 short dances that brought the subject alive.

Featuring Demming along with Katarzyna Niznik, Silvia Orazzo and Sebastián Morales Castillo, each dance – which combined classical ballet with breakdancing – was performed to a brilliantly eclectic mix of music ranging from Nina Simone and the BeeGees to Johann Sebastian Bach and Miley Cyrus.

I can’t imagine anyone has ever previously tried turning themes such as “Kepler’s tessellating pentagons”, “the limits of determinism” and “non-local indistinguishability and equivalence” into dance form, but that’s exactly what was on offer at the event.

There was even a nod to Hofstadter’s butterfly accompanied by Puccini’s “O mio babbino caro” (pictured in the image at the top of this blog).

It was great fun – and even if the precise links of each dance with aperiodicity were slightly lost on me, I don’t think that really mattered. It was an event that simply got me – and the rest of the audience as well – thinking.

The post Aperiodicity: the dance event bringing non-repeating patterns to life appeared first on Physics World.

]]>
Blog Matin Durrani reviews Aperiodic – an art-science performance from South West Dance Theatre https://physicsworld.com/wp-content/uploads/2024/07/aperiodic-lores-3.jpg newsletter
Spin-ice superconductors display magnetic nonreciprocity https://physicsworld.com/a/spin-ice-superconductors-display-magnetic-nonreciprocity/ Wed, 17 Jul 2024 13:00:04 +0000 https://physicsworld.com/?p=115566 New structure could be used to make magnetic-field-driven superconducting diodes for low-energy-consumption electronic devices

The post Spin-ice superconductors display magnetic nonreciprocity appeared first on Physics World.

]]>
Researchers in China have fabricated a new hybrid superconducting device from a special type of material known as an artificial spin ice (ASI). The innovative structure, which is made of asymmetric nanomagnets, could be used to build magnetic-field-driven superconducting diodes for use in energy-efficient electronics.

ASIs get their name from natural spin-ice materials, in which magnetic moments at low temperatures adopt the same kind of disordered pattern as the protons in water ice. In these materials, rare-earth ion moments occupy the corners of a tetrahedral lattice in a way that obeys the so-called “ice rules”: two of the moments point into each tetrahedron, while two point out of it. Because no arrangement can satisfy all of the interactions at once, the moments never settle into a single ordered state and the material is said to be geometrically frustrated; ASIs recreate this behaviour using lithographically patterned arrays of nanomagnets.
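As a rough illustration of why the ice rules lead to frustration (a generic counting exercise, not something from the paper), a quick enumeration shows that six of the 16 possible moment configurations on a single tetrahedron satisfy “two in, two out” – a degenerate set of ground states rather than a unique one:

```python
from itertools import product

# Label each of the four corner moments as +1 (pointing into the
# tetrahedron) or -1 (pointing out of it). The ice rule demands
# exactly two in and two out, i.e. the labels sum to zero.
ice_states = [s for s in product((+1, -1), repeat=4) if sum(s) == 0]

print(len(ice_states))  # 6 of the 2**4 = 16 configurations
# This six-fold degeneracy is the hallmark of geometric frustration:
# the system cannot single out one lowest-energy arrangement.
```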

The behaviour of the new ASI-based device is driven by a phenomenon known as the magnetic nonreciprocal effect, in which a material carries current with zero resistance in one direction but shows finite resistance in the opposite direction, with the preferred direction set by an applied magnetic field. “This is analogous to the behaviour of a superconducting diode and is a recently discovered effect that is creating a flurry of interest in the field,” explains Yong-Lei Wang of Nanjing University, who led the research.

Asymmetric nanomagnets

To induce magnetic nonreciprocity, Wang and colleagues made their ASI from asymmetric nanomagnets. They first deposited a thin film of molybdenum–germanium superconductor onto a silicon wafer using photolithography and magnetron sputtering. They then fabricated the artificial spin ice on top of this film, using electron-beam lithography and evaporation to create an ASI with the asymmetric nanomagnets arranged in a square lattice.

“Distinct from all previous ASIs, however, this structure contains asymmetric nanomagnets as opposed to symmetric ones,” explains Wang. “This leads to a novel superconducting pinning potential, resulting in the asymmetric motion of superconducting vortices when positive and negative magnetic fields are applied, thus allowing us to observe magnetic nonreciprocity.”

The Nanjing team has been working on ASI-superconductor heterostructures since 2018, when its members first reported on switchable geometric frustration and superconducting vortex diode effects. Two years later, the researchers made a switchable superconductor and programmable flux-quantum Hall effect device using another ASI-superconductor hybrid. Then, in 2021, they followed this by producing a superconducting diode in arrays of conformal-patterned nanoholes in superconducting thin films. “This last device works thanks to the spatial inversion symmetry breaking from the nanoholes and it allowed us to understand that the asymmetric nanomagnets in ASIs could induce unique symmetry breaking and lead to interesting superconducting effects,” Wang says.

The team’s findings could have implications for the development of advanced superconducting electronics, he tells Physics World. “Being able to control and reconfigure vortex dynamics in superconductors can lead to innovative devices such as magnetic field-driven superconducting diodes and rectifiers. These applications are particularly promising for low-power electronics, neuromorphic computing, and advanced sensing technologies.”

The researchers now plan to examine how temperature affects the magnetic nonreciprocal effects they observed. “We will also study the hysteresis behaviour of in-plane magnetic fields to enhance the nonreciprocal ratio of these effects,” reveals Wang. “We also plan to apply our method to other types of ASI structures, such as kagome-ASI and pinwheel-ASI, to explore a wider range of superconducting properties and functionalities.”

They detail their present work in Chinese Physics Letters.

The post Spin-ice superconductors display magnetic nonreciprocity appeared first on Physics World.

]]>
Research update New structure could be used to make magnetic-field-driven superconducting diodes for low-energy-consumption electronic devices https://physicsworld.com/wp-content/uploads/2024/07/16-07-2024-NJU-team_web.jpg newsletter1
Claudia de Rham: a life in gravity https://physicsworld.com/a/claudia-de-rham-a-life-in-gravity/ Wed, 17 Jul 2024 10:00:38 +0000 https://physicsworld.com/?p=115535 Claudia de Rham on her new book The Beauty of Falling and her career in theoretical physics

The post Claudia de Rham: a life in gravity appeared first on Physics World.

]]>
What made you decide to write this book?

I wanted to share something I find very exciting, but also [show] how we do science and the fun in doing it. I think we are all scientists in that we can all explore and understand. Most of all, I wanted to connect with people who don’t realize how much they have it in themselves, in their curiosity, to address some of the questions we all ask. That they are part of this human endeavour, this human journey of pushing the frontiers of knowledge.

Why did you call it The Beauty of Falling?

The book explains how our understanding of gravity evolved – how Einstein’s general theory of relativity provides a very good description but fails at a point. The “beauty of falling” is that the theory does fail. It shows us that there’s not an ultimate truth, there’s not an ultimate knowledge and that’s “it”. Science is always a continuous endeavour.

So failure is a good thing, right?

Failure is the best feature that a theory can have because it tells us where to look for new discoveries. We don’t need to do lots of observations and wait until there is a mismatch between observations and the theoretical projections. We know from the outset that eventually we’ll need to uncover new layers of physics – and that to me is very exciting. When you know something will fail, how do you embrace that and move forward?

How do you convert mathematical ideas into language?

For me, the symbols on a blackboard: this is the language. Theorists use mathematics to express thoughts in a rigorous technical way. But after a while, this almost becomes second nature so you don’t even think about the symbols themselves. The concepts, and how they fit together, become much more natural. Setting those equations back in words, and [explaining] what it means practically for the real world, is very important as a physicist. Having a real intuition for how things work is important.

  • Click here to read Kate Gardner’s review of The Beauty of Falling.

The post Claudia de Rham: a life in gravity appeared first on Physics World.

]]>
Interview Claudia de Rham on her new book The Beauty of Falling and her career in theoretical physics https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Gardner-de_rham_claudia.jpg
How gravity falls down on falling down https://physicsworld.com/a/how-gravity-falls-down-on-falling-down/ Wed, 17 Jul 2024 10:00:04 +0000 https://physicsworld.com/?p=115461 Kate Gardner reviews The Beauty of Falling: a Life in Pursuit of Gravity by Claudia de Rham

The post How gravity falls down on falling down appeared first on Physics World.

]]>
Claudia de Rham decided early in life she wanted to be an astronaut. Her peripatetic childhood meant she spoke several languages, but imperfectly, and was drawn to science as a universal language – a more reliable base from which to understand the world. As her ambition to go into space developed, she learned to scuba dive and to fly planes, having heard these are desirable skills for astronauts.

Years of intense training appeared to pay off when, in 2009, de Rham made it to the final round of astronaut selection for the European Space Agency. In the end, she passed every test except the medical screening, which picked up latent tuberculosis – ruling her out not only in 2009 but forever. Thankfully, de Rham had also been pursuing a career in theoretical physics. So when her astronaut dream evaporated, she threw herself into the hardest challenge she’d ever experienced: understanding gravity.

Her new book The Beauty of Falling: a Life in Pursuit of Gravity is by no means an autobiography wrapped up in popular science, but it does use snippets from de Rham’s life story to explore the science of gravity. Sometimes the connection is straightforward: learning to dive and to fly provides some great visuals for explaining the basics of gravity and its interaction with other major forces. Other parallels are less obvious. Having moved around a lot as a child and as a researcher, de Rham likens the shape of her itinerant life to the curved nature of space–time. The metaphor works; perhaps better than the more standard one of a ball on a trampoline.

Unlike many authors of popular-physics books, de Rham doesn’t linger too long on the history of human understanding of her topic. Three chapters suffice to bring us up to gravitational waves (or "glight" as she prefers to call them) and how we detect them. She then describes how observations of gravity break the rules of general relativity and the various theories of gravity that might be the next step in understanding.

This is where we reach de Rham’s own research on massive gravity, for which she and collaborators have won several prizes and grants. They argue that gravity, like electromagnetism, is both a particle and a wave. And, crucially, that a particle with non-zero mass – the graviton – exists. Its mass would have to be minuscule, of course – they postulate less than 10⁻³⁰ eV. Gravitons would therefore be difficult – if not impossible – to find. But if they exist, gravitons would resolve some pesky problems in physics.
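To get a feel for how small that is, a quick back-of-the-envelope estimate (my arithmetic, not the book’s): the reduced Compton wavelength associated with such a graviton mass would be

$$\lambda = \frac{\hbar c}{mc^2} \approx \frac{2\times10^{-7}\ \mathrm{eV\,m}}{10^{-30}\ \mathrm{eV}} \approx 2\times10^{23}\ \mathrm{m} \approx 2\times10^{7}\ \text{light-years},$$

so any departure from standard gravity would only become appreciable over intergalactic distances – which is part of why such a particle would be so hard to pin down directly.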

The book deals with some hugely complex theoretical ideas. Towards the end, even de Rham’s genius for metaphor just can’t keep up and she has to present the actual mathematics. As she says at that point, if a reader has stuck with her this far, they won’t be afraid of a few equations. It’s a reasonable assessment, because The Beauty of Falling is not a light read. While every step is clearly explained, the information is packed deep, tight and dense.

This density may well put off some lay readers, which is a shame, as de Rham is clearly capable of making complex ideas accessible individually. But I got the feeling that the density is intentional, as if de Rham has a point to prove about herself and her capabilities.

Though it isn’t a major theme even in the short biographical sections of the book, de Rham is of course a woman working in a male-dominated field, with theoretical physics being particularly short of women. It’s shocking to hear there were times in her education when de Rham, despite only being in her 40s, was the only woman in the room and that she was explicitly told women couldn’t understand advanced physics. She also struggled for years longer than male colleagues (including her own husband) to find a permanent position.

Thankfully she found a stable home at Imperial College, London, in 2016, where she’s been based ever since. Though she has wound up working on an idea that has been dismissed and even ridiculed by some, it’s clear that de Rham is not deliberately courting controversy; she loves her research. She finds massive gravity compelling and hopes that one day there will be a way to test it experimentally.

The Beauty of Falling is a reminder that the human side of science also includes the formulation of high-level abstract concepts – it’s in our nature to understand beyond what we can observe. Not every reader will follow the minutiae of massive gravity, but we can all empathize with the desire to comprehend it.

  • 2024 Princeton University Press 232pp £20hb
  • Physics World‘s Matin Durrani interviewed Claudia de Rham about this book and her career. Read the interview here

The post How gravity falls down on falling down appeared first on Physics World.

]]>
Opinion and reviews Kate Gardner reviews The Beauty of Falling: a Life in Pursuit of Gravity by Claudia de Rham https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Gardner-scuba-diver-under-water-16-9-crop-459287089-iStock_Rostislavv.jpg newsletter
Structured electrons have chiral mass and charge https://physicsworld.com/a/single-electrons-follow-structured-chiral-paths/ Tue, 16 Jul 2024 13:15:33 +0000 https://physicsworld.com/?p=115580 Discovery has puzzled some physicists

The post Structured electrons have chiral mass and charge appeared first on Physics World.

]]>
This article has been updated because the original version incorrectly claimed that the observed electrons follow “chiral paths” and “chiral trajectories”.

Structured electrons with chiral mass and charge have been created by researchers in Germany. The researchers say their work, which is analogous to work done with photons in 2010, achieves chirality in single-electron matter waves without angular momentum. Some other researchers, however, are puzzled by this claim.

In 2010, David Grier of New York University and colleagues created helical optical beams with much greater intensity at the beam edge than at the centre. Then, they showed that objects could be trapped by such beams and, depending on the chirality of the helix, pulled back towards the light source.

“At the time, we speculated that you ought, in principle, to be able to drop matter waves into one of these states,” says Grier. “People have controlled matter waves with light; they’ve created vortices in matter waves, but as far as I know no one has taken the next step and created not just a topological phase but a topological intensity distribution.”

Femtosecond electron pulses

In the new work, Peter Baum and colleagues at the University of Konstanz fired femtosecond electron pulses (almost none of which contained more than one electron) from an ultrafast transmission electron microscope at a 50-nm-thick silicon nitride membrane. They directed optical laser pulses with orbital angular momentum at the same membrane.

The silicon nitride was transparent to electrons, but the laser pulses shifted the electron density in the membrane such that parts of the electron’s wavefunction were accelerated and other parts decelerated. This created single electrons with chiral mass and charge distributions. The researchers characterized these with a second femtosecond laser and silicon nitride membrane.

The team showed that, if they used laser pulses with zero angular momentum, the output could be modelled by electrons with no chirality. If the angular momentum quantum number was 1, the electronic charge and mass wavefunction had the chirality of a single, left-handed coil. If it was −2, the wavefunction was a right-handed double helix. They also scattered these chiral wave packets off chiral nanoparticles, with a left-handed electron showing less chirality when scattered off a left-handed nanoparticle and extra chirality when scattered off a right-handed nanoparticle, and vice versa.

Imprinting chirality

The researchers explain that the optical pulses imprint chirality onto the electron’s wavefunction, converting it into a coil of charge and mass, without actually giving the electron either polarization or orbital angular momentum. “The coil propagates as a whole,” explains Baum; “the centre of mass is on a line.”

The researchers believe that these properties could be useful in a range of applications including electron microscopy, the study of magnetic materials, and the construction of subnanometre optical tweezers. It could even, they say, have cosmological implications if such electron coils occur in nature. “Are they all around?” ponders Baum. “We are currently starting to explore these possibilities.”

Grier is both impressed and puzzled by the results. “Light you can control essentially with consumer electronics,” he says. “It’s much harder with matter waves. I consider this [work by Baum’s team] a really interesting implementation in matter waves of what my group demonstrated in 2010 in light waves.” He does note, however, that other groups have previously implemented chiral optical beams and shaped the intensity of electron beams, and that this research was not cited by Baum and colleagues in their paper in Science that describes their research. (Baum’s team accepts this and say they were unaware of the previous work.)

He is perplexed, however, by the researchers’ insistence that the chiral electron beams have no orbital angular momentum. He says that the chiral wavefunctions the researchers have achieved can generally be expressed as superpositions of so-called Bessel modes. “All but a special few of those superpositions carry orbital angular momentum through a net helical structure in the overall phase,” he says. “You have to do something a bit special to create a solenoidal mode with no orbital angular momentum. I don’t see how [Baum’s team] achieved that balancing act, or how they verified it. It seems to me that they just assume it to be so.”
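For readers unfamiliar with the jargon, a standard textbook way to phrase Grier’s objection (my gloss, not a quote from either group) is in terms of modes with a helical phase. A Bessel mode

$$\psi_\ell(r,\varphi,z) \propto J_\ell(k_r r)\,e^{i\ell\varphi}\,e^{ik_z z}$$

carries an orbital angular momentum of $\ell\hbar$ per electron, and a normalized chiral wave packet written as a superposition $\sum_\ell c_\ell\,\psi_\ell$ has a net orbital angular momentum of $\hbar\sum_\ell \ell\,|c_\ell|^2$. That sum vanishes only for carefully balanced combinations of left- and right-handed components – the “balancing act” Grier is questioning.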

Miles Padgett of the University of Glasgow in Scotland says that the research is “lovely” and says he “would happily accept that there is something interesting in generating an electron beam which goes above and beyond generating a solenoid optical beam.” He says, however, that “that’s not what [their] paper tells me, because this paper doesn’t recognize the generation of a previous optical solenoid beam [Grier’s work is not cited].” He is also puzzled by the claim of chirality without angular momentum, and is curious about whether or not the chiral electrons generate DC magnetic fields – which would indicate rotation.  The researchers have not experimentally measured this.

The post Structured electrons have chiral mass and charge appeared first on Physics World.

]]>
Research update Discovery has puzzled some physicists https://physicsworld.com/wp-content/uploads/2024/07/16-7-24-light-spiral-web-19812591-iStock_VikaSuh.jpg
An evening of landscape astrophotography https://physicsworld.com/a/an-evening-of-landscape-astrophotography/ Tue, 16 Jul 2024 10:00:26 +0000 https://physicsworld.com/?p=115413 Colin White recounts a trip that took him from a Somerset hilltop to the heart of the Milky Way

The post An evening of landscape astrophotography appeared first on Physics World.

]]>
Burrow Farm Engine House on Exmoor

Shaun suggested our initial face-to-face meet should be at around 11 p.m., on a remote track on the flanks of the wild hinterland of Exmoor National Park in Somerset, UK. We were to rendezvous at the rather delightful what3words coordinates of: ///otters.grins.greet near a monument colloquially named “Naked Boy Stone”.

As is usual when I venture onto the moor, I informed my son before setting off. Mobile signals can get a bit iffy above about 300 m, although what I expect him to do from Oxford, more than 150 km away, I’m not altogether sure. In hindsight, I might also have chosen my words more carefully when I told him not to worry about meeting this man because “We’d followed each other on Twitter/X for years”. Perhaps I should have begun by telling him that Shaun Davey is an acclaimed photographer of the night skies.

Under the illumination of our head torches, we set off, traipsing along a branch-strewn trail that followed a section of the disused West Somerset Mineral Line. Built between 1857 and 1864, the rail track was constructed to carry iron ore from the mines of Exmoor down to Watchet Harbour, for onward shipment to Wales and the steel blast furnaces of the Ebbw Valley.

The last trains ran in 1913, and the isolated location was ideal for our purposes, though it did make our journey somewhat perilous – the edge of the unlit path ran along a deep rail cutting. It was amazing to think that it would have been carved out by manual labour alone. However, my first thought was “that’s one hell of a drop if we slip”.

Our destination was a small stone building that had once housed a steam engine, used to raise and lower the miners and to pump water from the tunnels. Illuminated by a handheld LED lamp, it cast a dramatic picture against the stars.

It’s no mean feat for a photographer to capture objects that are both a few metres and a few light-years away in a single image, but this is where Shaun’s vast experience of landscape astrophotography came into play. He set up two tripod-mounted cameras: one to photograph the landscape and the other to capture the sky, each using different exposure times. Correct exposures are governed by three variables: the shutter speed, the lens aperture and the ISO (detector sensitivity). All are adjustable, so picking the right settings is an art.
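As a rough guide to the trade-offs involved (generic rules of thumb, not Shaun’s actual settings), the widely used “500 rule” caps the shutter speed of an untracked sky shot before stars start to trail, and an ISO-adjusted exposure value summarizes how the three settings combine:

```python
import math

def max_untracked_exposure(focal_length_mm, crop_factor=1.0):
    """'500 rule' of thumb: longest exposure (seconds) before stars
    visibly trail when the camera sits on a fixed, untracked tripod."""
    return 500.0 / (focal_length_mm * crop_factor)

def exposure_value_iso100(f_number, shutter_s, iso):
    """Scene brightness expressed as an exposure value referred to
    ISO 100; night-sky scenes come out strongly negative."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100.0)

print(max_untracked_exposure(20))                      # 25.0 s for a 20 mm lens
print(round(exposure_value_iso100(2.8, 20, 3200), 1))  # about -6.4, a very dark scene
```

The tracking mount described in the next paragraph lifts this untracked limit, which is what makes longer, cleaner sky exposures possible.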

When it came to photographing the sky, Shaun mounted the camera on a device fitted with the most powerful laser I’ve ever seen outside a laboratory. He calibrated the camera platform by aligning the laser beam with the Pole Star. I’d never been so aware of the extent of laser collimation until I witnessed the needle-sharp beam projecting deep into the blackness of the heavens. Once the tracking mount has been aligned, the attached camera can precisely follow the motion of celestial bodies as the Earth rotates. Without the tracking tripod, you’d just get blurred slashes caused by unwanted star – or rather Earth – motion.

It’s no mean feat for a photographer to capture objects that are both a few metres and a few light-years away in a single image

After the photos had been taken, Shaun post-processed them with Photoshop and Lightroom to correct for colour and optical distortions (he uses a wide-angle lens that can make straight lines appear curved). Finally, the images were merged to create one stunning photograph.

Light pollution is, of course, the biggest irritant for astrophotographers, which is why Shaun picked a cloudless night and a location within one of the six designated Dark Sky Reserves of the UK. Nevertheless, light pollution was evident even at 2.30 a.m. Some sources were obvious from the direction of the glow: the industry of Wales (50 km away), Tiverton/Cullompton and/or Exeter (all approximately directly south, and at about 19, 23 and 40 km, respectively), and the Sun. In midsummer, when our expedition took place, it’s never far below the northern horizon.

This adventure kicked off after I posted on X that I’d never seen the Milky Way with my naked eye in my life (I’m 70 years old). Shaun replied with a promise that he could knock that one off my bucket list by showing me the galactic core (the brightest region of the Milky Way at the centre of our galaxy). He kept his promise.

  • Readers are invited to submit their own Lateral Thoughts. Articles should be between 750 and 800 words long, and can be e-mailed to pwld@ioppublishing.org

The post An evening of landscape astrophotography appeared first on Physics World.

]]>
Blog Colin White recounts a trip that took him from a Somerset hilltop to the heart of the Milky Way https://physicsworld.com/wp-content/uploads/2024/07/2024-07-LT-White-burrowhouse_feature.jpg newsletter
Lung images reveal how breathing distribution differs between the sexes https://physicsworld.com/a/lung-images-reveal-how-breathing-distribution-differs-between-the-sexes/ Tue, 16 Jul 2024 08:00:14 +0000 https://physicsworld.com/?p=115563 Electrical impedance tomography data demonstrate that ventilation is more evenly distributed in women than in men

The post Lung images reveal how breathing distribution differs between the sexes appeared first on Physics World.

]]>
Example EIT exams in a man and a woman

Electrical impedance tomography (EIT) is a radiation-free, non-invasive imaging technique that uses measurements from surface electrodes to create a tomographic image of the body. EIT is particularly suited to assessing lung function: while conventional radiological imaging reveals lung morphology, EIT can be used to directly monitor regional lung ventilation and perfusion, and track changes due to disease or therapy.

To date, however, few EIT studies have examined the respiratory differences between men and women. To address this shortfall, a team from the University Medical Centre Schleswig-Holstein in Kiel, Germany, has performed a detailed study of a large group of volunteers to investigate how biological sex affects EIT measurements. Their findings suggest that key differences exist, and that these must be considered when interpreting clinical chest EIT studies.

“The anatomical and physiological differences between women and men make it important to determine sex differences in medical research,” explains first author Inéz Frerichs. “The relative lack of studies on sex-dependent differences in EIT findings is related to the fact that the method is not so old and large population studies in this field still need to be conducted. Small pilot studies do not have the power to establish possible differences reliably.”

Detailed data analysis

EIT works by applying small alternating currents through pairs of electrodes attached to the skin near the organ being examined. Measuring the resulting voltage differences through all adjacent electrode pairs enables construction of a 2D tomographic map of the electrical conductivity of that body part.

In their latest study, described in Physiological Measurement, Frerichs and colleagues analysed data from 218 adults with no known lung disorders who had participated in a previous EIT study. The cohort included 120 men and 98 women, with no significant differences between the subgroups aside from height and weight. The team examined EIT recordings obtained during about one minute of quiet relaxed breathing (known as tidal breathing), with the subject seated.

The EIT exams were performed using an array of 16 electrodes placed around the lower chest, with the reference electrode attached onto the belly. Data were acquired at a rate of 33 images/s – a high scan rate that allows continuous assessment of dynamic physiological processes.

For all subjects, the researchers calculated the tidal impedance variation (TIV) for each image pixel in each recorded tidal breath and created 2D plots showing the distribution of tidal volume in the chest cross-section. They then quantified this spatial distribution using a series of established EIT measures: the centre of ventilation in the ventrodorsal (front-to-back) direction (CoVvd) and the right-to-left direction (CoVrl); and the dorsal and right fractions of ventilation.
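For readers who want a concrete picture of these measures, here is a minimal sketch (a common textbook-style definition, not necessarily the exact convention used in the paper) of how a centre of ventilation can be computed from a pixel map of tidal impedance variation:

```python
import numpy as np

def centre_of_ventilation(tiv_map, axis=0):
    """Weighted centroid of a 2D tidal impedance variation (TIV) map
    along the chosen image axis, expressed as a percentage: 0% at the
    first row/column, 100% at the last. axis=0 gives a ventrodorsal
    centre, axis=1 a right-to-left centre."""
    tiv = np.asarray(tiv_map, dtype=float)
    profile = tiv.sum(axis=1 - axis)        # collapse the other direction
    idx = np.arange(profile.size)
    return 100.0 * (profile * idx).sum() / (profile.sum() * (profile.size - 1))
```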

The overall results from the male and female subgroups revealed significant differences in ventilation distribution between men and women. In the right-to-left direction, women exhibited a more symmetric distribution between the right and left lung regions than men, with CoVrl located more at the right side of the chest in male subjects than in female subjects. The right fraction of ventilation was also higher in men than in women.

Differences were less pronounced in the front-to-back direction, but still significant, with CoVvd predominantly located in the dorsal image half in both sexes, but more dorsally in men.

The researchers also calculated ventilation defect scores, a parameter that further defines the heterogeneity in ventilation distribution throughout the lungs. To calculate this score, they determined the fraction of ventilation in each image quadrant, and assigned a value of 0 (for a fraction of 0.15 or higher), 1 (between 0.10 and 0.15) or 2 (lower than 0.10) to each. Summing the four quadrant values gave the overall ventilation defect score.
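A minimal sketch of that scoring scheme (my own Python rendering of the rules described above, not the authors' code):

```python
def ventilation_defect_score(quadrant_fractions):
    """Apply the quadrant scoring described above: 0 for a fraction of
    0.15 or higher, 1 for 0.10-0.15, 2 for below 0.10; sum the four."""
    def score(fraction):
        if fraction >= 0.15:
            return 0
        if fraction >= 0.10:
            return 1
        return 2
    return sum(score(f) for f in quadrant_fractions)

print(ventilation_defect_score([0.28, 0.27, 0.24, 0.21]))  # 0 (even distribution)
print(ventilation_defect_score([0.45, 0.30, 0.17, 0.08]))  # 2 (right-dominant)
```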

They observed that ventilation distribution among quadrants was less heterogeneous in women than in men, with 83.4% of women exhibiting low ventilation defect scores (0 or 1), compared with 57.5% of men. A score higher than 1 was found in 42.5% of men but only in 16.6% of women.

Differences matter

The researchers conclude that ventilation distribution detected by EIT is affected by biological sex in this large group of quietly breathing male and female subjects, with the most striking variation being the significantly higher right-to-left asymmetry seen in men compared with women. They attribute this finding in part to differences in chest anatomy – the left ventricle is much larger in men than in women, even after normalization to body height.

Another possible cause may be breathing mechanics: differences in skeletal chest anatomy mean that women mainly perform thoracic breathing whereas men mainly exhibit abdominal breathing, which creates larger diaphragm motion. As EIT is sensitive to out-of-plane impedance changes, measurements may be affected by the movement of abdominal organs lying above and below the examination plane, particularly as this study used electrodes located at the lower part of the chest. But as the researchers did not monitor the type of breathing during the study, they cannot determine the exact role of this factor.

The findings reiterate the importance of accounting for biological sex when interpreting clinical chest EIT studies. The researchers point out the need for further clinical studies in lung-healthy subjects. “We need to perform large population studies to determine the reference values of several EIT measures for both sexes and also to clarify if sex differences exist when EIT examinations are performed in different body positions and chest sections,” Frerichs tells Physics World.

The post Lung images reveal how breathing distribution differs between the sexes appeared first on Physics World.

]]>
Research update Electrical impedance tomography data demonstrate that ventilation is more evenly distributed in women than in men https://physicsworld.com/wp-content/uploads/2024/07/16-07-24-PMEA-EIT-featured.jpg newsletter1
Scientists create space plasmas at CERN https://physicsworld.com/a/scientists-create-space-plasmas-at-cern/ Mon, 15 Jul 2024 12:00:49 +0000 https://physicsworld.com/?p=115529 Producing fast-moving "fireballs" in the lab could shed light on processes in extreme astrophysical emissions

The post Scientists create space plasmas at CERN appeared first on Physics World.

]]>
Diagram of the plasma production process

Plasma “fireballs” are omnipresent in deep space around black holes and neutron stars, but creating artificial versions here on Earth has proved difficult because of the high energies required. Physicists at CERN have now succeeded in generating such plasmas in the laboratory for the first time, using the nuclear research facility’s Super Proton Synchrotron (SPS) accelerator to create high-density beams of relativistic electron-positron pairs. The work could shed light on the extreme astrophysical emission processes that occur in gamma-ray bursts (GRBs) and active galactic nuclei (AGNs).

In general, a plasma is a gas so hot that some or all of its component atoms are split into electrons and ions, which can then move independently of each other. However, in the extreme conditions around astrophysical bodies such as black holes and neutron stars, where accretion-powered jets and pulsar winds prevail, plasmas are instead made up of matter-antimatter pairs – electrons and positrons – moving at near-light speeds. These relativistic electron-positron plasma beams also appear in blazars, which are AGNs that produce astrophysical jets directed towards the Earth.

The collective behaviour of these electron-positron pair plasmas is different from that of conventional electron-ion plasmas because of the symmetry between the matter and antimatter components, explains Charles Arrowsmith, a PhD student at the University of Oxford, UK, and the lead author of a Nature Communications paper on the work. “The role of these space plasmas is believed to be fundamental to explaining the emission from GRBs and the jets of AGNs, but until now numerical simulations were the only means to validate our theoretical models on the microscale,” Arrowsmith says. “Being able to perform controlled laboratory experiments realizes a decades-long pursuit with the promise of exciting science to come.”

Behaving like true astrophysical plasmas

A member of Gianluca Gregori’s group at Oxford, Arrowsmith worked with Dustin Froula and Daniel Haberberger of the Laboratory for Laser Energetics at the University of Rochester, US to make plasmas using the SPS’s 440 GeV/c beam. Their experiment involved 300 billion protons, each carrying a kinetic energy around 400 times larger than its rest-mass energy. “When a proton smashes an atom with such a large momentum, it has enough energy to release its internal constituents – quarks and gluons,” Arrowsmith explains. “This process produces a shower of particles (pions, kaons and other hadrons) that ultimately decay into electrons and positrons.”

The particle density of the electron-positron beams generated at the SPS is high enough for them to start behaving like true astrophysical plasmas, he adds. Indeed, the measured size of the pair beams exceeds that of the characteristic scales required for collective plasma behaviour to occur. According to Arrowsmith, such a beam “opens up an entirely new frontier in laboratory astrophysics by making it possible to experimentally probe the microphysics of GRBs or blazar jets.”
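Roughly speaking (a standard criterion, not a figure quoted from the paper), “large enough to behave collectively” means the transverse size of the pair beam must exceed the relativistic collisionless skin depth,

$$\frac{c}{\omega_p}, \qquad \omega_p=\sqrt{\frac{n\,e^{2}}{\varepsilon_0\,\gamma\,m_e}},$$

where $n$ is the pair density, $\gamma$ the Lorentz factor of the pairs and $m_e$ the electron mass; only then can plasma waves and instabilities develop within the beam rather than the particles simply streaming past one another.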

Such studies were previously impossible, he adds, because satellite- and ground-based telescopes cannot resolve the smallest details of distant environments such as GRBs or AGNs. “Our laboratory experiments will now be able to test microphysical models of the behaviour of these space plasmas and the emission processes therein,” he says.

Future experiments might even use these test systems to directly probe the physics of plasmas involving electron-positron pairs. “This will help shed new light on extreme astrophysical emission processes to address as-yet-unresolved questions,” Arrowsmith tells Physics World.

The post Scientists create space plasmas at CERN appeared first on Physics World.

]]>
Research update Producing fast-moving "fireballs" in the lab could shed light on processes in extreme astrophysical emissions https://physicsworld.com/wp-content/uploads/2024/07/15-07-2024-Plasma-fireball-image.jpg newsletter1
Constellation and Dark Matter: the TV series that could change your view of quantum mechanics https://physicsworld.com/a/constellation-and-dark-matter-the-tv-series-that-could-change-your-view-of-quantum-mechanics/ Mon, 15 Jul 2024 10:00:09 +0000 https://physicsworld.com/?p=115362 What can quantum multiple-world fiction teach us about identity, ask Robert P Crease and Jennifer Carter

The post <em>Constellation</em> and <em>Dark Matter</em>: the TV series that could change your view of quantum mechanics appeared first on Physics World.

]]>
“My understanding of identity has been shattered,” mulls the protagonist in Blake Crouch’s book Dark Matter (2016). “I am one facet of an infinitely faceted being called Jason Dessen who has made every possible choice and lived every life imaginable.”

Authors, poets, writers and film-makers have long exploited the notion of paths-not-taken as a narrative ploy. Early examples include Robert Frost’s poem “The Road Not Taken” (1920), H G Wells’s novel Men Like Gods (1922) and Jorge Luis Borges’s short story “The Garden of Forking Paths” (1941).

But the “many-worlds” interpretation of quantum mechanics has turbocharged the genre, unleashing new possibilities for fiction about different choices, alternative lives and multiple worlds. Recent movies inspired by it include Another Earth (2011), Multiverse (2019), Loki (2021) and the award-winning Everything Everywhere All at Once (2022).

A fundamental principle of the many-worlds interpretation is that any contact between the different worlds is impossible. But a fundamental principle of popular culture is that it’s not, physics be damned. The beauty of using parallel worlds in fiction is that it can neatly exploit our human anxiety over the consequences of taking and having taken actions. In a sense, it reveals the God-like, world-shaping power of the human ability to choose and the depth of our innate desire to live our lives again.

As Brit Marling, co-author and star of Another Earth, told an interviewer: “Sometimes in science fiction you can get closer to the truth than if you had followed all the rules.”

Physics be damned

Quantum-inspired fictional worlds are back in the spotlight after featuring in two Apple TV+ dramas this year – Constellation and Dark Matter. Both use superposition as a device for allowing characters to take forking paths. The former was cancelled after one season, while the latter finished its season in June. The two shows illustrate what’s problematic about the genre.

In Constellation, characters feel and communicate with each other in different possible universes. The show highlights the uniqueness of the emotional ties we form and the joy or devastation we face when these links are severed or reconnected. It’s literally a haunting story where ghosts from other worlds alternately comfort and terrorize.

Still image from TV show Constellation

Dark Matter extracts somewhat more from superposition. Jason Dessen, a former physicist, has abandoned his brilliant career to spend more time with his wife, Daniela – who has also given up her career as an artist – and their child. In the alternate universe, where he did not give up his career, another Jason – let’s call him Alt-Jason – has used quantum superposition to create a “gateway to the multiverse” that “connects all possible worlds”.

Tired of fame and success, and his “intellectually stimulating but ultimately one-dimensional life”, Alt-Jason wants to take the “road not taken”. Using the gateway, he goes to Jason’s world, brutally beats Jason, sends him to Alt-Jason’s world, and assumes Jason’s role as husband and father. The book and series open with that switch; Jason has to figure out what’s happened and get back to “his” world and his family.

The characters in Dark Matter – the novel and the series – make predictable observations. Dessen, for instance, remarks that “we’re a part of a much larger and stranger reality than we can possibly imagine”, and that “my identity isn’t binary…it’s multifaceted”. But the structure also makes possible imaginatively gripping scenes, such as Jason’s horrifying loneliness when he experiences seemingly insignificant things both familiar and unfamiliar, and a home that’s only “almost home”.

In one creepily intense scene, Daniela puzzles over the new quality of her love-making to the person she thinks is Jason but is actually Alt-Jason. We’re no longer like an “old married couple” but like “their first time every time”, she thinks. They smoulder with an intensity “that reminds her of the way new lovers stare into each other’s eyes when there’s still so much mystery and uncharted territory to discover”. It worries her, sort of.

Dark Matter – again, both the book and the TV series – give semi-explanations for the gateway. Thanks to quantum mechanics, scientists can put things in superposition to create worlds with an infinite number of possibilities. As the cliché goes: “Everything that can happen will happen.” People can enter superposition if they take a drug that prevents consciousness from destroying the superposition.

To enter superposition, they enter a box that uses the equivalent of what the show calls “noise-cancelling headphones” to block the intrusion of what would collapse the superposition. Once in superposition, they walk down a long corridor with an infinite number of doors leading to all possible outcomes. Their frame of mind determines which world they enter.

The critical point

Previous Critical Point columns have provided a taxonomy of science-distorting art – “science bloopers” if you like (see columns from April 2007 and June 2007). Some distortions are well-meaning and create works that would be impossible otherwise, such as Mary Shelley’s Frankenstein. Others, however, are due to inattention or stupidity. Even the title of the 1968 movie Krakatoa, East of Java is wrong (Krakatoa is west).

So are fictional works based on quantum-travel-between-worlds just examples of “harmlessly enabling distortion” (HED, done for a good purpose)? Or should we think of them as examples of “fake artistic distortion” (FAD, done for special effects without caring how science works)? It’s an interesting question especially for philosophers, who have long worried about art having to appeal to its audience’s “sense” of reality, and its tendency to reinforce that sense despite its distortions.

In a similar way, the appeal of TV series based on many-worlds interpretations depends on how agreeably and acceptably they manipulate popular preconceptions about quantum mechanics, such as about time travel, alternate worlds, the reality of superposition, and – most of all – the illusion that the fundamental structure of the world is up to us.

But wouldn’t it be more artistic to portray a universe where quantum systems are what they are – in some cases coherent systems that can decohere, but not via thought control (as in Dark Matter)? If we did that, then artists could speculate about what it’d be like to meet and even trade places with other selves without introducing fake scientific justifications. We could then try to understand if and why we would want or benefit from such identity-swapping, on both a physical and emotional level.

That might really shatter and reconfigure what it means to be human.

Robert P Crease is a professor in the Department of Philosophy, Stony Brook University, US, where Jennifer Carter is a lecturer in philosophy

The post <em>Constellation</em> and <em>Dark Matter</em>: the TV series that could change your view of quantum mechanics appeared first on Physics World.

]]>
Opinion and reviews What can quantum multiple-world fiction teach us about identity, ask Robert P Crease and Jennifer Carter https://physicsworld.com/wp-content/uploads/2024/07/2024-07-CP-TV-Dark_Matter.jpg newsletter
Second team uses laser to excite thorium-229 nuclear transition https://physicsworld.com/a/second-team-uses-laser-to-excite-thorium-229-nuclear-transition/ Sun, 14 Jul 2024 13:39:56 +0000 https://physicsworld.com/?p=115549 Rapid progress being made in development of nuclear clock

The post Second team uses laser to excite thorium-229 nuclear transition appeared first on Physics World.

]]>
Back in April of this year we reported that researchers in Germany and Austria were the first to use a laser to excite a low-lying metastable nuclear state of thorium-229. Now an independent team in the US has repeated the feat. Their work is seen as important progress in the development of a solid-state nuclear clock.

Such a device could rival today’s best atomic clocks in terms of accuracy. But unlike atomic clocks, a thorium-based nuclear clock could be a completely solid state device (the best atomic clocks use trapped atoms or ions cooled to cryogenic temperatures).

As a result, nuclear clocks could be much easier to operate outside of metrology labs, where they could find a wide range of applications including precision measurements of Earth’s gravitational field. What is more, because the frequency of such a clock is defined by nuclear forces it could be used to identify physics beyond the Standard Model of particle physics.

The idea of a thorium-229 nuclear clock was first proposed in 2003, but it proved very difficult to make accurate measurements of the frequency of the light involved in the clock transition – something that is key to the development of a clock.

This year, the research has accelerated and now Ricky Elwell of the University of California, Los Angeles and colleagues are the second group to use a laser to excite the clock transition in thorium-229 nuclei embedded in a crystal lattice. What is more, the precision of their measurement of the transition frequency is about an order of magnitude better than that of the German and Austrian team.

Elwell and colleagues report their measurements in Physical Review Letters and the science writer Rachel Berkowitz has written an accompanying piece in Physics. She points out that the team has found that the crystal appears to affect the transition – which could be important for the development of nuclear clocks.

The post Second team uses laser to excite thorium-229 nuclear transition appeared first on Physics World.

]]>
Blog Rapid progress being made in development of nuclear clock https://physicsworld.com/wp-content/uploads/2024/07/14-7-24-Thorium-clock.jpg
Spacesuit backpack allows astronauts to drink their own urine https://physicsworld.com/a/spacesuit-backpack-allows-astronauts-to-drink-their-own-urine/ Sat, 13 Jul 2024 09:00:22 +0000 https://physicsworld.com/?p=115523 Researchers at Cornell University have created a prototype urine collection and filtration system

The post Spacesuit backpack allows astronauts to drink their own urine appeared first on Physics World.

]]>
When a space-walking astronaut needs to relieve themselves they often have to do so in adult-style nappies inside their spacesuits. This is not only uncomfortable and unhygienic, but also wasteful too.

Researchers at Cornell University have now created a prototype urine collection and filtration system that allows astronauts to recycle their urine into, er, drinkable water (Frontiers in Space Technology doi:10.3389/frspt.2024.1391200).

The system can collect and purify about 500 ml of urine in five minutes. The urine is first collected via a collection cup made from moulded silicone that is lined with a nylon-spandex blend before being vacuum pumped to the urine filtration system where 87% of the liquid is recycled.

The purified water is then mixed with electrolytes and pumped into a drinks bag where it can be consumed.

“Astronauts currently have only one litre of water available in their in-suit drink bags,” notes Cornell’s Sofia Etlin. “This is insufficient for the planned, longer-lasting lunar spacewalks, which can last ten hours, and even up to 24 hours in an emergency.”
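For scale (my arithmetic, not the researchers’), an 87% recovery rate on a 500 ml collection returns

$$0.87\times500\ \mathrm{ml}\approx435\ \mathrm{ml}$$

of drinkable water per cycle – getting on for half of that one-litre in-suit supply each time the system is used.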

Yet at eight kilograms and the size of a backpack, it might need some miniaturization before it can be used by prospective Mars colonizers.

The post Spacesuit backpack allows astronauts to drink their own urine appeared first on Physics World.

]]>
Blog Researchers at Cornell University have created a prototype urine collection and filtration system https://physicsworld.com/wp-content/uploads/2024/07/Urine-backpack-2.jpg
Laura Tobin: the meteorologist and broadcaster who won’t stop talking about climate change https://physicsworld.com/a/laura-tobin-the-meteorologist-and-broadcaster-who-wont-stop-talking-about-climate-change/ Fri, 12 Jul 2024 09:00:36 +0000 https://physicsworld.com/?p=115205 The Good Morning Britain meteorologist says it’s important to be open to opportunity

The post Laura Tobin: the meteorologist and broadcaster who won’t stop talking about climate change appeared first on Physics World.

]]>
At the start of her career, Laura Tobin was adamant that she would never be a weather presenter. A trained meteorologist, she was sick of being asked “Are you going to be on TV?” as a joke, aware that it was a comment on her gender as much as her job. “They’re suggesting that you’re going to stand there, point at a screen, and not be credible,” she says.

Today, however, Tobin is a regular fixture on television screens across the UK. Since 2012, she has been a meteorologist and weather presenter for the broadcaster ITV. She says she is grateful she took a chance in her career: “You should never say never. It’s good to give something a go.”

Prepared for anything

Tobin’s career began with a degree in physics and meteorology at the University of Reading, which she completed in 2003. She actually failed the first year of her physics A level (the physics qualification she needed to go to university), but something “clicked” in her second year; she fell in love with the subject and did well in her final exams. “It’s integral to know physics to be able to forecast the weather,” says Tobin. “You need to be able to model the atmosphere and understand how it moves. The atmosphere is essentially a fluid, so large parts of my degree were fluid dynamics.”

After graduating she joined the Met Office – the UK’s national meteorological service – as a forecaster. Based in Cardiff, Wales, she produced forecasts used for local weather services including radio bulletins and predictions for renewable energy generation, road gritting and hill-walking conditions.

This meant that long before she worked in front of a camera, Tobin had to present her work in a way that anyone could understand, which she says was a challenge at first. “When you’re taught scientifically about the weather you have to change the way you speak,” she says.

In her next role, Tobin had to adapt her forecasts to a very different audience. She worked at the Brize Norton Royal Air Force base in Oxfordshire, UK, where she briefed pilots on the weather conditions and delivered reports for the British Forces Broadcasting Service. However, it took her a while to be accepted into the team: “They used to ask me really ridiculous questions. They used to try and catch me out because they wanted to see if I knew what I was talking about because I was a girl and I was young”.

Luckily, Tobin did know what she was talking about. In fact, she has taken a positive lesson from the experience and says she still always over-prepares for any questions that might come her way.

Never say never

When she had been at Brize Norton for five years, Tobin heard that the BBC, a UK public service broadcaster, was recruiting television weather presenters. Despite her earlier misgivings, she decided to give it a go.

When she saw firsthand what the job entailed, she was shocked: “I realized that I had a misconception of what a TV weather presenter was,” she says. The television meteorologists were skilled broadcasters who could deliver regular weather reports in multiple genres, but they also had to understand the science behind everything they said, and they had to be ready to comment on everything from hurricanes to NASA launches.

Tobin took the role and stayed at the BBC for four years before moving to ITV, where she now works on the breakfast programme Good Morning Britain. She has never looked back, but the transition to broadcasting wasn’t seamless. When she started at the BBC, her forecasts were prerecorded and she would often have to do many takes to get them right. She had scientific knowledge but effectively presenting what she knew on live television was a skill she had to learn on the job.

Every weekday morning, Tobin has just a few minutes to give viewers all the information they need about the day’s forecast. This can be a challenge, but she says it’s the most effective way to communicate: “I think if I spoke for longer than a minute on a climate report, I would lose people. You need to be succinct.”

A new mission

In Tobin’s early days as a television meteorologist she would occasionally report on an extreme weather event like record rainfall or temperature. These events have grown more and more frequent and now, as she points out, “they’re happening so often that you can’t report them all”. Today, viewers don’t just watch Tobin to decide whether to pack sunscreen or an umbrella, they look to her for credible information about the climate crisis.

In September 2021, Tobin travelled to the Arctic archipelago of Svalbard to report on the effects of climate change for ITV. In one video she stands on what was once the edge of a glacier and explains that in the last 40 years the ice has retreated by half a kilometre. Confronted firsthand with the effects of global warming, Tobin is visibly emotional as she delivers one report to viewers.

But Tobin isn’t all “doom and gloom” – she is passionate about raising awareness of the climate crisis because she believes the people who watch her on television can make a difference. Despite facing a backlash from climate change deniers, she is not deterred, and raising awareness of climate change is now Tobin’s mission: “I’d like to hope that I’m inspiring people to make a change.”

The post Laura Tobin: the meteorologist and broadcaster who won’t stop talking about climate change appeared first on Physics World.

]]>
Feature The Good Morning Britain meteorologist says it’s important to be open to opportunity https://physicsworld.com/wp-content/uploads/2024/06/2024-06-CAREERS-Laura-Tobin-headshot.jpg
Precision medicine: meet two medical physicists who are making it possible https://physicsworld.com/a/precision-medicine-meet-two-medical-physicists-who-are-making-it-possible/ Thu, 11 Jul 2024 14:19:29 +0000 https://physicsworld.com/?p=115520 Anna Barnes and Nicky Whilde explain how technological advances are enabling precision medicine

The post Precision medicine: meet two medical physicists who are making it possible appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast explores how medical physicists are using exciting new technologies to make precision medicine possible. Our guests are Anna Barnes, Director of the King’s Technology Evaluation Centre at King’s College London and President of IPEM, and Nicky Whilde, who is head of radiotherapy physics at the Mid and South Essex NHS Foundation Trust.

In a wide-ranging conversation with Physics World’s Tami Freeman, Whilde and Barnes define the key concepts of precision medicine and explain how they are being implemented by medical physicists using magnetic resonance imaging, radiotherapy and other technologies.

This episode is supported by PTW, the dosimetry specialist.

Courtesy: PTW

 

The post Precision medicine: meet two medical physicists who are making it possible appeared first on Physics World.

]]>
Podcasts Anna Barnes and Nicky Whilde explain how technological advances are enabling precision medicine https://physicsworld.com/wp-content/uploads/2024/07/11-07-24-Anna-and-Nicky-list.jpg
RadMachine unifies all machine QA and QC onto one streamlined platform https://physicsworld.com/a/radmachine-unifies-all-machine-qa-and-qc-onto-one-streamlined-platform/ Thu, 11 Jul 2024 10:00:55 +0000 https://physicsworld.com/?p=115491 RadMachine software enables radiotherapy clinics and diagnostic centres to manage their quality assurance tasks on a single cloud-based platform

The post RadMachine unifies all machine QA and QC onto one streamlined platform appeared first on Physics World.

]]>
When the Hôpital de la Tour in Switzerland recently restructured its radiation therapy services, it brought in a completely new medical physics team. These physicists were tasked with revising all of the hospital’s radiotherapy processes, including the machine quality assurance (QA), without interrupting clinical activities.

To deliver a seamless changeover, the team needed to find a comprehensive QA system that could effectively manage the department’s suite of radiotherapy treatment and simulation devices. Ideally, the software would be simple to use, fast to deploy, and able to perform all of the required QA and quality control (QC) tasks from a single platform.

The answer lay in RadMachine – a complete cloud-based QA platform from oncology software company Radformation. RadMachine provides machine QA for all therapeutic and imaging systems, as well as ancillary equipment, integrating all of the data into a simple, centralized hub. The software enables users to review multiple QA data streams at once and provides detailed reports to help track device performance over time.

“When we arrived in this centre, it was challenging because the previous physicists left before our arrival. We found lots of home-made QA solutions, but these were unusable without knowledge transfer, so we had to quickly implement a new QA system,” explains Jarno Bouveret, medical physicist at the Geneva-based Hôpital de la Tour. “We were looking for next-generation software and we found RadMachine, which we are now using. It has merged all of the QA into one platform, it’s really nice software.”

One of the first tasks was to perform the standard tests required to comply with Swiss legislation. “RadMachine includes all of these tests, and Radformation helped us to adapt it, so it was really fast and easy for us to implement,” says Bouveret. He notes that the software came preconfigured – he and his colleagues just needed to perform some verification checks before customizing the QA processes to meet their needs. “It’s simple in RadMachine to just select different tests and merge them into a common test list,” he explains.

The team has been using RadMachine for over a year now – for machine QA of the department’s two radiotherapy treatment devices, as well as its CT, PET/CT and MRI simulation systems. Bouveret points out that the software’s inherent automation has helped to simplify the daily QA workload. For example, users can create a checklist of QA validation tests that the therapists complete each day. If all the tests are passed, RadMachine validates the QA automatically; but if one fails or a query is flagged, the software alerts the physicist to check it further. Likewise, if the daily QA has not been performed at the usual time, RadMachine will send an email alert.
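
To see the flavour of that kind of rule-based automation, here is a minimal sketch in Python. It is purely illustrative – the test names, tolerances and messages are invented, and it is not RadMachine’s actual interface or API.

# Hypothetical daily-QA review logic: auto-validate when every test passes,
# otherwise flag the run for a physicist. Illustrative only; not RadMachine code.
from dataclasses import dataclass
from typing import List

@dataclass
class QAResult:
    name: str
    value: float
    low: float    # lower tolerance bound
    high: float   # upper tolerance bound

    @property
    def passed(self) -> bool:
        return self.low <= self.value <= self.high

def review_daily_qa(results: List[QAResult]) -> str:
    """Auto-validate when every test passes; otherwise flag for physicist review."""
    failures = [r.name for r in results if not r.passed]
    if not failures:
        return "Daily QA validated automatically"
    return "Physicist review required: " + ", ".join(failures)

# Example run with made-up output-constancy and laser-alignment checks
daily = [
    QAResult("output constancy (%)", 0.8, -2.0, 2.0),
    QAResult("laser alignment (mm)", 2.4, -2.0, 2.0),
]
print(review_daily_qa(daily))   # -> Physicist review required: laser alignment (mm)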

Bouveret also works within an imaging centre at a different hospital, where much of the QA is delegated to an external company that just sends over reports. He thinks that switching to RadMachine could enhance the QA process here too, by providing greater insight into erroneous measurements, for instance, and tracking QA results over time. “I will try to import it into this other hospital, to show how good it is and what benefits it could offer,” he adds.

Designed for imaging

Meanwhile, at the University Hospitals Health System in Cleveland, Ohio, medical physicists are already utilizing RadMachine within the centre’s diagnostic imaging department. “Currently, we’re using RadMachine to record and analyse our CT quality control data and our MRI quality control data,” says Nichole Harris, a diagnostic medical physicist. “It serves as an electronic QC record and it also helps us to maintain compliance with different regulatory organizations by having records readily available.”

Nichole Harris

Harris explains that the hospital selected RadMachine due to its flexibility, its cloud-based architecture, and the availability of single sign-on features for IT integration. Another selling point was the availability of Python and associated libraries for custom test scripting. While Harris and her team have not exploited this scripting facility yet, she emphasizes that it is “something we’re looking forward to working on in the future”.

With RadMachine in operation for approximately six months now, Harris says that one valuable feature that’s emerged is the software’s ability to analyse the test data in detail and look for trends. “We are also able to produce compliance reports for different regulatory organizations, and to really see the enterprise as a whole,” she says.

RadMachine also provides automated exception reporting. “We no longer have to check every machine, every day,” Harris explains. “Our end-users perform the daily QA tests, record the data, and then RadMachine will notify the physics group immediately if something needs attention.” This approach frees up the medical physicists to concentrate on the areas where they can make the most impact, such as correcting any anomalies and focusing on the areas required to maintain compliance.

In addition to the daily QA, the team uses RadMachine to approve quality control following major system repairs, where it helps to minimize clinical downtime. Harris adds that the software’s flexibility enables them to customize it for subtle differences between machines or between manufacturers’ QA requirements. “It’s built for imaging, so it’s set up in a fashion that’s applicable to our task and strikes the right balance in terms of scalability and efficiency,” she says.

System support

When the UH Seidman Cancer Center first deployed RadMachine, the team at Radformation helped set up the software to work with their imaging systems, as well as helping Harris and her team to define the overall structure for implementation. “When there were questions about the logic or nuances of the system, they responded in a quick fashion to help us resolve these issues and get the system set up,” Harris explains. “They were quite the asset in terms of the single sign-on integration with our IT organization.”

Bouveret agrees that Radformation’s support team was a great help in implementing the new software. “I have had a lot of contact with them,” he says. “They have helped us to customize some image analysis to respond to the Swiss legislation. Every test we have asked for, they have created for us.”

Looking ahead, Bouveret says that the Hôpital de la Tour plans to begin new treatments such as radiosurgery, which will require the development of specific QA procedures. “We will be able to create our own tests by coding in Python and then we can just import it into the RadMachine software,” he tells Physics World. “So this will be really interesting for the future.”

  • Visit Radformation at the AAPM Annual Meeting, booth #807, to see a demonstration of RadMachine.

Quantum-entangled photons are super-sensitive to Earth’s rotation https://physicsworld.com/a/quantum-entangled-photons-are-super-sensitive-to-earths-rotation/ Thu, 11 Jul 2024 08:00:44 +0000 https://physicsworld.com/?p=115506 Sagnac interferometer experiment prepares the ground for testing the effects of gravity on quantum states

The post Quantum-entangled photons are super-sensitive to Earth’s rotation appeared first on Physics World.

Image showing the Earth, centred on Vienna, Austria, with an inset showing the interferometer and the photons entering it

A new experiment has measured the effect of Earth’s rotation on entangled states of light. The experiment, which features an optical fibre-based device called a Sagnac interferometer that its developers describe as the most sensitive ever built, paves the way for even more sensitive tests of gravitational effects on quantum objects.

In a Sagnac interferometer, beams of light travel around the same path, but in opposite directions. If the interferometer is at rest with respect to a non-rotating frame of reference, the travel time for each beam is the same, and recombining the two beams at a detector will produce an ordinary interference pattern. However, if the interferometer is rotating, the interference fringes will be shifted by an amount proportional to the angular velocity. The sensitivity of a Sagnac interferometer depends on the area defined by the paths, so with a large-enough interferometer, it becomes possible to measure even very small rotations with high precision.
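
For a sense of scale, a back-of-the-envelope estimate of the Sagnac phase produced by Earth’s rotation is sketched below. The enclosed area matches the roughly 700 m² of the Vienna experiment described later in this article, while the 1550 nm wavelength and the assumption that the loop is aligned with Earth’s rotation axis are illustrative simplifications.

# Back-of-the-envelope Sagnac phase for an Earth-rotation measurement.
# Assumptions (not from the article): 1550 nm light and a loop normal aligned
# with Earth's rotation axis, so no latitude projection factor.
import math

A     = 700.0      # effective enclosed area, m^2
omega = 7.29e-5    # Earth's rotation rate, rad/s
lam   = 1550e-9    # wavelength, m (assumed)
c     = 3.0e8      # speed of light, m/s

delta_t   = 4 * A * omega / c**2               # Sagnac time delay between directions
delta_phi = 2 * math.pi * c * delta_t / lam    # corresponding phase shift

print(f"time delay  ~ {delta_t:.2e} s")            # ~2.3e-18 s
print(f"phase shift ~ {delta_phi*1e3:.1f} mrad")   # a few milliradians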

Quantum entanglement is a phenomenon in which two or more particles become inextricably linked in ways that do not appear in classical physics. For example, if one photon in a polarization-entangled pair is measured and found to have horizontal polarization, we know immediately that the other photon must be vertically polarized, no matter how far apart they were when the measurement took place. This “spooky action at a distance”, as Albert Einstein called it, was once thought to be a quirky – or even nonsensical – aspect of the quantum world, but it is now a key part of quantum cryptography and quantum communications systems as well as quantum sensors.

The rationale for using entangled photons in a Sagnac interferometer is that they accumulate twice the time difference during their journey around the two paths compared to classical photons that are not entangled, explains Haocun Yu, a Marie Curie Postdoctoral Fellow at the University of Vienna, Austria, and a member of the experimental team. “This is a unique property of multi-photon entanglement and is known as super-resolution,” Yu says. “By measuring this time difference, we are able to measure the effect of the rotation of Earth on these entangled particles.”

Keeping noise levels low and stable

The challenge with any quantum device is that entangled states are extremely fragile. Even the tiniest disturbance, or noise, in their environment can cause entangled particles to “decohere” (lose their quantum nature) through random interactions.

With interferometers, this challenge becomes more acute as the area of the device increases. Although larger Sagnac interferometers are better able to detect small rotations, any increase exposes the entangled photon pairs to additional noise.

Photo of the Sagnac interferometer, which consists of 2 kilometres of optical fibre wrapped around a 1.4 metre square aluminium frame. The frame and fibre are wrapped in white insulating material and mounted on a lab bench, slightly tilted from the vertical.

In the latest work, Yu, team leader Philip Walther and colleagues at Vienna and the Austrian Academy of Sciences constructed their interferometer by winding a 2-kilometre-long optical fibre around a 1.4 m x 1.4 m rotatable metal frame, giving the device an effective area of more than 700 m². To keep noise levels low and stable, the researchers wrapped insulation around the fibre (mitigating fluctuations due to changes in temperature and air flow) and performed reference measurements to eliminate some sources of background noise. These measures enabled the device to detect enough high-quality photon pairs to boost its sensitivity by three orders of magnitude compared to previous quantum Sagnac interferometers.
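
As a quick sanity check of that geometry, dividing 2 km of fibre into turns around a 1.4 m square gives roughly 357 loops and an enclosed area of about 700 m²:

# Quick check of the quoted geometry: 2 km of fibre wound on a 1.4 m x 1.4 m frame
fibre_length = 2000.0   # m
side         = 1.4      # m
loops = fibre_length / (4 * side)   # ~357 turns
area  = loops * side**2             # effective enclosed area
print(f"{loops:.0f} loops -> effective area ~ {area:.0f} m^2")   # ~700 m^2

Because the entangled photon pairs accumulate twice the single-photon time difference, the effective signal for a given enclosed area is also doubled.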

Isolating and extracting the signal

Apart from noise, one of the main challenges the researchers faced was extracting the Earth’s rotation signal from their data. “This meant establishing a reference point for our measurement where light remains unaffected by Earth’s rotation effect,” explains Raffaele Silvestri, a PhD student at Vienna and the lead author of a Science Advances paper on the experiment.

Since they could not stop the Earth from spinning, the researchers devised a workaround, splitting the optical fibre into two equal-length coils and connecting them via an optical switch. “By toggling the switch on and off, we could effectively cancel the rotation signal at will,” Silvestri says. “We basically tricked the light into thinking that it’s in a non-rotating universe.”

Thanks to this approach, the researchers succeeded in measuring a rotation rate with a sensitivity of 5 μrad/s – the highest resolution ever achieved with an optical quantum interferometer.

The researchers say that detecting the Earth’s rotation is a milestone. “Its minute rotation rate, fixed direction and the inability to manipulate its behaviour make it particularly challenging to observe,” Yu says. “What is more, the ubiquitous presence of acoustic and seismic vibrations and thermal fluctuations directly transduce into noise for such a large apparatus.”

The new interferometer will now serve as a prototype for a larger device that the researchers will use to explore how quantum entanglement is influenced by gravitational potential. “Further improvements to our technique will also enable measurements of general-relativistic effects on entangled photons,” Walther says. “This will allow us to explore the interplay between quantum mechanics and general relativity, along with tests for fundamental physics.”

Inverse Mpemba effect seen in a trapped-ion qubit https://physicsworld.com/a/inverse-mpemba-effect-seen-in-a-trapped-ion-qubit/ Wed, 10 Jul 2024 16:23:22 +0000 https://physicsworld.com/?p=115511 Quantum coherence and interference affect how qubits warm up

The post Inverse Mpemba effect seen in a trapped-ion qubit appeared first on Physics World.

The inverse Mpemba effect has been observed in a quantum bit (qubit). The research was done at the Weizmann Institute in Israel and suggests that under certain conditions a cooler trapped-ion qubit may heat up faster than a similar warmer qubit. The observation could have important implications for quantum computing because many leading qubit types must be maintained at cryogenic temperatures.

The Mpemba effect is the puzzling observation that hot water sometimes freezes faster than cold water. It was first recorded in antiquity and is named after Erasto Mpemba, who as a teenager in Tanzania in the 1960s sought an explanation for the effect – which he first encountered while making ice cream and then confirmed in a series of experiments. Despite the best efforts of physicists over the past six decades, the effect remains poorly understood.

Researchers have also observed the inverse Mpemba effect whereby a cold system heats up faster than a warm system. Theoretical and experimental studies have revealed a range of systems – magnetic, granular, quantum and more – that exhibit Mpemba effects.

Avoiding decoherence

Quantum Mpemba effects are of particular interest to people developing cryogenic qubits. These must be operated at very low temperatures to reduce noise, which destroys quantum calculations in a process called decoherence.

In a new experiment described in Physical Review Letters, Shahaf Aharony Shapira and colleagues observed an inverse Mpemba effect in a single trapped strontium-88 ion coupled to an external thermal bath. This low-temperature ion acted as a qubit that interacted with the thermal bath, causing a slow decoherence of its quantum state over time.

“Most studies are about the direct Mpemba effect, which is easier to understand if you think classically,” says Aharony Shapira.

She offers an intuitive description of the classical Mpemba effect. Imagine, she says, a double-well potential where one well is a global minimum – the system’s most stable state – and the other is a local minimum – a comparatively less stable state.

Uniform energy distribution

When a system is at a high temperature, its energy distribution is relatively uniform, allowing it to transition between the two wells more freely. At lower temperatures, the system’s energy distribution becomes much narrower, concentrating near the bottom of each of the wells.

If the system starts in the local minimum, a higher-temperature system – with its broader energy distribution – can cross the energy barrier between the two wells more easily, allowing it to reach the global minimum more quickly as it cools down.

“However, the inverse effect that we saw has a different intuition,” says Aharony Shapira.

End-state shortcut

To simulate the thermal bath, the team used laser pulses to induce transitions between the qubit states and the higher energy states of the trapped ion. Eventually, the qubit’s interaction with the thermal photons from the laser caused it to decohere.

The path the system takes as it moves towards its end-state is known as its “relaxation path”. This path is governed by the system’s interactions with the bath and its intrinsic quantum properties, such as coherence and interference effects that can suppress or enhance certain relaxation modes.

Unlike in classical systems, the relaxation rates in quantum systems do not change linearly with temperature. For certain initial conditions, a colder qubit might have a relaxation path that allows it to bypass certain energy barriers more efficiently than a warmer qubit. This shortcut allows it to reach the higher temperature equilibrium state faster than the warmer qubit – which is what the researchers observed.
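
One way to picture this is with a toy Markovian model in which the distance from equilibrium decays as a sum of exponential relaxation modes: a “cold” initial state that barely overlaps the slow mode can overtake a “warm” one that does. The numbers below are invented for illustration and are not taken from the trapped-ion experiment.

# Toy illustration of an inverse-Mpemba-like crossing for a generic Markovian
# system whose distance from equilibrium relaxes as two exponential modes.
# The coefficients are invented for illustration only.
import numpy as np

t = np.linspace(0, 10, 1001)
slow, fast = 0.3, 2.0   # decay rates of the slow and fast relaxation modes

# "Warm" start: closer to equilibrium overall, but with a large slow-mode weight
d_warm = 0.8 * np.exp(-slow * t) + 0.2 * np.exp(-fast * t)
# "Cold" start: further from equilibrium, but barely overlapping the slow mode
d_cold = 0.1 * np.exp(-slow * t) + 1.4 * np.exp(-fast * t)

crossing = t[np.argmax(d_cold < d_warm)]
print(f"cold state overtakes the warm one at t ~ {crossing:.2f}")   # ~0.32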

School bus analogy

Team member Yotam Shapira explains the observation using the analogy of a bus driver waiting for schoolchildren to disembark. The bus driver, he said, finishes work when the last child gets off and is therefore limited by the speed of the slowest child.

“What we saw is that we can find conditions where it’s like the slowest child didn’t show up that morning,” he says, “Now the transition is much faster.”

Hisao Hayakawa is a researcher from Kyoto University whose team observed the Mpemba effect in a quantum dot. He says that the mechanisms observed at the Weizmann Institute were similar to those seen in previous experiments. However, he suggests that the research may provide more insights into finer control mechanisms for quantum computing systems.

“These experiments suggest that the speed control to reach a desired state in quantum computers might be possible if we know the physics of the quantum Mpemba effect after a quench,” he said. A quench refers to a sudden change in a quantum system’s conditions, such as its temperature or magnetic field.

The research could influence the design of large-scale, temperature-sensitive qubit systems. “Maybe not cooling the system as much as you can would be best in the future,” said Aharony Shapira, “You need to be sensitive to special modes that, like in our case, can heat up very fast.”

A quarter of UK students say school physics teaching is poor https://physicsworld.com/a/a-quarter-of-uk-students-say-school-physics-teaching-is-poor/ Wed, 10 Jul 2024 14:00:33 +0000 https://physicsworld.com/?p=115480 A survey by the Ogden Trust finds that half of those that didn’t take physics post-16 say they did not enjoy the subject

The post A quarter of UK students say school physics teaching is poor appeared first on Physics World.

Almost half of students who quit physics at 16 in England say they did not enjoy the subject, with a quarter noting that teaching was poor. That’s according to a recent survey carried out by the Ogden Trust – a charity that seeks to encourage people to study physics above the age of 16. Most pupils who do carry on with physics post-16 are, however, satisfied with their teaching.

The trust surveyed more than 1000 undergraduate students at UK universities, roughly half of whom are doing science, technology, engineering or mathematics (STEM) degrees. Students who had taken physics A-level – about 20% of respondents – report largely positive experiences, with 86% saying it had been well taught. Some 83% of those students say their physics teachers had strong specialist knowledge.

A less positive picture emerges from the other 80% of undergraduates who never took A-level physics. Almost half of them say they did not enjoy the subject, with a quarter describing the teaching as poor. The difference could partly be due to self-selection: physics A-level students will have actively chosen the subject and therefore been more engaged.

Another reason is that A-level physics classes are usually small and more likely to be taught by specialist physics teachers. Students who carry on with physics therefore have a better experience of the subject than those who gave up at 16, whose lessons are hampered by the estimated shortage of 3500 physics teachers in England alone. “GCSE-level teachers will often be teaching out of their field without the confidence and subject knowledge to inspire the class and support a depth of understanding,” says Clare Harvey, chief executive of the Ogden Trust.

Career advantage

The survey does, however, find that physics is the top-rated subject at A-level for inclusivity and for teachers being effective subject advocates. Of the students who did not take physics post-16, only 12% cite a lack of inclusivity as the reason – the lowest percentage out of 10 barriers mentioned. Most participants, including those who did not study A-level physics, also understand the career advantages it offers, with two-thirds believing it improves students’ prospects.

“The survey helps to validate and reinforce our strategy to support teachers, providing coaching and mentoring to keep specialist physics teachers in the profession and providing subject knowledge continuing professional development and support to upskill teachers who have to teach physics when it is not their specialism,” adds Harvey. “Retaining and retraining teachers to enhance the teaching and learning of physics remains central to our strategy.”

The results of the survey tally with work carried out by the Institute of Physics (IOP), which publishes Physics World. It points out that the lack of specialist physics teachers particularly affects students from lower socio-economic groups, who are three times less likely to take physics A-level than those from higher-income groups. As a result, 70% of A-level physics students come from about 30% of schools – often in the wealthiest areas of the UK.

“This is an important piece of work from the Ogden Trust and chimes with our experience that the quality of physics teaching up to 16 is affecting students’ deep engagement with the subject and reducing their likelihood of choosing it for A-level,” says Louis Barson, director of science, innovation and skills at the IOP. “The gradual loss of specialist physics teachers and the deployment of out-of-field teachers to teach physics up to 16 has correlated with the decline in uptake at A-level.”

The IOP has called on UK governments to improve both the recruitment and retention of physics teachers – nearly half of whom leave within their first five years – and to fully fund retraining programmes for out-of-field teachers. Barson adds that the IOP has developed programmes that support recruitment and the retraining of out-of-field teachers – often in partnership with the Ogden Trust. This year, the number of accepted places on physics “Initial Teacher Education” courses is 70% up on last year.

Liquid–metal interfaces show large thermoelectric effect https://physicsworld.com/a/liquid-metal-interfaces-show-large-thermoelectric-effect/ Wed, 10 Jul 2024 08:00:00 +0000 https://physicsworld.com/?p=115497 Discovery could help us better understand Jupiter’s magnetic field as well as improve liquid–metal batteries

The post Liquid–metal interfaces show large thermoelectric effect appeared first on Physics World.

The thermoelectric effect is much stronger at the interface of two liquid metals than it is in solid–solid or solid–liquid systems. This discovery, from researchers at the Ecole Normale Supérieure (ENS) in Paris, France, could lead to improvements in batteries that contain liquid–metal interfaces, and might even enhance our understanding of Jupiter’s magnetic field.

In thermoelectric materials, the flow of heat from a warmer area to a cooler one can be harnessed to generate electricity via the thermoelectric effect, which converts temperature differences into an electric voltage. The effect is usually seen at solid interfaces between electrical conductors or semiconductors, or between solids and liquids.
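
For a sense of the magnitudes involved, the open-circuit thermoelectric voltage is simply the Seebeck coefficient multiplied by the temperature difference. With the sub-microvolt-per-kelvin coefficients typical of metal pairs (the value below is an assumption, not a number from the ENS paper), even a large gradient produces only a few microvolts – consistent with the tiny signals described later in this article.

# Order-of-magnitude Seebeck estimate. The coefficient is an illustrative value
# for a liquid-metal pair, not a number taken from the ENS paper.
S_eff   = 0.1e-6   # effective Seebeck coefficient, V/K (assumed)
delta_T = 80.0     # temperature difference across the interface, K
V = S_eff * delta_T
print(f"open-circuit thermoelectric voltage ~ {V*1e6:.0f} microvolts")   # ~8 µV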

“Some very unusual behaviours”

In the latest work, however, a team led by ENS physicist Christophe Gissinger observed a thermoelectric effect between two metals, gallium and mercury, that are both liquids at 30 °C.

“The liquid–liquid nature of the interface between the two metals leads to some very unusual behaviours,” Gissinger explains. “First, the electric currents generated in our liquid experiment are 50 to 100 times more intense than expected in conventional solid systems. Second, the geometry of these currents is complex, featuring multiple loops and stagnation points, which have no equivalent in solid thermoelectricity.”

The researchers also found that when they applied a magnetic field to the interface, the field interacted with the thermoelectric current in a way that made the two liquids rotate in a circular pattern, but in opposite directions, at a speed of a few centimetres per second. This effect, which can be observed with the naked eye, could lead to a new way of pumping these liquids.

Experimental challenges

To perform the measurements, the team had to overcome two challenges. The first, says Gissinger, was controlling the temperature of the system. “For a convincing quantitative study, we needed a very large temperature gradient, of around 80°, but to a precision of less than 0.3°, which required a powerful cooling/heating system,” Gissinger says.

The second challenge was measuring the thermoelectric effect. At just a few microvolts, the voltages the researchers measured were too low for conventional probes. This meant the team needed to create a new type of probe and perform a great deal of signal acquisition and processing on the experimental data. “Using metals that are liquid at room temperature was key because it greatly facilitated the installation of these elements,” Gissinger says.

From planets to batteries

As for practical implications of the discovery, Gissinger notes that the interior of the planet Jupiter also contains a liquid-liquid interface, between molecular hydrogen and metallic hydrogen. “Since the poles and the equator are generally not at the same temperature, you have here a configuration that is quite similar to the set-up in our experiment: a temperature gradient along an interface between two different conducting liquids,” Gissinger tells Physics World. “A thermoelectric current is generated at the interface between these two liquids that could interact with the planet’s radial magnetic field to generate complex zonal flows.”

The work could also lead to improvements in liquid metal batteries, which Gissinger describes as a highly promising technology for energy storage. These devices are similar to conventional batteries except that the anode and cathode are made from liquid metals separated by a liquid electrolyte. They are therefore built from stacked layers of conducting fluids, similar to those described in the ENS study. “We expect that by maintaining a temperature difference in different parts of the battery, a strong thermoelectric current will appear around its liquid interfaces,” Gissinger says. “With an appropriate applied magnetic field, we can thus agitate the electrodes (by thermoelectric pumping) and enhance the battery’s efficiency.”

The researchers, who report their work in PNAS, say they would now like to see whether they can amplify the new thermoelectric effect by trying out other types of conducting fluids.

Could athletes mimic basilisk lizards and turn water-running into an Olympic sport? https://physicsworld.com/a/could-athletes-mimic-basilisk-lizards-and-turn-water-running-into-an-olympic-sport/ Tue, 09 Jul 2024 10:00:12 +0000 https://physicsworld.com/?p=115332 Ahead of the 2024 Olympics, Nicole Sharp investigates nature’s most extraordinary sprinters

The post Could athletes mimic basilisk lizards and turn water-running into an Olympic sport? appeared first on Physics World.

The world’s best runners are gathering in Paris for the 2024 Summer Olympics. Sprinters vying for the title “world’s fastest” hope to chase records set by all-time greats such as Usain Bolt and Florence Griffith-Joyner, competing on a highly engineered athletic track. Across the Atlantic, however, a different type of sprinter is practising its craft daily, not on polymer-laced rubber, but on water.

Basilisk lizards – nicknamed “Jesus Christ lizards” for their ability to run on water – don’t run for accolades or titles; they’re just looking to escape predators. When threatened, these pint-sized powerhouses take a running start on land, then skitter across the water. At 100 grams, basilisk lizards are hardly heavyweights, but they’re much too heavy to be supported by surface tension.

The ability to run on water is one of the most impressive feats in the animal kingdom – a triumph of physics as much as biology. It’s a question that has intrigued researchers for many years, but there’s something else all good physicists will want to know: could humans moving at speed ever run on water?

Slap, stroke, recover

Biologist Tonia Hsieh was first struck by the basilisks’ water-running in her undergraduate class on herpetology – the study of amphibians and reptiles. She couldn’t stop thinking about their ability to seemingly defy the laws of physics, so she pursued a PhD at Harvard University in 1999 chasing these little reptiles’ superpower.

A few years before, two other Harvard researchers had studied the same problem, developing a mathematical model that was Hsieh’s starting point (Nature 380 340). Tom McMahon and Jim Glasheen had analysed videos of the lizards and showed that each step they take across the water can be broken down into three stages – slap, stroke and recovery (see “On your marks” image).

Sequence of images showing the slap, stroke, recovery cycle of a basilisk lizard on water

When the basilisk runs, its foot slaps the water’s surface, just like a human sprinter on a track. With every footfall, the runner drives their shoe into the track, and the track pushes back. That’s Newton’s third law: every action has an equal and opposite reaction.  The same holds for the basilisk. Each time the lizard’s foot hits the water, the liquid exerts an upward force. The larger the lizard’s foot and the faster it hits the water, the more upward force the slap generates.

Unlike a human sprinter, the basilisk is running on a yielding surface. When its foot dips into the water, the basilisk extends its leg like a swimmer’s arm, but it moves so fast that in the milliseconds before the water rushes in, an air-filled cavity forms above the foot. This is the stroke. During this phase the lizard’s foot experiences a lifting force proportional to the amount of water it moves.

In the final phase – recovery – the basilisk quickly pulls its foot up and lifts it for the next slap. Anyone who’s waded through knee-high water knows this isn’t easy. The basilisk’s foot must make it out before the water closes around it, otherwise it will be dragged down.

Glasheen and McMahon showed that thanks to the basilisk’s speed and large feet, the slap–stroke–recover sequence should generate enough upward thrust to support the lizard’s weight. For Hsieh, however, many questions remained. Staying above the water is only the first challenge: the basilisk also needs to move forward, and it needs to do this while balancing on the ever-changing surface of a liquid.

Like running on a mattress

To tackle these questions, Hsieh built a watery track for her runners using large aquarium tanks. A platform on either end gave her subjects solid starting and finishing lines. Hsieh stood by with a high-speed camera, poised to capture each run. She soon discovered, however, the challenges of running a scientific experiment on live, free-willed lizards.

“I had so many videos of them running across the water and then turning and running smack into the window,” she recalls. For useful data, she therefore had to wait until the lizards decided to run. Eventually, though, Hsieh was able to capture the basilisk foot’s speed and orientation in each phase. To calculate the forces, however, she needed to capture the motion of the water as well as the lizards.

Fluid dynamicists use a technique called particle image velocimetry (PIV) to measure the speed and direction (i.e. the velocity) of flow. For an analogy, think about late-afternoon sunlight slanting through a window. If it falls at a particular angle, the light will illuminate the dust particles in the room, revealing air currents that are otherwise invisible. PIV works the same way.

Hsieh filled her tank with 12-micron glass spheres that matched the water’s density. These tiny particles acted as tracers – just like dust – that followed the same path as the water. To illuminate them, she replaced sunlight with a 1 mm-thick laser sheet focused near the basilisk’s foot. By tracking the particles, Hsieh saw exactly how the lizards accomplished their gravity-defying sprint (PNAS 101 16784).
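
At its core, PIV recovers the displacement of the tracer particles between two frames from the peak of their cross-correlation. The snippet below is a toy illustration of that idea using synthetic data; it is not Hsieh’s actual analysis pipeline.

# Toy particle image velocimetry: locate the displacement of tracer particles
# between two frames from the peak of their circular cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
frame1 = np.zeros((64, 64))
ys, xs = rng.integers(5, 55, size=(2, 40))   # 40 random tracer particles
frame1[ys, xs] = 1.0
true_shift = (3, 5)                          # imposed flow displacement, pixels
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# Correlation theorem: the cross-correlation peak sits at the displacement
corr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print("recovered displacement:", int(dy), int(dx))   # -> 3 5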

The lizards were using the water like a squishy starting block – propelling themselves forwards as well as upwards. During the slap, they did this by angling their feet slightly down, so they met the surface at an angle, and during the stroke they were pushing against the wall of the cavity.

But Hsieh’s biggest surprise was the strong side-to-side forces, ranging from 37–79% of the lizard’s body weight. The basilisks were throwing themselves from side-to-side like a wobbling toddler. “It never occurred to me,” says Hsieh, “that if you can produce enough force to stay on top of water, how are you going to maintain your balance? That’s a really major problem.”

Everything we take for granted about moving around in the world is actually really, really hard

Imagine sprinting on a thick foam mattress. With every step, you’ll wobble as you try to stay balanced. That, Hsieh realized, explained the basilisk’s ungainly sideways motion. “They’re basically tripping every single step, and they’re catching themselves. Everything we take for granted about moving around in the world is actually really, really hard.”

Water running is a young basilisk’s game. As Hsieh’s lizards grew, they became bigger and slower, struggling to support their weight – potentially bad news for aspiring human water runners. However, these lightweight critters don’t have the last word in running on water. In fact, nature’s most successful water runners outweigh the basilisk lizard by a factor of 10.

Water dancers

Western grebes and their near-doppelgängers, Clark’s grebes, are unassuming birds. They have slender, swan-like necks and black and white plumage. Their most striking feature is their red eyes. But behind this unremarkable exterior lurks a water-running powerhouse.

Biologist Glenna Clifton first saw the grebes in the early 2010s in a BBC documentary. At the time she was a PhD student at Harvard, and her animal-behaviour class was discussing mating displays. Like many birds, the western and Clark’s grebes begin their displays with mated pairs mirroring one another. A head shake, a riffle of a beak through plumage. Then the birds extend their long necks, lock eyes and rush (see “It takes two” image).

A pair of western grebes running across water

As a synchronized pair, the grebes rise out of the water, feet beating furiously, wings held stationary behind them. With heads proudly raised, the birds run up to 20 metres in just a few seconds. Having studied ballet since the age of three, Clifton was “really sparked and captivated” by this dance. But apart from previous work on basilisk lizards, she found no information on the water-running physics of grebes, so she set out to answer the question herself.

Grebes won’t run on water in a laboratory, so Clifton planned a field study to capture wild grebes rushing. In May 2012 she set out for Oregon’s Upper Klamath Lake with two high-speed cameras and two field assistants. They called themselves the Grebe Squad.

Each morning the group pitched up tents and arranged their equipment on a narrow spit of land between the highway barrier and the lake. Their experiment used two synchronized cameras, placed about 40 metres apart, to view the same birds from different angles. By placing a known object – in their case, a T-shaped calibration wand – in the field of view of both cameras, they could work out the sizes, angles and positions of the grebes (J. Exp. Biol. 218 1235).

This sounds straightforward, but it was anything but. The Grebe Squad spent days on the lake, scanning hundreds of birds for signs of an imminent rushing display. “We usually got about three to seven seconds of warning,” Clifton says, “because they would have a certain look in their eye. They would call to each other…with a certain kind of intensity.” With that scant warning, they coordinated over walkie-talkies, pointed both cameras, manually focused and collected 1.7 seconds of high-speed footage. Then they raced to get the calibration wand to the same spot the birds had been in before either camera moved. An errant elbow, a gust of wind or a sinking tripod would ruin the data.

“Grebes are, arguably, way stronger water-runners than basilisk lizards because they start from within the water,” Clifton explains. “Imagine treading water fast enough to get up out of it. It’s just crazy.” Synchronized swimmers and water-polo players would agree. Typically, a swimmer (or bird) is supported by buoyancy. A floating object displaces a volume of water whose weight equals its own, as described by Archimedes’ principle. In turn, the object feels an upward, buoyant force equal to the displaced water’s weight. As the grebe rises, it displaces less and less water, giving it less and less buoyant force.

Is there a balance between foot size and energy expended that would let humans run on water?

Without buoyancy holding it up, the grebe counters its weight the same way a basilisk does – by slapping its feet. And the Grebe Squad’s data revealed that grebes slap a lot. During rushing, Clifton says, “[grebes] take up to 20 steps per second, which is a really high stride rate for animals.” An Olympic sprinter, in contrast, takes about five steps per second.

To estimate the forces a grebe produces, Clifton dropped aluminium models of grebe feet into a laboratory water tank and measured the impact force. The grebes’ feet are proportionally bigger and they move faster than the basilisk’s. They produce stronger slaps capable of supporting 30–55% of the grebe’s weight, compared with only 18% for the lizard’s. If a basilisk lizard were scaled up to the mass of a grebe, its feet would still be 25% smaller in area than the grebe’s.

Larger feet push more water with each slap, but they also require more energy to accelerate and they generate more drag. Is there a balance between foot size and energy expended that would let humans run on water?

Could humans run on water?

Although it was built for basilisk lizards, the model developed by Glasheen and McMahon at Harvard also tells us what it takes for a human to run on water. The idea is simple: to run on water, the total impulse from a slap and stroke must be greater than the impulse needed to support the runner’s weight. Impulse is simply a force multiplied by the time over which it acts – in this case, the time between steps. With a little algebra and some simple assumptions, you can determine the slap velocity needed, given the runner’s mass and foot area, as well as the time between steps and the depth the foot reaches.
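
A much cruder, drag-only version of that balance – with generously chosen values for foot area, drag coefficient and stroke depth, none of which come from the original paper – already lands in the same tens-of-metres-per-second ballpark.

# Crude drag-only estimate of the foot speed a human would need to stay on top
# of water. Far simpler than Glasheen and McMahon's slap-plus-stroke model;
# the foot area, drag coefficient and stroke depth are illustrative assumptions.
rho  = 1000.0   # water density, kg/m^3
g    = 9.81     # m/s^2
mass = 80.0     # runner's mass, kg
f    = 5.0      # steps per second (world-class sprinter)
A    = 0.026    # foot (plus splayed toes) area, m^2 (assumed)
Cd   = 1.1      # drag coefficient of a flat plate, roughly (assumed)
s    = 0.35     # stroke depth, m (assumed)

impulse_needed = mass * g / f                  # momentum per step to stay up, N*s
# drag impulse per stroke: (1/2 rho Cd A u^2) * (s/u) = (1/2 rho Cd A s) * u
u = impulse_needed / (0.5 * rho * Cd * A * s)  # required foot speed
print(f"required impulse per step ~ {impulse_needed:.0f} N*s")   # ~157 N*s
print(f"required slap/stroke speed ~ {u:.0f} m/s")               # ~30 m/s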

A man in a harness running in a pool of water, and a pair of flippers

In their original paper, Glasheen and McMahon calculate that an 80 kg human with an average foot size and a world-class sprinter’s stride rate would have to slap the water at a speed of nearly 30 metres per second to support themselves. Unfortunately, the power needed for a stroke at that speed is almost 15 times greater than a human’s maximum sustained output. In other words, no human can run on water – at least, not on Earth.

That’s the theory, but does it stack up in reality? To find out, in 2012 a group led by Alberto Minetti, a physiologist at the University of Milan, studied whether reduced gravity conditions would enable humans to run on water (PLOS One 7 e37300). Their volunteers wore a special harness that reduced their effective weight to a fraction of its Earth-normal amount, along with fins that made their feet as proportionately large as a basilisk’s (see “All in a day’s work” images). Then they attempted to run on the spot in a small, inflatable pool. The video footage is spectacular.

The test subjects look more like cyclists than runners. Their thighs pump up and down, churning the water into splashes higher than their heads. Their legs stroke to a depth a little over halfway to their knees, but it’s clear that they manage to support their reduced weight for the seven to eight seconds the researchers deemed a success.

The team found that everyone could water-run at 10% of Earth’s gravity, but as they adjusted the harness so that the effective gravitational force increased, fewer runners could keep up. At 16% of Earth’s gravity – roughly equivalent to the Moon’s gravity – most of the runners could support themselves. At 22% of Earth’s gravity – still less than that on Mars – only one subject could. The Martian edition of the Space Olympics is unlikely to include water-running.

An image of Saturn’s moon Titan

But water isn’t the only liquid found in our solar system. Titan, Saturn’s largest moon, has lakes and seas comparable to ours, and its gravitational acceleration is only 13.8% of Earth’s. (That’s a little less than our Moon’s.) Unlike Earth’s lakes, Titan’s are made of frigid liquid ethane and methane (see “Running on Titan” image).

So, could a human being – like current women’s 100 m world champion Sha’Carri Richardson – run on Titan’s lakes? Ethane – even at Titan’s 94 kelvin – is less dense than water, so it offers less impulse to runners. But Titan’s lighter gravity counters that.

At 45 kg, Richardson, who is representing the US in Paris, is petite but blisteringly fast. In the 2023 World Athletics Championships she won gold with a championship record of 10.65 seconds in the 100 m. Her UK size five shoes provide a good foot area. When sprinting (on land, admittedly), she takes ~4.6 steps a second. I’ll assume on ethane that she sinks about 8 centimetres – a bit less than the Italian water-runners – during each step.

To stay atop Titan’s ethane, Richardson would have to slap the surface at about 9.0 m/s. That slap would provide more than 60% of her necessary vertical impulse and require running at about 8.7 metres per second (31.2 kilometres per hour). Her world-championship time was significantly faster at 9.3 metres per second.

So quick dashes across Titan’s lakes are theoretically possible – at least for humanity’s fastest. Just make sure to dress warmly and maybe hold your breath.

  • For more from Nicole Sharp about the fluid dynamics of animals, listen to the July edition of the Physics World Stories podcast.

Matter-wave interferometry puts new limits on ‘chameleon particles’ https://physicsworld.com/a/matter-wave-interferometry-puts-new-limits-on-chameleon-particles/ Tue, 09 Jul 2024 07:30:40 +0000 https://physicsworld.com/?p=115469 Gravity measurement benefits from optical lattice

The post Matter-wave interferometry puts new limits on ‘chameleon particles’ appeared first on Physics World.

A matter-wave interferometer has measured the effect of gravity on individual atoms at the highest precision to date. That is according to its creators in the US and Italy, who built their instrument by combining matter-wave interferometry with the spatial control of optical lattices. The research was led by Cris Panda at the University of California, Berkeley and puts new constraints on some theories of dark energy involving “chameleon particles”.

Matter-wave interferometry is a powerful technique for probing fundamental physics. It takes advantage of wave–particle duality in quantum physics, which says that atoms behave as waves as well as particles.

“Lasers are used to split each atom in a quantum spatial superposition, such that each atom is effectively in two places at once,” Panda explains. “The two parts are then recombined and interfere either constructively, if the two parts are in-phase, or destructively, if they are out of phase.”

Search for new physics

When existing in two places, the phase of each component of the matter wave can be affected differently by external forces such as gravity. As a result, matter-wave interferometry can be used to make extremely precise measurement of the nature of these forces. This means that it can be used to search for deviations from the Standard Model of particle physics and Einstein’s general theory of relativity.

One fruitful area of investigation involves probing the gravitational force by placing a matter-wave interferometer next to a large mass. Atoms are split between locations at two different distances from the mass, allowing a comparison of the gravitational attraction between the atom and mass at two different places.

A shortcoming of such experiments, however, is that the atoms quickly fall out of place under Earth’s gravity, so making measurements longer than a few tens of milliseconds is difficult – limiting accuracy.

Atoms on hold

In 2019, the Berkeley team showed that optical lattices offer a solution by using lasers to hold atoms in position in Earth’s gravitational field. Earlier this year Panda and colleagues managed to hold atoms for 70 s in this way. Now, they have integrated a tungsten mass into their instrument.

Their latest experiment began with a gas of neutral caesium atoms that was cooled to near absolute zero in a magneto-optical trap. Some of the atoms were then transferred to a vertical optical lattice located just below a cylindrical tungsten mass that is about 25 mm in diameter and height (see figure).

A laser pulse was used to put the atoms into a superposition of two different micron-scale distances below the mass. There, they were held until a second pulse combined the superposition so that interference could be observed.

Information accumulation

“During the hold, the part of the atom in each location accumulates information about the local fields, particularly gravity, which can be read out at the end of the interferometer,” Panda explains. “The hold time can be many seconds and up to one minute, much longer than possible when atoms are falling under Earth’s gravity, which makes this device exceedingly sensitive.”
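
To see why long hold times matter, the phase difference between the two parts of the superposition grows as mgΔzT/ħ. Plugging in round numbers for caesium – the micron-scale separation and ten-second hold below are illustrative, not the experiment’s exact parameters – gives an enormous accumulated phase, which is why even tiny changes in the local acceleration shift the interference measurably.

# Rough phase accumulated between the two parts of a caesium atom held at
# heights differing by 1 micron for 10 s in Earth's gravity. The separation
# and hold time are round illustrative numbers, not the experiment's values.
hbar = 1.055e-34        # J*s
m_Cs = 133 * 1.66e-27   # mass of caesium-133, kg
g    = 9.81             # m/s^2
dz   = 1e-6             # height separation, m (assumed)
T    = 10.0             # hold time, s (assumed)

phase = m_Cs * g * dz * T / hbar
print(f"accumulated phase ~ {phase:.1e} rad")   # ~2e5 rad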

While the experiment did not reveal any deviations from Newton’s law of universal gravitation, it did allow the team to put new constraints on some theories of dark energy – which is a hypothetical form of energy that is invoked to explain the universe’s accelerating rate of expansion. Specifically, the team put limits on the possible existence of “chameleon particles”, which are dark energy candidates that couple to normal matter via gravity.

The team is also confident that their technique could have exciting implications for a wide range of research. “Our interferometer opens the way for further applications, such as searches for new theories of physics through precise measurements of fundamental constants,” Panda says. “It could also enable compact and practical quantum sensors: such as gravimeters, gyroscopes, gradiometers, or inertial sensors.”

The research is described in Nature.

‘Poor man’s Majoranas’ offer testbed for studying possible qubits https://physicsworld.com/a/poor-mans-majoranas-offer-testbed-for-studying-possible-qubits/ Mon, 08 Jul 2024 12:00:52 +0000 https://physicsworld.com/?p=115464 A new approach could put Majorana particles on track to become a novel qubit platform, but some scientists doubt the results’ validity

The post ‘Poor man’s Majoranas’ offer testbed for studying possible qubits appeared first on Physics World.

Majorana particles could be an important component in quantum computers – if they exist. Using a new approach, physicists at QuTech in the Netherlands say they have now found fresh hints that Majorana-type behaviour is possible. They have also devised an experimental testbed for studying the properties of these particles and determining whether they live up to expectations.

Quantum computers use phenomena such as superposition to solve problems that would be impossible for classical machines. However, the quantum mechanical states they use for their computations are fragile. These states are known as quantum bits, or qubits, and they easily decohere, meaning that they lose their quantum nature and thus their ability to perform calculations. This is true for all existing qubit platforms, including trapped ions, spin qubits, superconducting qubits, Rydberg atoms and others. The only way of avoiding decoherence is to keep these quantum systems extremely stable, which requires equipment that is bulky, expensive and complex.

One possible alternative is to make qubits from so-called Majorana bound states (MBSs). These states are quasiparticles that arise from collective effects in a superconducting system, and they are protected from decoherence by the system’s topology – for example, by being bound to opposite ends of a nanoscale wire. This stability could make it possible to perform quantum computations with fewer qubits, but MBSs are notoriously difficult to produce. Only a handful of labs have seen positive hints of their existence, and past claims about Majoranas have produced intense debate within the scientific community. Several once-promising results have become the subject of expressions of concern or retractions, including two Nature papers co-authored by researchers at QuTech (which is a collaboration between the Delft University of Technology and TNO, the Netherlands’ organization for applied science research).

A new route

In the latest work, which is also published in Nature, a different team at QuTech produced MBSs by coupling two spin-polarized quantum dots in a semiconductor-superconductor hybrid material. The coupling occurs via so-called Andreev bound states, which can be seen as a superposition of electrons and holes. By controlling these states using magnetic fields and gate voltages, the researchers tuned the system to “sweet spots” that demonstrate correlated zero-bias conductance peaks (ZBPs), which are an important property of MBSs and are resilient in the face of local perturbations.

However, because these experiments use only two quantum dots, the MBSs that arise are not topologically protected. These states have therefore been nicknamed “poor man’s Majoranas”, and the QuTech researchers aim to use them as a platform to study how the protection of Majoranas evolves as the number of sites in a so-called Kitaev chain increases.
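
The underlying toy model is the two-site Kitaev chain. A minimal numerical sketch of that standard Hamiltonian (using arbitrary parameters, not those of the experimental device) shows the even- and odd-parity ground states becoming degenerate at the “sweet spot” t = Δ, μ = 0 – the signature of a poor man’s Majorana pair – and splitting away from it.

# Two-site Kitaev chain: H = -mu*(n1 + n2) - t*(c1^dag c2 + h.c.) + D*(c1 c2 + h.c.)
# In the occupation basis it splits into an even-parity block {|00>, |11>} and an
# odd-parity block {|10>, |01>}. At the sweet spot t = D, mu = 0 the two parity
# sectors share a degenerate ground state. Standard toy model, arbitrary numbers.
import numpy as np

def ground_energies(mu, t, D):
    even = np.array([[0.0, -D], [-D, -2 * mu]])   # couples |00> and |11>
    odd  = np.array([[-mu, -t], [-t, -mu]])       # couples |10> and |01>
    return np.linalg.eigvalsh(even)[0], np.linalg.eigvalsh(odd)[0]

for mu, t, D in [(0.0, 1.0, 1.0), (0.2, 1.0, 1.0), (0.0, 1.0, 0.7)]:
    e_even, e_odd = ground_energies(mu, t, D)
    print(f"mu={mu}, t={t}, D={D}: parity splitting = {abs(e_even - e_odd):.3f}")
# The splitting vanishes only at the sweet spot (mu=0, t=D) in these examples.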

Key experiments

The researchers performed two key experiments to verify the Majorana-ness of their system. In the first, they isolated Majorana pairs from each other and showed that disturbing one leaves the other unaffected, demonstrating that the ZBPs are indeed protected from local disturbances. In the second, the team estimated the Majorana polarization, which is an important metric for the quality of MBSs and is key to using them in qubits, which require multiple MBSs.

Srijit Goswami, the QuTech physicist who led the latest study, says that realizing these Kitaev chains in a two-dimensional platform is an important step towards the systematic  study of MBSs. Disorder and impurities in these devices, he explains, sometimes create strong electrical fluctuations that would not be desirable in scalable qubits. Though these fluctuations do not undermine the physics of the experiment, Goswami acknowledges they need to be tackled before working towards more complex devices. “In order to realize a qubit, we must first understand the primary decoherence mechanisms for Majorana-based qubits,” he tells Physics World.

Debating Majoranas

Sankar Das Sarma, a theorist at the University of Maryland, US, who has collaborated with QuTech scientists in the past but was not involved in this work, describes the result as good for basic research, but emphasizes that an expansion to many-dot experiments will be necessary to create a workable technology. He also expresses some uncertainty as to whether the team’s approach – using quantum dots rather than nanowires, as previous experiments did – could lead to Majorana modes with exponential topological protection. The team’s biggest achievement, he says, is fabricating the sample and doing the experiment.

Henry Legg, a theorist at the University of Basel, Switzerland, who was likewise not involved in this research, says it is nice to see new device fabrication on new platforms. However, he suggests that the team may, in fact, have observed other, less interesting states that resemble Majorana bound states (MBSs), but lack the necessary topological protection. Such states would not have any advantage over other platforms such as superconducting or spin qubits, he explains, adding that the result has no proven relevance to the ultimate goal of a true MBS. The main challenge in this field, he says, is to overcome the impact of disorder, and it is not clear that producing a true Kitaev chain from these building blocks will do that.

Two critics of previous QuTech research on Majoranas are also not convinced. Vincent Mourik of Germany’s Forschungszentrum Jülich points out that the retracted QuTech Majorana work contained undisclosed data manipulations, including “inappropriately deleted or cropped” data in published figures. Given these earlier practices, and the similar topic, Mourik says he hoped that QuTech would adopt a policy of sharing the full data of a project upon publication. In the latest work, however, he notes that Goswami and colleagues only shared data corresponding to the published figures, not the full dataset.

Sergey Frolov, a physicist at the University of Pittsburgh, US, offers a similarly harsh judgement. “This paper, like the other two papers from Delft in Nature on this, do not report Majorana, present data in a very narrow way and do not advance quantum computing,” says Frolov, who co-organized an international conference on reproducibility in condensed-matter physics earlier this year. “Once we get the old data out of them, we will request this ‘poor man Majorana’ data as well,” he adds. “I think there will be things to find in the full data…Until they undergo a full external investigation, none of their results should be taken seriously.”

In response, the authors of the latest QuTech paper say that there is “no methodological connection” between this work and the work described in the previous retracted papers. “These experiments were conducted in a different research group with a different set of researchers,” they continue. “We find it deeply concerning that [Mourik and Frolov] place us, as authors of this new publication, under collective suspicion.”

The authors also state that QuTech’s data management policy “goes further than the policy of other research institutions” as QuTech publishes “at least all raw data underlying published figures, as well as the processing scripts. This was also the case for the current publication.”

  • This article was amended on 11 July to include a response from the authors

Green challenge: can the shipping industry clean up its act? https://physicsworld.com/a/green-challenge-can-the-shipping-industry-clean-up-its-act/ Mon, 08 Jul 2024 10:00:59 +0000 https://physicsworld.com/?p=115184 James McKenzie reports on the launch of new ships that will help the world to cut greenhouse-gas emissions

The post Green challenge: can the shipping industry clean up its act? appeared first on Physics World.

]]>
Back in 2022 when I wrote about new developments in wind-powered shipping, my article attracted quite a lot of comments. Ships are big polluters and I described some of the amazing innovations using wind to power such vessels. But some people seemed to think wind-powered shipping was a retrograde step if we want to make shipping greener.

Many of the objectors repeated the memes that prompted me to write that original article in the first place – essentially mocking the notion that wind-powered ships could possibly be the solution. But shipping is cleaning up its act, with plenty of recent developments. Most exciting of all is that several novel “clean” ships have been launched in the last few months.

Shipping is the lifeblood of the global economy, with about 90% of trade being seaborne. More than 90,000 ships crossed the oceans in 2018, burning two billion barrels of the dirtiest fuel oil. Ships account for 2–3% of global greenhouse-gas emissions, and they also belch out sulphur dioxide, nitrogen oxides and particulate matter, endangering human health, especially along key shipping routes.

But shipping – like aviation – isn't covered by the Paris Agreement on climate change, which seeks to limit the global temperature rise to 2 °C this century via emissions reductions. Instead, it is up to the International Maritime Organization (IMO) – the United Nations agency that regulates shipping – to negotiate cuts. It wants to halve emissions from the sector by 2050; left unchecked, they would otherwise rise six-fold by then.

Most ships today still burn bunker fuel – a “heavy” oil that’s mostly what’s left over after crude oil is refined. Literally, it’s the dregs: when burned, it spews out about 3500 times as much sulphur as road diesel fuels. Since 2019, however, the IMO has said that all ships over 5000 tonnes – which emit 85% of all maritime greenhouse gases – must collect fuel-oil consumption data.

In 2020 the IMO also banned the sale of high-sulphur fuels. In the past, such fuels were allowed to contain as much as 3.5% sulphur, but that limit has now been capped at 0.5%. Ships that continue to burn higher-sulphur fuel must use “scrubbers” to clean up their exhaust gases. These mandatory measures from the IMO are a good first step, but they’re not going to solve the problem long term or let us hit targets.

It’s a gas

So how can we cut emissions from ships? It’s a complex question that depends on the energy density of a fuel, how much space you have on a ship, and how far it needs to travel on a regular basis. One of the main contenders is liquefied natural gas (LNG), which has an energy density of 55 MJ/kg compared to 45 MJ/kg for heavy oil.

Many see liquefied natural gas as the only realistic short-term solution if we are to cut shipping emissions by 40% by 2030

Indeed, many see LNG as the only realistic short-term solution if we are to meet the IMO’s interim target, which it introduced last year, of cutting emissions by 40% by 2030. LNG is 25% less carbon intensive than heavy oils and doesn’t emit as much nitrogen and sulphur oxide. It’s also a mature technology, with many ships already running on LNG.

Green hydrogen, which has an energy density of 120 MJ/kg, is a longer-term goal, but the economics are currently not on its side. Ammonia (18.5 MJ/kg) is another contender even if it’s toxic and, like hydrogen, has to be stored under high pressure. Still, the shipping industry is used to moving it around the globe as it’s a feedstock for many industrial processes and a good method of storing hydrogen.

Batteries, though, aren’t realistic for many applications. Their energy density is tiny (0.4 MJ/kg), so the battery packs needed for any serious range would take up valuable space on board. Nuclear is not considered an option either: despite its enormous fuel energy density (79,390,000 MJ/kg), it comes with many restrictions, huge operational costs and plenty of geopolitical issues.
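To put those energy densities in perspective, here is a minimal Python sketch comparing how much of each fuel a ship would need to carry for the same journey. The densities are the figures quoted above; the 10 TJ voyage demand is a made-up round number used purely for illustration.

# Illustrative comparison of fuel mass for a fixed voyage energy.
# Energy densities (MJ/kg) are the figures quoted in the article;
# the 10 TJ voyage demand is an assumed round number, not real data.
energy_densities_mj_per_kg = {
    "heavy fuel oil": 45,
    "LNG": 55,
    "green hydrogen": 120,
    "ammonia": 18.5,
    "batteries": 0.4,
}

voyage_energy_mj = 10e6  # 10 TJ, assumed for illustration

for fuel, density in energy_densities_mj_per_kg.items():
    mass_tonnes = voyage_energy_mj / density / 1000  # kg converted to tonnes
    print(f"{fuel:>15}: {mass_tonnes:,.0f} t")

On these made-up numbers a battery-only ship would need to devote tens of thousands of tonnes to energy storage, which is consistent with the point above that batteries only make sense when range requirements are modest.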

There are now over 469 LNG-powered ships in operation, according to the latest data in the Alternative Fuels Insight (AFI) platform from risk-management experts DNV. Whilst this is still in the “burning things to make things move” category, LNG is much better environmentally than bunker fuel and a step in the right direction. Taking the current order book into account, DNV reckons there will be more than 1000 LNG ships by 2027.

Batteries, ammonia and hydrogen

Among other recent developments, we’ve seen two battery-powered container ships completed last year by Shanghai-based COSCO Shipping. Battery prices continue to fall due to volume production, and the new ships use swappable shipping-container-sized batteries, which is a neat idea. The two ships were built by CHI in Yangzhou and are the world’s first 700 TEU electric container ships.

In what will make SI unit purists squirm, a TEU (or twenty-foot equivalent unit) is a measure of cargo capacity, where 1 TEU corresponds to a standard container 20 feet (6.1 m) long. Large container ships can typically transport more than 18,000 TEU, while some can even carry more than 21,000 TEU. The COSCO ships use battery-powered electric motors that can move a ship at a top speed of 19.4 km/h, with each battery having a total capacity of over 50,000 kWh.
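As a rough back-of-the-envelope check on those figures (my own arithmetic, restating the numbers above, not data from COSCO):

50,000 kWh × 3.6 MJ/kWh = 180,000 MJ ≈ 180 GJ per battery
180,000 MJ ÷ 0.4 MJ/kg ≈ 450,000 kg ≈ 450 tonnes of cells

which is heavy, but tractable when the cells are packaged as swappable, container-sized units.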

The Samskip SeaShuttle

COSCO expects each such ship could reduce carbon dioxide emissions by almost 3000 tonnes per year. The two vessels are named COSCO Shipping Green Water 01 and COSCO Shipping Green Water 02, and the former will reportedly be used on the Yangtze River, shipping goods from Jiangsu to Shanghai.

Work is still progressing on ammonia-powered fuel-cells for ships. Although conventional fuel oil has a higher energy density, ammonia is easily stored in bulk as a liquid at modest pressures (10–15 bar) or refrigerated to –33 °C. Ammonia also benefits from an existing near-global distribution network, in which it’s stored in large, refrigerated tanks and transported around the world by pipes, road tankers and ships.

The world’s first clean, ammonia-powered container ship was announced in November 2023

The world’s first clean, ammonia-powered container ship was announced in November 2023 in a joint venture between Yara Clean Ammonia, North Sea Container Line, and Yara International. The 1400 TEU vessel, named Yara Eyde, will – when it starts operation in 2026 – be the first to sail emission-free on a route between Norway and Germany.

As for hydrogen, 2021 saw the launch of the MF Hydra – the first commercial passenger and car ferry running on liquid hydrogen. Powered by two 200 kW fuel cell modules, the ship – built by Norled AS – can carry up to 295 passengers, eight crew members and 80 vehicles. It currently sails on a triangular route between Hjelmeland, Skipavik and Nesvik. In fact, Norway has ruled that all ferries, tourist boats and cruise ships travelling on its World Heritage fjords must be zero-emission by 2026.

Another company at the forefront of developments is Samskip, which was awarded a contract worth NKr149m ($14m) from Norway’s government to develop two SeaShuttle hydrogen fuel-cell vessels in 2022. The company claims that the pair will be the first green hydrogen-powered container vessels of their size for short-sea journeys. The 135 m-long 500 TEU vessels run on 3.2 MW fuel cells and will also have diesel backup generator sets. Delivery is due next year.

Green future

The shipping industry is building up valuable experience in these new technologies and does not seem to be shying away from the challenge. But will these technologies scale up, how long will it take – and at what cost? Those are the big questions given the huge number of vessels currently sailing on the world’s oceans and the fact that they still have very long operational lives.

It will take a long time to build new ships or refurbish existing vessels to make them greener. But the direction of travel for shipping is becoming clearer.

The post Green challenge: can the shipping industry clean up its act? appeared first on Physics World.

]]>
Opinion and reviews James McKenzie reports on the launch of new ships that will help the world to cut greenhouse-gas emissions https://physicsworld.com/wp-content/uploads/2024/06/2024-06-Transactions-Green-Ships_hydra.jpg newsletter
Low-frequency ultrasound triggers targeted drug delivery https://physicsworld.com/a/low-frequency-ultrasound-triggers-targeted-drug-delivery/ Mon, 08 Jul 2024 08:30:50 +0000 https://physicsworld.com/?p=115422 Targeted release of drugs from ultrasound-sensitive nanocarriers could minimize side-effects associated with conventional medication

The post Low-frequency ultrasound triggers targeted drug delivery appeared first on Physics World.

]]>
Conventional medication is often limited by low drug effectiveness or intolerable side effects caused by the drug reaching parts of the body where it’s not needed. As such, there’s increasing interest in developing methods for targeted drug delivery, to increase the efficacy and safety of pharmacological treatments.

One promising approach for localized drug delivery lies in the use of low-intensity ultrasound as a safe and practical way to trigger targeted drug release from circulating nanocarriers. Previous investigations using perfluorocarbon (PFC) nanodroplets as the drug carriers, however, have proved either effective or safe, but not both.

In their latest study, researchers from the University of Utah have shown that such PFC nanodroplets can be activated using low-frequency ultrasound, providing effective drug release with a favourable safety profile. Writing in Frontiers in Molecular Biosciences, the team also describes an optimized method for reliable manufacture of the nanocarriers, to help in the clinical translation of this approach.

“Delivery of medication into specific parts of the body has been a dream of medicine,” says Jan Kubanek, the study’s senior author. “This prevents the problems associated with taking drugs in the common way, which affects the entire body and all organs, often triggering substantial side effects.”

Drug release mechanisms

The nanocarriers comprise polymeric nanodroplet shells, roughly 500 nm in diameter, filled with inert PFC cores mixed with a drug. When exposed to ultrasound, the PFC cores expand, stretching the droplet’s shell and releasing the encapsulated drug. This expansion arises due to either the mechanical or the thermal effects of ultrasound, with high-frequency ultrasound accentuating thermal mechanisms and low frequencies causing mechanical disruption.

Most studies to date have investigated nanodroplets with PFC boiling points below body temperature, activated by high-frequency (1 MHz or greater) ultrasound. These low-boiling point PFCs, however, run the risk of spontaneous drug release – the nanodroplets have been observed to more than double in size after 24 h at room temperature. High-boiling point PFCs are safer and more stable but to date have not demonstrated effective drug release, likely because they are less affected by the thermal mechanisms associated with high-frequency ultrasound.

To identify suitable ultrasound parameters to trigger drug release from high-boiling point nanodroplets, the team examined three different PFC cores – perfluoropentane (PFP, with a boiling point of 29°C), decafluoropentane (DFP, 55°C) and perfluorooctylbromide (PFOB, 142°C) – loaded with the anaesthetic drug propofol.

The researchers measured the level of drug released into organic solvent upon exposing the nanodroplets to either low-frequency (300 kHz) or high-frequency (900 kHz) ultrasound. At 900 kHz, they observed an increase in drug release with decreasing PFC boiling point, supporting the hypothesis of a thermal mechanism. They also saw a quadratic dependence of drug release on ultrasound pressure, in accordance with the fact that ultrasound-delivered thermal energy is proportional to pressure squared.

Propofol release versus PFC core boiling point

In contrast, sonication at 300 kHz produced a linear dependence of drug release on ultrasound pressure, consistent with a mechanical rather than a thermal effect. The researchers also found that for all three PFC cores, 300 kHz ultrasound released a greater percentage of drug (an average of 31.7%) than 900 kHz (20.3%).
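The distinction between the two regimes can be captured in a few lines of Python. The sketch below fits release-versus-pressure data to both a linear model (release proportional to pressure, expected for a mechanical mechanism) and a quadratic model (release proportional to pressure squared, expected for a thermal mechanism). The data values are invented for illustration and are not the Utah team’s measurements.

import numpy as np

# Hypothetical (made-up) drug-release data: pressure in MPa, release in %.
pressure = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
release_300khz = np.array([6, 13, 19, 26, 32])   # roughly linear in p
release_900khz = np.array([1, 3, 7, 13, 20])     # roughly quadratic in p

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

for label, y in [("300 kHz", release_300khz), ("900 kHz", release_900khz)]:
    # Fit release = a*p + b (linear in pressure) and release = a*p^2 + b (quadratic)
    lin = np.polyval(np.polyfit(pressure, y, 1), pressure)
    quad = np.polyval(np.polyfit(pressure ** 2, y, 1), pressure ** 2)
    print(label, "linear R^2 =", round(r_squared(y, lin), 3),
          "quadratic R^2 =", round(r_squared(y, quad), 3))

Whichever model fits better hints at the dominant release mechanism at that frequency, which is the logic the researchers used to distinguish mechanical from thermal activation.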

These findings suggest that low-frequency ultrasound could open the path to using stable, high-boiling point cores such as PFOB as release actuators. The mechanical nature of drug release at low frequencies could also be of benefit for clinical translation, as it does not induce any heating of the target tissue.

“We developed a method to produce stable nanocarriers repeatably, and identified ultrasound parameters that can activate them,” says first author Matthew Wilson in a press statement.

Safe and sound

Kubanek and colleagues evaluated the safety of the high-boiling point PFOB-based nanodroplets in the brain of a macaque monkey. They delivered six doses at one-week intervals and analysed blood samples for markers of toxicity to the liver, kidney or spleen, as well as assessing immune system activation.

The blood biomarkers showed that the nanodroplets were well tolerated, with no detectable side-effects, an important factor for potential clinical translation. Only blood glucose showed a significant change following administration, which may have been caused by the animal’s diet.

The team also demonstrated that PFOB nanodroplets loaded with the anaesthetic ketamine or an immunosuppressant drug exhibited a similar sensitivity to ultrasound as propofol-loaded nanodroplets. “We are now testing the release of other psychoactive drugs, including ketamine, in non-human primates,” says Kubanek. “If we could release these locally, we would avoid the commonly observed side effects and thus open these emerging treatments of mental disorders to many more people.”

The researchers have now begun investigating the delivery of anti-cancer drugs into the brains of mice with glioblastoma. Kubanek notes that the nanocarriers could also be used to target drugs to tumours elsewhere in the body. “Through these developments in rodents and non-human primates, we are aiming to bring the method to humans as soon as possible,” he tells Physics World.

The post Low-frequency ultrasound triggers targeted drug delivery appeared first on Physics World.

]]>
Research update Targeted release of drugs from ultrasound-sensitive nanocarriers could minimize side-effects associated with conventional medication https://physicsworld.com/wp-content/uploads/2024/07/8-07-24-nanocarriers-schematic.jpg
Clams and algae collaborate to harvest sunlight very efficiently https://physicsworld.com/a/clams-and-algae-collaborate-to-harvest-sunlight-very-efficiently/ Sun, 07 Jul 2024 14:05:52 +0000 https://physicsworld.com/?p=115453 Symbiotic relationship is explored by physicists

The post Clams and algae collaborate to harvest sunlight very efficiently appeared first on Physics World.

]]>
I think it’s safe to say that photosynthesis is one of the most important biochemical processes at work here on Earth – and possibly elsewhere in the universe. I know that it is chemistry, but physicists have long had a fascination with the process. They include Yale University’s Amanda Holt and Alison Sweeney, who have teamed up with the marine scientist Lincoln Rehm of Drexel University to study how a symbiotic relationship between giant clams and algae makes extremely efficient use of sunlight.

The photosynthesizing algae reside on the mantle of the clam. This is a cloak-like structure between the two shells of the clam. The algae produce organic molecules that provide energy to the clam. In return, the clam uses its filter-feeding system to provide the algae with nitrogen and other nutrients.

What makes this relationship interesting is how efficient it is at converting sunlight into chemical energy. At the cellular level, photosynthesis is nearly 100% efficient in converting sunlight. But when it occurs in a large-scale system like a field of crops or an ecosystem, this efficiency drops to about 3%. In contrast, giant clam/algae systems are known to achieve efficiencies as high as 67%. Now, Holt, Sweeney and Rehm may have discovered why.

Vertical pillars

Within the mantle, the algae cells are arranged in vertical pillars that are about 0.1 mm in diameter. The pillars sit below a thin translucent membrane, which the trio had previously shown scatters sunlight into the horizontal plane – thereby illuminating the sides of the pillars.

New simulations and calculations done by the researchers suggest that this pillar design should achieve a conversion efficiency of about 43% in bright tropical sunlight. Eager to understand why their analysis came up short of the measured value of 67%, the trio then looked at how the clam changes the spacing between the pillars as light levels change throughout the day. By integrating these dynamics into their model, they were able to reproduce the 67% figure.

The scientists report their findings in Physical Review D. You can read an accompanying description of the research in Physics, in which Mark Buchanan also explains how the research could boost our understanding of other living systems that harvest sunlight – and could lead to better solar cells.

If this work has piqued your interest in the physics of photosynthesis, check out “Is photosynthesis quantum-ish?”, by the science writer Philip Ball. This feature article looks at the debate surrounding the role that quantum physics plays in photosynthesis.

The post Clams and algae collaborate to harvest sunlight very efficiently appeared first on Physics World.

]]>
Blog Symbiotic relationship is explored by physicists https://physicsworld.com/wp-content/uploads/2024/07/8-7-24-giant-clam.jpg
Can you solve this pistachio packing problem? https://physicsworld.com/a/can-you-solve-this-pistachio-packing-problem/ Sat, 06 Jul 2024 09:00:03 +0000 https://physicsworld.com/?p=115434 Given a full bowl of pistachios, what size container do you need for the leftover shells?

The post Can you solve this pistachio packing problem? appeared first on Physics World.

]]>
It sounds like a question you might get in an exam: given a full bowl of N pistachios, what size container do you need for the leftover 2N non-edible shells?

That tasty problem has now been examined by physicists Ruben Zakine and Michael Benzaquen from École Polytechnique in Palaiseau, France.

The issue of how the pistachio shells pack turns out to be a tough nut to crack, given that pistachios come in different shapes and sizes and their shells are not symmetric.

Pistachios are usually served in a bowl as a snack along with another bowl or container in which to place the discarded shells.

In experiments that involved placing 613 pistachios in a two-litre cylinder, they found, in a nutshell, that the container holding the shells needs to be just over half the volume of the original pistachio bowl if the shells are well packed, or about three-quarters if they are loosely packed.

Zakine and Benzaquen say that numerical simulations of pistachio-shell packing could be carried out to compare with the experimental findings.

They also say that the work extends beyond just nuts. “Our analysis can be relevant in other situations, for instance to determine the optimal container needed [for] mussel or oyster shells after a Pantagruelian seafood dinner,” they write.

The post Can you solve this pistachio packing problem? appeared first on Physics World.

]]>
Blog Given a full bowl of pistachios, what size container do you need for the leftover shells? https://physicsworld.com/wp-content/uploads/2024/07/pistachios-in-bowl-2227090641-Shutterstock_everydayplus.jpg newsletter
LEGO creates ‘space bricks’ made from meteorite dust https://physicsworld.com/a/lego-creates-space-bricks-made-from-meteorite-dust/ Fri, 05 Jul 2024 14:44:25 +0000 https://physicsworld.com/?p=115445 LEGO has teamed up with the European Space Agency to create the bricks using material from a 4.5 billion-year-old meteorite

The post LEGO creates ‘space bricks’ made from meteorite dust appeared first on Physics World.

]]>
LEGO has teamed up with the European Space Agency (ESA) to create several “space bricks” made from a 4.5 billion-year-old meteorite.

The particular meteorite was discovered in north-west Africa in 2000 and is a “brecciated stone” that contains large metal grains, chondrules and other stone meteorite elements.

By mixing meteorite dust with polylactide and a bit of “regolith simulant”, the team was able to 3D-print bricks that look and behave just like LEGO bricks.

The idea behind the initiative is to test how material on the surface of the Moon, known as the lunar regolith, could be used as a future building material.

“No-one has ever built a structure on the Moon, so we have to work out not only how we build them but what we build them out of, as we can’t take any materials with us,” notes ESA science officer Aidan Cowley, adding that while the bricks are a “little rougher” than usual, the results are “amazing”.

Fifteen bricks will go on display at the LEGO House in Billund, Denmark, as well as at selected LEGO stores including Leicester Square in the UK and 5th Avenue in New York.

The meteorite bricks apparently click and snap together just like normal LEGO bricks but they unfortunately only come in one colour – space grey.

The post LEGO creates ‘space bricks’ made from meteorite dust appeared first on Physics World.

]]>
Blog LEGO has teamed up with the European Space Agency to create the bricks using material from a 4.5 billion-year-old meteorite https://physicsworld.com/wp-content/uploads/2024/07/07-07-24-LEGO-space-brick.jpg
Charmonium’s onion-like structure is revealed by new calculations https://physicsworld.com/a/charmoniums-onion-like-structure-is-revealed-by-new-calculations/ Fri, 05 Jul 2024 12:20:14 +0000 https://physicsworld.com/?p=115436 Prediction could be tested by upcoming Electron-Ion Collider

The post Charmonium’s onion-like structure is revealed by new calculations appeared first on Physics World.

]]>
Calculations by physicists in China and the US suggest that the charmonium meson has a distinct onion-like structure. The team used a simplified version of quantum chromodynamics (QCD) and extensive computer simulations to show that the subatomic particle comprises nested layers, each containing different types of constituent particles.

“Charmonium is made [primarily] of a pair of charm quark and antiquark, and is one of the simplest hadrons,” explains Xingbo Zhao of the Institute of Modern Physics and University of Chinese Academy of Sciences, who is one of the team members. “Its constituent quarks are very massive compared to the [up and down] quarks that make up the proton and the neutron,” explains Zhao.

“It is sometimes dubbed as the hydrogen atom within quantum chromodynamics,” he adds. This is because, like the hydrogen atom, it is a fundamental, relatively simple system that provides deep insights into the underlying forces and interactions that define its properties.

Strong interaction

The strong interaction binds quarks into composite objects called hadrons (which include mesons like charmonium) through the exchange of gluons. This is similar to how electromagnetic interactions between charged particles are mediated by photons, the quanta of the electromagnetic field.

However, QCD – the theory of strong interactions – is significantly more complex than quantum electrodynamics, the theory describing the electromagnetic interaction. This complexity makes accurate calculations of the properties of hadrons extremely challenging. It has also led researchers to use simplified versions of QCD, whose validity must be confirmed by comparing their predictions with experimental data.

In their recent study, Zhao and colleagues examined charmonium using a simplified theory, where the number of constituent particle states is artificially limited and not all the intricate aspects of quark interactions are considered.

This approach, known as basis light-front quantization, allowed the team to numerically compute the distribution of mass and pressure inside charmonium. In the mathematical machinery of QCD, these distributions are encoded in gravitational form factors.

Energy and force distributions

“Gravitational form factors describe how a hadron, such as the proton, interacts with a graviton, the quantum of gravity,” said Zhao. “They are interesting because they encode the energy and force distributions inside the hadron.”

The knowledge of these form factors allowed the researchers to deduce the structure of charmonium, showing where the stable charm and anti-charm quarks (known as valence quarks) are located, as well as the regions occupied by the virtual quarks constantly born and disappearing into the vacuum (called wee partons). Moreover, the study reveals that the structure of charmonium also includes entire virtual hadrons, such as pions, kaons and eta mesons, as well as glueballs consisting only of gluons.

“We found that charmonium is a layered system like an onion!” explained Zhao. “At the core are the valence quarks. Then, the valence core is surrounded by the so-called wee partons. The outermost layers of charmonium are glueballs and meson clouds. This layered structure is consistent with the picture established in high-energy collision experiments.”

Confirmation and implications

These findings represent a significant advancement in our understanding of strong interactions, confirming that approximate methods of QCD can accurately describe experimental data.

The researchers believe that the mass and pressure distributions inside charmonium that they derived can be measured with high precision in future high-energy collision experiments, such as those planned at the upcoming Electron-Ion Collider in the US. This would further validate the basis light-front quantization method.

The research is described in a paper in Physical Review D.

“I find this paper very interesting for two reasons,” explains Volker Burkert, of the Thomas Jefferson National Accelerator Facility in Virginia, who was not involved in this study. “First, it studied a particular type of hadron, with one charm quark and one anti-charm quark, which are much heavier than the quarks in a proton. Second, I find it of particular interest that the pressure structure of this particle is predicted to have several zero crossings. This may indicate that there are nearly disconnected pressure regions. It would be extremely interesting if this could be experimentally verified.”

Prospects and challenges

Zhao and colleagues hope to further refine their approach to study the structure of charmonium in even greater detail. For example, their method currently does not allow them to differentiate the contributions of glueballs and mesons to the outermost shell of charmonium.

Additionally, the constant improvement of computer technology, crucial for such calculations, is expected to enhance the accuracy of their analyses and enable the application of their method to study other hadrons.

“In the future, we aim to extend our method to study the proton, neutron, and even nuclei,” concluded Zhao. “For complex systems like the proton, gluons play a crucial role. We need a better treatment of these interactions and, more importantly, a sophisticated computational scheme to handle the large scale of calculations required. Basis light-front quantization is designed to achieve this on near-term supercomputers or, ideally, on future quantum computers.”

The post Charmonium’s onion-like structure is revealed by new calculations appeared first on Physics World.

]]>
Research update Prediction could be tested by upcoming Electron-Ion Collider https://physicsworld.com/wp-content/uploads/2024/07/5-6-24-atoms-web-491519159-iStock_agsandrew.jpg newsletter1
Novel ‘glassy gel’ materials are strong yet stretchable https://physicsworld.com/a/novel-glassy-gel-materials-are-strong-yet-stretchable/ Thu, 04 Jul 2024 14:45:23 +0000 https://physicsworld.com/?p=115419 Glassy gels that can stretch up to five times their original length could find use in areas ranging from batteries to adhesives

The post Novel ‘glassy gel’ materials are strong yet stretchable appeared first on Physics World.

]]>
A new class of materials known as “glassy gels” could find use in areas ranging from batteries to adhesives, thanks to their unique set of physical properties.

Meixiang Wang, a post-doctoral fellow from Michael Dickey’s group at North Carolina State University, discovered these new materials while trying out different mixtures for making gels that she hoped would be useful ionic conductors.

Standard gels, such as those used to make contact lenses, are polymers with an added liquid solvent. The liquid weakens the interactions between the chains of molecules forming the polymer, allowing the gel to extend easily but leaving it soft and weak mechanically. In contrast, glassy polymers, like those suitable for airplane windows, contain no liquid and have strong interactions between their constituent polymer chains. This renders them stiff and strong but, in some cases, brittle.

Glassy gels, made by adding liquid solvent to glassy polymers, combine these properties: offering the high stiffness and high strength of glassy polymers alongside high extensibility – they can be stretched to over five times their original length without breaking.

“I thought it was eye-popping when Meixiang told me that these were the toughest gels ever reported by an order of magnitude, and had mechanical properties similar to plexiglass – even though plexiglass has no liquid, whereas these glassy gels are around 60% liquid,” Dickey tells Physics World.

Further tests by the research group, in collaboration with Wen Qian at the University of Nebraska–Lincoln, revealed that the glassy gels also show efficient electrical conduction (Wang’s original aim), good adhesive properties, shape memory characteristics and the ability to self-heal after being cut.

This unusual set of properties, detailed in Nature, is due to the solvent being an ionic liquid (salts in the liquid state). The ionic liquid solvent makes the glassy gels highly stretchable by pushing their polymer chains further apart. But simultaneously, its ions are strongly attracted to charged or polar molecules in the polymer, thereby keeping the polymer chains in place and making the material hard. The solvent ions also conduct electricity, resulting in better conduction than found in common plastics with similar stress–strain characteristics.

While the details of the ion–polymer bonding mechanism are not yet clear, early results indicate that it involves electrostatic forces acting over a reasonably large distance. This, Dickey believes, is what makes the materials stiff despite containing so much liquid.

Another plus point for these new materials is their one-step manufacture. The mixture of ionic liquid solvent and liquid precursor of glassy polymers is simply poured into a mould before being cured for five minutes at room temperature with UV light to harden it ready for use.

“In contrast, almost all thermoplastics are made in chemical plants then shipped as resin to factories for melt processing,” says Dickey. He adds that glassy gels could also be 3D printed, and that products made from them would be easier to recycle than those manufactured from multiple plastics, which contain different constituents in order to get the required functionality.

In the future, Dickey plans to investigate why glassy gels are so sticky, alongside “tweaking the properties to optimize for particular applications” by changing the ratio of solvent to polymer and using differing types of both constituents. Optimized glassy gels could prove useful as mechanically robust, adhesive and conductive “separators” for keeping the two electrodes in a battery apart, for example, as adhesives, gaskets or seals, or even as heat-driven soft robotic grippers, since the material softens if sufficiently heated.

First, however, Dickey admits that a greater understanding of the gels’ characteristics – including UV stability and degradation over time – is required. But he is encouraged by enquiries from prospective users and optimistic about the potential for this chance discovery by Wang which, as he puts it, “was a bit of serendipity enabled by a researcher who was willing to follow her curiosity”.

The post Novel ‘glassy gel’ materials are strong yet stretchable appeared first on Physics World.

]]>
Research update Glassy gels that can stretch up to five times their original length could find use in areas ranging from batteries to adhesives https://physicsworld.com/wp-content/uploads/2024/07/4-07-24-Glassy-Gel.jpg
New diffractive camera hides images from view https://physicsworld.com/a/new-diffractive-camera-hides-images-from-view/ Thu, 04 Jul 2024 09:55:57 +0000 https://physicsworld.com/?p=115399 Technology based on diffractive optical process offers alternative to traditional encryption

The post New diffractive camera hides images from view appeared first on Physics World.

]]>
A schematic of the experiment

Information security is an important part of our digital world, and various techniques have been deployed to keep data safe during transmission. But while these traditional methods are efficient, the mere existence of an encrypted message can alert malicious third parties to the presence of information worth stealing. Researchers at the University of California, Los Angeles (UCLA), US, have now developed an alternative based on steganography, which aims to hide information by concealing it within ordinary-looking “dummy” patterns. The new method employs an all-optical diffractive camera paired with an electronic decoder network that the intended receiver can use to retrieve the original image.

“Cryptography and steganography have long been used to protect sensitive data, but they have limitations, especially in terms of data embedding capacity and vulnerability to compression and noise,” explains Aydogan Ozcan, a UCLA electrical and computer engineer who led the research. “Our optical encoder-electronic decoder system overcomes these issues, providing a faster, more energy-efficient and scalable solution for information concealment.”

A seemingly mundane and misleading pattern

The image-hiding process starts with a diffractive optical process that takes place in a structure composed of multiple layers of 3D-printed features. Light passing through these layers is manipulated to transform the input image into a seemingly mundane and misleading pattern. “The optical transformation happens passively,” says Ozcan, “leveraging light-matter interactions. This means it requires no additional power once physically fabricated and assembled.”

The result is an encoded image that appears ordinary to human observers, but contains hidden information, he tells Physics World.

The encoded image is then processed by an electronic decoder, which uses a convolutional neural network (CNN) that has been trained to decode the concealed data and reconstruct the original image. This optical-to-digital co-design ensures that only someone with the appropriate digital decoder can retrieve the hidden information, making it a secure and efficient method of protecting visual data.
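For readers curious what the electronic half of such a system might look like, below is a minimal, purely illustrative PyTorch sketch of a convolutional decoder that maps an encoded (dummy-pattern) image back to a reconstruction. It is not the UCLA team’s network: the layer sizes, image resolution and lack of training code are assumptions made for the sake of a short, runnable example.

import torch
import torch.nn as nn

# A toy convolutional decoder: takes a 1-channel encoded image and outputs a
# 1-channel reconstruction of the same size. The architecture is an assumption
# for illustration, not the published UCLA model.
class ToyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, encoded_image):
        return self.net(encoded_image)

decoder = ToyDecoder()
encoded = torch.rand(1, 1, 28, 28)   # stand-in for an optically encoded image
reconstruction = decoder(encoded)     # would be trained to recover the original
print(reconstruction.shape)           # torch.Size([1, 1, 28, 28])

In practice such a decoder would be trained jointly with the diffractive encoder on pairs of original and encoded images, so that only this particular network can undo the optical scrambling.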

A secure and efficient method for visual data protection

The researchers tested their technique using arbitrarily chosen hand-written digits as the input image. The diffractive processor successfully transformed these into a uniform-looking digit 8. The CNN was then able to reconstruct the original handwritten digits using information “hidden” in the 8.

All was not plain sailing, however, explains Ozcan. For one, the UCLA researchers had to ensure that the digital decoder could accurately reconstruct the original images despite the transformations applied by the diffractive optical processor. They also had to show that the device worked under different lighting conditions.

“Fabricating precise diffractive layers was no easy task either and meant developing the necessary 3D printing techniques to create highly precise structures that can perform the required optical transformations,” Ozcan says.

The technique, which is detailed in Science Advances, could have several applications. Being able to transmit sensitive information securely without drawing attention could be useful for espionage or defence, Ozcan suggests. The security of the technique and its suitability for image transmission might also improve patient privacy by making it easier to safely transmit medical images that only authorized personnel can access. A third application would be to use the technique to improve the robustness and security of data transmitted over optical networks, including free-space optical communications. A final application lies in consumer electronics. “Our device could potentially be integrated into smartphones and cameras to protect users’ visual data from unauthorized access,” Ozcan says.

The researchers demonstrated that their system works for terahertz frequencies of light. They now aim to expand its capabilities so that it can work with different wavelengths of light, including visible and infrared, which would broaden the scope of its applications. “Another area [for improvement] is in miniaturization to further reduce the size of the diffractive optical elements to make the technology more compact and scalable for commercial applications,” Ozcan says.

The post New diffractive camera hides images from view appeared first on Physics World.

]]>
Research update Technology based on diffractive optical process offers alternative to traditional encryption https://physicsworld.com/wp-content/uploads/2024/07/04-07-2024-Diffractive-camera-featured-crop.jpg newsletter1
Sliding ferroelectrics offer fast, fatigue-free switching https://physicsworld.com/a/sliding-ferroelectrics-offer-fast-fatigue-free-switching/ Wed, 03 Jul 2024 14:04:46 +0000 https://physicsworld.com/?p=115392 Two new studies show that unconventional ferroelectric materials can endure many switching cycles without losing their ferroelectric properties

The post Sliding ferroelectrics offer fast, fatigue-free switching appeared first on Physics World.

]]>
Three years ago, researchers from institutions in the US and Israel discovered a new type of ferroelectricity in a material called boron nitride (BN). The team called this new mechanism “slidetronics” because the change in the material’s electrical properties occurs when adjacent atomically-thin layers of the material slide across each other.

Two independent teams have now made further contributions to the slidetronics field. In the first, members of the original US-Israel group fabricated ferroelectric devices from BN that can operate at room temperature and function at gigahertz frequencies. Crucially, they found that the material can endure many “on-off” switching cycles without losing its ferroelectric properties – an important property for a future non-volatile computer memory. Meanwhile, a second team based in China found that a different sliding ferroelectric material, bilayer molybdenum disulphide (MoS2), is also robust against this type of fatigue.

The term “ferroelectricity” refers to a material’s ability to sustain a spontaneous electric polarization whose direction can be switched by an applied electric field. It was discovered over 100 years ago in certain naturally-occurring crystals and is now exploited in a range of technologies, including digital information storage, sensing, optoelectronics and neuromorphic computing.

Being able to switch a material’s electrical polarization over small areas, or domains, is a key part of modern computational technologies that store and retrieve large volumes of information. Indeed, the dimensions of individually polarizable domains (that is, regions with a fixed polarization) within the silicon-based devices commonly used for information storage have fallen sharply in recent years, from roughly 100 nm to mere atoms across. The problem is that as the number of polarization switching cycles increases, an effect known as fatigue occurs in these conventional ferroelectric materials. This fatigue degrades the performance of devices and can even cause them to fail, limiting the technology’s applications.

Alternatives to silicon

To overcome this problem, researchers have been studying the possibility of replacing silicon with two-dimensional materials such as hexagonal boron nitride (h-BN) and transition metal dichalcogenides (TMDs). These materials are made up of stacked layers held together by weak van der Waals interactions, and they can be as little as one atom thick, yet they remain crystalline, with a well-defined lattice and symmetry.

In one of the new works, researchers led by Kenji Yasuda of the School of Applied and Engineering Physics at Cornell University made a ferroelectric field-effect transistor (FeFET) based on sliding ferroelectricity in BN. They did this by sandwiching a monolayer of graphene between top and bottom layers of bulk BN, which behaves like a dielectric rather than a ferroelectric. They then inserted a parallel layer of stacked bilayer BN – the sliding ferroelectric – into this structure.

Yasuda and colleagues measured the endurance of ferroelectric switching in their device by repeatedly applying 100-nanosecond-long 3 V pulses for up to 10⁴ switching cycles. They then applied another square-shaped pulse with the same duration and a frequency of up to 10⁷ Hz and measured the graphene’s resistance to show that the device’s ferroelectricity performance did not degrade. They found that the devices remained robust after more than 10¹¹ switching cycles.

Immobile charged defects

Meanwhile, a team led by Fucai Liu of the University of Electronic Science and Technology of China, in collaboration with colleagues at Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences, Fudan University and Xi Chang University, demonstrated a second fatigue-free ferroelectric system. Their device was based on sliding ferroelectricity in bilayer 3R-MoS2 and was made by sandwiching this material between two BN layers using a process known as chemical vapour transport. When the researchers applied pulsed voltages of durations between 1 ms and 100 ms to the device, they measured a switching speed of 53 ns. They also found that it retains its ferroelectric properties even after 10⁶ switching cycles of different pulse durations.

Based on theoretical calculations, Liu and colleagues showed that the material’s fatigue-free properties stem from immobile charged defects known as sulphur vacancies. In conventional ferroelectrics, these defects can migrate along the direction of the applied electrical field.

Reporting their work in Science, they argue that “it is reasonable to assume that fatigue-free is an intrinsic property of sliding ferroelectricity” and that the effect is an “innovative” solution to the problem of performance degradation in conventional ferroelectrics.

For their part, Yasuda and colleagues, whose work also appears in Science, are now exploring ways of synthesizing their material on a larger, wafer scale for practical applications. “Although we have shown that our device is promising for applications, we have only demonstrated the performance of a single device until now,” Yasuda tells Physics World. “In our current method, it takes many days of work to make just a single device. It is thus of critical importance to develop a scalable synthesis method.”

The post Sliding ferroelectrics offer fast, fatigue-free switching appeared first on Physics World.

]]>
Research update Two new studies show that unconventional ferroelectric materials can endure many switching cycles without losing their ferroelectric properties https://physicsworld.com/wp-content/uploads/2024/07/Low-Res_1.jpg newsletter1
ITER fusion reactor hit by massive decade-long delay and €5bn price hike https://physicsworld.com/a/iter-fusion-reactor-hit-by-massive-decade-long-delay-and-e5bn-price-hike/ Wed, 03 Jul 2024 10:36:29 +0000 https://physicsworld.com/?p=115383 Full operation with deuterium and tritium is not expected until 2039

The post ITER fusion reactor hit by massive decade-long delay and €5bn price hike appeared first on Physics World.

]]>
The ITER fusion reactor currently being built in France will not achieve first operation until 2034 – almost a decade later than previously planned and some 50 years after the project was first conceived in 1985. The decision by ITER management to take another 10 years constructing the machine means that the first experiments using “burning” fusion fuel – a mixture of deuterium and tritium (D–T) – will now have to wait until 2039.  The new “baseline” was agreed as a “working reference” by ITER’s governing council and will be further examined before a meeting in November.

ITER is an experimental fusion reactor that is currently being built in Cadarache, France, about 70 km north-west of Marseille. Expected to cost tens of billions of euros, it is a collaboration between China, Europe, India, Japan, Korea, Russia and the US. Its main aim is to generate about 500 MW of fusion power over 400 seconds using 50 MW of plasma heating – a power gain of 10. The reactor would also test “steady-state” operation at a power gain of five.
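Spelled out, that headline figure is simply the ratio of fusion power out to externally supplied heating power (my arithmetic, restating the numbers above):

Q = fusion power / heating power = 500 MW / 50 MW = 10

with the separate “steady-state” goal corresponding to a gain of Q = 5.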

Yet since its conception in the 1980s (see timeline below), ITER has been beset with cost hikes and delays. In 2016, a baseline was presented in which the first deuterium plasma would be delayed until 2025.

This first plasma, however, would have been a brief machine test before further assembly, such as adding a divertor heat-exhaust system and further shielding. “The first plasma [in 2025] was rather symbolic,” claims ITER director-general Pietro Barabaschi, who took up the position in October 2022 following the death of former ITER director general Bernard Bigot.

ITER would only have reached full plasma current in 2032 with the first D–T reaction waiting until 2035 after the installation of additional components.

A new ‘baseline’

Barabaschi notes that since 2020 it had been “clear” that the 2025 “first plasma” date was no longer achievable. This was for several reasons, one being the COVID-19 pandemic, which led to supply-chain and quality-control delays.

Manufacturing issues also emerged such as the discovery of cracks in the water pipes that cool the thermal shields. In early 2022 the French Nuclear Safety Authority briefly halted assembly due to concerns over radiological shielding.

Officials then began working on a more realistic timeline for construction to allow for more testing of certain components such as the huge D-shaped toroidal-field coils that will be used to confine the plasma.

The plan now is to start operation in 2034 with a deuterium-only plasma but with more systems in place as compared to the previous “first plasma” baseline of 2025. Research on the tokamak would then be carried out for just over two years before the machine reaches full plasma current operation in 2036. The reactor would then shut down for further assembly to prepare for D-T operation, which is now expected to begin in 2039.

Speaking today at a press conference, Barabaschi notes that the delay will cost an extra €5bn. “We are still addressing the issue of cost with the ITER council,” adds Barabaschi, who did not want to be drawn on how much ITER will now cost overall due to the “complexity” of the way it is funded via “in-kind” contributions.

Sibylle Günter, scientific director of the Max Planck Institute for Plasma Physics in Garching, Germany, says that despite the news being “no cause for celebration”, ITER is still relevant and necessary. “We are not aware of any project that will analyse the challenges as comprehensively as ITER in the foreseeable future,” she adds. “ITER has also already achieved ground-breaking engineering work up to this point, which will be important for all the fusion projects now underway and those still to come.”

In the meantime, some changes have been made to ITER’s design. The material used for the “first wall” that directly faces the plasma will change from beryllium to tungsten. Barabaschi points out that tungsten is more relevant for a potential fusion demonstration plant, known as DEMO.

Officials were also celebrating the news this week that the 19 toroidal-field coils have been completed and delivered to the ITER site. Each coil – made of niobium-tin and niobium-titanium – is 17 m tall and 9 m across, and weighs about 360 tonnes. They will generate a magnetic field of 12 T and store 41 GJ of energy.

Timeline - the way to ITER

1985 US president Ronald Reagan and Soviet Union leader Mikhail Gorbachev, at their first summit meeting in Geneva, resolve to develop fusion energy “for the benefit of all mankind”.

1987 Work on the conceptual design begins, with the EU and Japan joining the US and Russia on the project. Conceptual design completed two years later.

1992 Work on the engineering design begins with teams at San Diego, Garching and Naka. Completed in 1997.

1998 US withdraws due to €10bn price tag.

2001 Revised design completed, resulting in the cost of the project being halved to €5bn.

2003 US re-joins ITER with China and South Korea also signing up. Partners meet but fail to agree on a site leading to an 18-month stalemate.

2005 The EU and Japan agree on ITER’s home being Cadarache in southern France.

2006 India joins ITER. The ITER Organization is formally established by treaty and civil engineering begins.

2010 Detailed design finalized. Cost estimate rises to around €15bn, with building construction starting.

2011 Construction delays push back the date of first plasma from 2016 to 2019, revised to 2020 a year later.

2014 An independent report warns that the project is in “a malaise” and recommends a management overhaul. Manufactured components of the reactor begin to arrive for assembly.

2016 ITER Council agrees new “baseline” plan with first plasma set for 2025 and deuterium–tritium fuel only being used from 2035 onwards.

2020 Assembly of ITER begins while the COVID-19 pandemic hits the project’s supply chain and quality control.

2024 New “baseline” announced for start of operation in 2034.

The post ITER fusion reactor hit by massive decade-long delay and €5bn price hike appeared first on Physics World.

]]>
News Full operation with deuterium and tritium is not expected until 2039 https://physicsworld.com/wp-content/uploads/2024/07/03-07-24-ITER.jpg newsletter
Physics cookbook is fun but fails to gel https://physicsworld.com/a/physics-cookbook-is-fun-but-fails-to-gel/ Wed, 03 Jul 2024 10:00:38 +0000 https://physicsworld.com/?p=115234 Megan Povey reviews Physics in the Kitchen by George Vekinis

The post Physics cookbook is fun but fails to gel appeared first on Physics World.

]]>
There’s a lot of physics in a cup of tea. Compounds in the tea leaves start to diffuse as soon as you pour over hot water and – if you look closely enough – you’ll see turbulence as your milk mixes in. This humble beverage also displays conservation of momentum in the parabolic vortex that forms when tea is stirred.

Tea is just one of the many topics covered in Physics in the Kitchen by George Vekinis, director of research at the National Research Centre “Demokritos” (NCSRD) in Greece. In writing this book, Vekinis – who is a materials physicist by training – joins a long tradition of scientists writing about cooking.

The book is full of insights into the physics and chemistry underlying creative processes in a kitchen, from making sauces and cooking vegetables to the use of acid and the science behind common equipment. One of the book’s strengths is that, while it has a logical structure, it is possible to dip in and out without reading everything.

Talking of dips, I particularly enjoyed the section on sauces. My experience in this area is confined to roux that are thickened with flour, and I was surprised to discover that Vekinis considers this to be a “bit of a cheat”.

Instead, a smooth sauce can be made just by heating an acid such as wine or lemon with egg and broth. This ancient method, which the author describes in a long section on “Creamy emulsion or curdled mess?”, involves the extraction of small molecules and requires extra care to prevent curdling or splitting.

However, as a food physicist myself, I did have some issues with the science in this and later sections.

For example, Vekinis uses the word “gel” far too loosely. Sometimes he’s talking about the gels created when dissolved proteins form a solid-like network despite it mostly being liquid – such as the brown gel that appears below a roast ham or chicken that has cooled. However, he also uses the same word to describe what you get when starch granules swell and thicken when making a roux sauce, which is a very different process.

Moreover, Vekinis describes both kinds of gel as forming through “polymerization”, which is inaccurate. Polymerization is what happens when small molecular building blocks bond chemically together to form spindly, long-chain molecules. If these molecules link up, they can then form a branched gel, such as silicone, which has some structural similarities to a protein gel. However, the bonding process is very different, and I found this comparison with polymer science unhelpful.

Meanwhile, in the section “Wine, vinegar, and lemon”, we are told that to prepare a smooth sauce you have to boil “an acidic agent as a catalyst for a polymerization reaction” and that “dry wine does the job too”. Though the word is sometimes used colloquially, what is described here is not, in the scientific sense, a catalytic reaction.

Towards the end of the book, Vekinis moves beyond food and looks at the physics behind microwaves, fridges and other kitchen appliances. He describes, for example, how microwaves make polar molecules such as water oscillate, producing heating in a way that is completely distinct from a conventional oven.

It’s well known that a microwave oven doesn’t heat food uniformly and the book describes how standing waves in the oven produce hot and cold spots. However, I feel more could have been said about the effect of the shape and size of food on how it heats. There has been interesting work, for example, investigating the different heating patterns in square- and round-edged foods.

Overall, I found the book an enjoyable read even if Vekinis sometimes over-simplifies complicated subjects in his attempts to make tricky topics accessible. I shared the book with some teacher friends of mine, who all liked it too, saying they’d use it in their food-science lessons. They appreciated the way the book progresses from the simple (such as heat and energy) to the complex (such as advanced thermodynamic concepts).

Physics in the Kitchen is not meant to be a cookbook, but I do wonder if Vekinis – who describes himself as a keen cook as well as a scientist – could have made himself clearer by including a few recipes to illustrate the processes he describes. Knowing how to put them into practice will not only help us to make wonderful meals – but also enhance our enjoyment of them too.

  • 2023 Springer £17.99hb 208pp

The post Physics cookbook is fun but fails to gel appeared first on Physics World.

]]>
Opinion and reviews Megan Povey reviews Physics in the Kitchen by George Vekinis https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Povey-cup-of-tea-1441569995-iStock_Wirestock.jpg newsletter
Revised calibration curve improves radiocarbon dating of ancient Kyrenia shipwreck https://physicsworld.com/a/revised-calibration-curve-improves-radiocarbon-dating-of-ancient-kyrenia-shipwreck/ Wed, 03 Jul 2024 08:00:11 +0000 https://physicsworld.com/?p=115376 Researchers have placed tighter constraints on the age of the Kyrenia Ship that sank off the coast of Cyprus in the 3rd century BCE

The post Revised calibration curve improves radiocarbon dating of ancient Kyrenia shipwreck appeared first on Physics World.

]]>
The Kyrenia Ship is an ancient merchant vessel that sank off the coast of Cyprus in the 3rd century BCE. Through fresh analysis, a team led by Sturt Manning at Cornell University has placed tighter constraints on the age of the shipwreck. The researchers achieved this through a combination of techniques that improve the accuracy of radiocarbon dating, and reversing wood treatments that make dating impossible.

In the late 1960s, a diving expedition off the coast of Kyrenia, Northern Cyprus, uncovered the wreck of an ancient Greek merchant ship. With over half of its hull timbers still in good condition, the wreck was remarkably well preserved, and carried an archaeological treasure trove of valuable coins and artefacts.

“Ancient shipwrecks like these are amazing time capsules, since their burial in deeper water creates a near oxygen-free environment,” Manning explains. “This means that we get a remarkable preservation of materials like organics and metals, which usually do not preserve well in archaeological contexts.”

Following the discovery, the Kyrenia ship was carefully excavated and brought to the surface, where its timbers were treated to prevent further decay. In accordance with preservation techniques at the time, this involved impregnating the wood with polyethylene glycol (PEG) – but as archaeologists attempted to determine the age of the wreck through radiocarbon dating, this approach soon created problems.

To perform radiocarbon dating, researchers need to measure the amount of carbon-14 (14C) that a sample contains. This radioactive isotope is created naturally in the atmosphere and absorbed into wood through photosynthesis, but after the tree is cut down the 14C it contains gradually decays away, while the stable isotopes 12C and 13C remain. This means that researchers can accurately estimate the age of a sample by measuring the proportion of 14C it contains compared with 12C and 13C.
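
As a rough illustration of the underlying arithmetic (not a calculation from the study), the "raw" radiocarbon age of a sample follows from its measured 14C fraction via the conventional Libby mean life of 8033 years; the result still has to be calibrated against a calibration curve, as described below.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional value used for quoting radiocarbon ages

def radiocarbon_age(fraction_modern: float) -> float:
    """Uncalibrated radiocarbon age (years BP) from the measured 14C level,
    expressed as a fraction of the modern reference level."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Example: a sample retaining about 75% of the reference 14C level
print(round(radiocarbon_age(0.75)))  # roughly 2300 years BP, before calibration
```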

However, when samples from the Kyrenia ship were treated with PEG, the wood became contaminated with far older, petroleum-derived carbon. “Initially, it was not possible to get useful radiocarbon dates on the PEG-conserved wood,” Manning explains.

Kyrenia Ship hull after reassembly of recovered timbers

Recent archaeological studies indicate that the Kyrenia ship had likely sunk between 294 and 290 BCE. But radiocarbon dating using the most up-to-date version of the radiocarbon “calibration curve” for this period – which accounts for how concentrations of 14C in the atmosphere vary over time – still didn’t align with the archaeological constraints.

“With the current internationally approved methods, radiocarbon dates on some of the non-PEG-treated materials, such as almonds in the cargo, gave results inconsistent with any of the archaeological assessments,” says Manning.

To address this disparity, the researchers employed a combination of approaches to improve on previous estimates of the Kyrenia ship’s true age. Part of their research involved analysing the most up-to-date calibration curve for the period when the ship sank, and comparing it with wood samples that had been dated using a different technique: analysing their distinctive patterns of tree rings.

Tree-ring patterns vary from year to year due to short-term variations in rainfall, but are broadly shared by all trees growing in the same region at a given time. Taking advantage of this, Manning’s team carried out radiocarbon dating on a number of samples that had already been dated from their tree ring patterns.

“We used known-age tree-rings from the western US and the Netherlands to redefine the atmospheric radiocarbon record in the northern hemisphere over the period between 400 and 250 BCE,” Manning explains. Atmospheric concentrations of 14C differ between Earth’s hemispheres, since the northern hemisphere contains far more vegetation overall.

In addition to revising the radiocarbon calibration curve, the team also investigated new techniques for cleaning PEG from contaminated samples. They tested the techniques on samples dating from around 60 CE, which had undergone radiocarbon dating before being treated with PEG. They showed that with the appropriate sample pretreatment, they could closely reproduce these known dates.

By combining these techniques, the researchers had all the tools that they needed to constrain the age of the Kyrenia ship. “With a technique called Bayesian chronological modelling, we combined all the tree-ring information from the ship timbers, the radiocarbon dates, and the ship’s archaeological time sequence – noting how the ship’s construction must predate its last cargo and sinking,” Manning describes.
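
The team's full model is considerably more sophisticated, but the core step of calibrating a single radiocarbon date can be sketched as follows: weight each candidate calendar year by how well the measured radiocarbon age (with its uncertainty) matches the calibration curve. The curve values and the measurement below are invented purely for illustration; real analyses use curves such as IntCal and dedicated software.

```python
import numpy as np

# Toy calibration curve: calendar year (BCE) -> expected radiocarbon age (years BP).
# These numbers are invented for illustration only.
cal_years = np.arange(400, 249, -1)              # 400 BCE down to 250 BCE
curve_c14 = 2250.0 + 0.8 * (cal_years - 250)     # hypothetical, roughly linear curve
curve_err = np.full(cal_years.shape, 15.0)       # assumed curve uncertainty

measured_age, measured_err = 2330.0, 25.0        # hypothetical measurement (years BP)

# Gaussian likelihood of the measurement at each candidate calendar year
sigma2 = measured_err**2 + curve_err**2
likelihood = np.exp(-0.5 * (measured_age - curve_c14)**2 / sigma2) / np.sqrt(sigma2)
posterior = likelihood / likelihood.sum()        # flat prior over calendar years

print(f"Most probable calendar year: {cal_years[np.argmax(posterior)]} BCE")
```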

“The date for the ship is most likely between 293 and 271 BCE: confirming other recent arguments that the original late 4th century BCE date for the ship needs a little revision,” he says.

By constraining this date, Manning’s team hopes that the work could enable researchers to better understand where the Kyrenia ship and its numerous artefacts fit within the wider chronology of ancient Greece. In turn, their discoveries could ultimately help archaeologists and historians to deepen their understanding of a fascinating era in history.

The researchers report their findings in PLOS ONE.

Research update Researchers have placed tighter constraints on the age of the Kyrenia Ship that sank off the coast of Cyprus in the 3rd century BCE https://physicsworld.com/wp-content/uploads/2024/07/Kyrenia-Ship-Hull.jpg
New titanium:sapphire laser is tiny, low-cost and tuneable https://physicsworld.com/a/new-titaniumsapphire-laser-is-tiny-low-cost-and-tuneable/ Tue, 02 Jul 2024 13:54:06 +0000 https://physicsworld.com/?p=115369 Energy efficient and integrated device is pumped by an LED

A compact, integrated titanium:sapphire laser that needs only a simple green LED as a pump source has been created by researchers at Stanford University in the US. Their design reduces the cost and footprint of a titanium:sapphire laser by three orders of magnitude and the power consumption by two. The team believes its device represents a key step towards the democratization of a laser type that plays important roles in scientific research and industry.

Since its invention by Peter Moulton at the Massachusetts Institute of Technology in 1982, the titanium:sapphire laser has become an important research and engineering tool. This is thanks to its ability to handle high powers and emit either spectrally pure continuous wave signals or broadband, short pulses. Indeed, the laser was used to produce the first frequency combs, which play important roles in optical metrology.

Unlike numerous other types of lasers such as semiconductor lasers, titanium:sapphire lasers have proved extremely difficult to miniaturize because traditional designs require very high input power to achieve lasing. “Titanium:sapphire has the ability to output very high powers, but because of the way the laser level structure works – specifically the fluorescence has a very short lifetime – you have to pump very hard in order to see appreciable amounts of gain,” says Stanford’s Joshua Yang. Traditional titanium:sapphire lasers have to be pumped with high-powered lasers – and therefore cost in excess of $100,000.

Logic, sensing, and quantum computing

If titanium:sapphire lasers could be miniaturized and integrated into chips, potential applications would include optical logic, sensing and quantum computing. Last year, Yubo Wang and colleagues at Yale University unveiled a chip-integrated titanium:sapphire laser that utilized an indium gallium nitride pump diode coupled to a titanium:sapphire gain medium through its evanescent field. The evanescent component of the electromagnetic field does not propagate but decays exponentially with distance from the source. By reducing loss, this integrated setup reduced the lasing threshold by more than an order of magnitude. However, Jelena Vučković – the leader of the Stanford group – says that “the threshold was still relatively high because the overlap with the gain medium was not maximized”.

In the new research, Vučković’s group fabricated their laser devices by creating monocrystalline titanium:sapphire optical resonators about 40 micron across and less than 1 micron thick on a layer of sapphire using a silicon dioxide interface. The titanium:sapphire was then polished to within 0.1 micron smoothness using reactive ion etching. The resonators achieved almost perfect overlap of the pump and lasing modes, which led to much less loss and a lasing threshold 22 times lower than in any titanium:sapphire laser used previously. “All the fabrication processes are things that can be done in most traditional clean rooms and are adaptable to foundries,” says Yang – who is first author of a paper in Nature that describes the new laser.

The researchers achieved lasing with a $37 green laser diode as the pump. However, subsequent experiments described in the paper used a tabletop green laser because the team is still working to couple the cheaper laser into the system effectively.

Optimization challenge

“Being able to complete the whole picture of diode to on-chip laser to systems applications is really just an optimization challenge, and of course one we’re really excited to work on,” says Yang. “But even with the low optimization we start with, it’s still able to achieve lasing.”

The researchers went on to demonstrate two things that had never been achieved before. First, they incorporated the tunability so valued in titanium:sapphire lasers into their system by using an integrated heater to modify the refractive index of the resonator, allowing it to lase in different modes. They achieved single mode lasing in a range of over 50 nm, and believe that it should be possible, with optimization, to extend this to several hundred nanometres.

They also performed a cavity quantum electrodynamics experiment with colour centres in silicon carbide using their light source: “That’s why [titanium:sapphire] lasers are so popular in quantum optics labs like ours,” says Vučković. “If people want to work with different colour centres or quantum dots, they don’t have a specific wavelength at which they work.” The use of silicon carbide is especially significant, she says, because it is becoming popular in the high-power electronics used in systems like electric cars.

Finally, they produced a titanium:sapphire laser amplifier, something that the team says has not been reported before. They injected 120 pJ pulses from a commercial titanium:sapphire laser and amplified them to 2.3 nJ over a distance of 8 mm down the waveguide. The distortion introduced by the amplifier was the lowest allowed by the laws of wave motion – something that had not been possible for any integrated amplifier at any wavelength.
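
As a quick back-of-the-envelope check (not a figure from the paper), the overall gain implied by boosting 120 pJ pulses to 2.3 nJ over 8 mm of waveguide works out to roughly 13 dB, or about 16 dB/cm:

```python
import math

e_in, e_out, length_cm = 120e-12, 2.3e-9, 0.8   # input energy (J), output energy (J), length (cm)

gain_db = 10 * math.log10(e_out / e_in)
print(f"Total gain: {gain_db:.1f} dB")                            # ~12.8 dB
print(f"Gain per unit length: {gain_db / length_cm:.1f} dB/cm")   # ~16 dB/cm
```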

Yubo Wang is impressed: “[Vučković and colleagues have] achieved several important milestones, including very low-threshold lasing, very high-power amplification and also tuneable laser integration, which are all very nice results,” he says. “At the end of the paper, they have a compelling demonstration of cavity-integrated artificial atoms using their titanium:sapphire laser.” He says he would be interested to see if the team could produce multiple devices simultaneously at wafer scale. He also believes it would be interesting to look at integration of other visible-wavelength lasers: “I’m expecting to see more results in the next few years,” he says.

Research update Energy efficient and integrated device is pumped by an LED https://physicsworld.com/wp-content/uploads/2024/07/2-7-24-TiSapphire-laser.jpg newsletter1
How to get the errors out of quantum computing https://physicsworld.com/a/how-to-get-the-errors-out-of-quantum-computing/ Tue, 02 Jul 2024 12:26:20 +0000 https://physicsworld.com/?p=115371 Last week’s Quantum 2.0 conference laid out possible complementary approaches to the field’s biggest challenge

All of today’s quantum computers are prone to errors. These errors may be due to imperfect hardware and control systems, or they may arise from the inherent fragility of the quantum bits, or qubits, used to perform quantum operations. But whatever their source, they are a real problem for anyone seeking to develop commercial applications for quantum computing. Although noisy, intermediate-scale quantum (NISQ) machines are valuable for scientific discovery, no-one has yet identified a commercial NISQ application that brings value beyond what is possible with classical hardware. Worse, there is no immediate theoretical argument that any such applications exist.

It might sound like a downbeat way of opening a scientific talk, but when Christopher Eichler made these comments at last week’s Quantum 2.0 conference in Rotterdam, the Netherlands, he was merely reflecting what has become accepted wisdom within the quantum computing community. According to this view, the only way forward is to develop fault-tolerant computers with built-in quantum error correction, using many flawed physical qubits to encode each perfect (or perfect-enough) logical qubit.
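
A highly simplified picture of that idea is the three-qubit repetition code, whose classical decoding step is sketched below: two parity checks (the "syndromes") locate a single bit-flip error without ever reading out the encoded value directly. This toy example ignores phase errors and everything else that makes genuine quantum error correction hard; it is meant only to show how syndromes point to errors.

```python
import random

def encode(bit):
    """Repetition-code encoding: one logical bit becomes three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Two parity checks; together they locate any single bit-flip error."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip whichever physical bit the syndrome singles out (if any)."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

physical = encode(1)
physical[random.randrange(3)] ^= 1   # inject one random bit-flip error
recovered = correct(physical)
print(recovered, "-> decoded:", max(set(recovered), key=recovered.count))
```

Scaling this idea up to real quantum errors – which include phase flips and must be detected without collapsing the encoded state – is what makes fault tolerance such a demanding engineering problem.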

That isn’t going to be easy, acknowledged Eichler, a physicist at FAU Erlangen, Germany. “We do have to face a number of engineering challenges,” he told the audience. In his view, the requirements of a practical, error-corrected quantum computer include:

  • High-fidelity gates that are fast enough to perform logical operations in a manageable amount of time
  • More and better physical qubits with which to build the error-corrected logical qubits
  • Fast mid-circuit measurements for “syndromes”, which are the set of eigenvalues that make it possible to infer (using classical decoding algorithms) which errors have happened in the middle of a computation, rather than waiting until the end.

The good news, Eichler continued, is that several of today’s qubit platforms are already well on their way to meeting these requirements. Trapped ions offer high-fidelity, fault-tolerant qubit operations. Devices that use arrays of neutral atoms as qubits are easy to scale up. And qubits based on superconducting circuits are good at fast, repeatable error correction.

The bad news is that none of these qubit platforms ticks all of those boxes at once. This means that no out-and-out leader has emerged, though Eichler, whose own research focuses on superconducting qubits, naturally thinks they have the most promise.

In the final section of his talk, Eichler suggested a few ways of improving superconducting qubits. One possibility would be to discard the current most common type of superconducting qubit, which is known as a transmon, in favour of other options. Fluxonium qubits, for example, offer better gate fidelities, with 2-qubit gate fidelities of up to 99.9% recently demonstrated. Another alternative superconducting qubit, known as a cat qubit, exhibits lifetimes of up to 10 seconds before it loses its quantum nature. However, in Eichler’s view, it’s not clear how either of these qubits might be scaled up to multi-qubit processors.

Another promising strategy (not unique to superconducting qubits) Eichler mentioned is to convert dominant types of errors into events that involve a qubit being erased instead of changing state. This type of error should be easier (though still not trivial) to detect. And many researchers are working to develop new error correction codes that operate in a more hardware-efficient way.

Ultimately, though, the jury is still out on how to overcome the problem of error-prone qubits. “Moving forward, one should very broadly study all these platforms,” Eichler concluded. “One can only learn from one another.”

Blog Last week’s Quantum 2.0 conference laid out possible complementary approaches to the field’s biggest challenge https://physicsworld.com/wp-content/uploads/2024/07/02-07-2024-Christopher-Eichler-at-Quantum-2.0-2.jpg newsletter
Oculomics: a window to the health of the body https://physicsworld.com/a/oculomics-a-window-to-the-health-of-the-body/ Tue, 02 Jul 2024 10:00:14 +0000 https://physicsworld.com/?p=115256 Alistair Bounds explains the eye-screening technologies that can help us detect and monitor chronic diseases

More than 13 million eye tests are carried out in the UK each year, making it one of the most common medical examinations in the country. But what if eye tests could tell us about more than just the health of the eye? What if these tests could help us spot some of humanity’s greatest healthcare challenges, including diabetes, Alzheimer’s or heart disease?

It’s said that the eye is the “window to the soul”. Just as our eyes tell us lots about the world around us, so they can tell us lots about ourselves. Researchers working in what’s known as “oculomics” are seeking ways to look at the health of the body via the eye. In particular, they’re exploring the link between certain ocular biomarkers (changes or abnormalities in the eye) and systemic health and disease. Simply put, the aim is to unlock the valuable health data that the eye holds on the body (Ophthalmol. Ther. 13 1427).

Oculomics is particularly relevant when it comes to chronic conditions, such as dementia, diabetes and cardiovascular disease. They make up most of the “burden of disease” (a factor that is calculated by looking at the sum of the mortality and morbidity of a population) and account for around 80% of deaths in industrialized nations. We can reduce how many people die or get ill from such diseases through screening programmes. Unfortunately, most diseases don’t get screened for and – even when they do – there’s limited or incomplete uptake.

Cervical-cancer screening, for example, is estimated to have saved the lives of one in 65 of all British-born women since 1950 (Lancet 364 249), but nearly a third of eligible women in the UK do not attend regular cervical screening appointments. This highlights the need for new and improved screening methods that are as non-intimidating, accessible and patient-friendly as a trip to a local high-street optometrist.

Seeing the light: the physics and biology of the eye

In a biological sense, the eye is fantastically complex. It can adapt from reading this article directly in front of you to looking at stars that are light-years away. The human eye is a dynamic living tissue that can operate across six orders of magnitude in brightness, from the brightest summer days to the darkest cloudy nights.

The eye has several key structures that enable this (figure 1). At the front, the cornea is the eye’s strongest optical component, refracting light as it enters the eye to form an image at the back of the eye. The iris allows the eye to adapt to different light levels, as it changes size to control how much light enters the eye. The crystalline lens provides depth-dynamic range, changing size and shape to focus on objects nearby or far away from the eye. The aqueous humour (a water-like fluid in front of the lens) and the vitreous humour (a gel-like liquid between the lens and the retina) give the eye its shape, and provide the crucial separation over which the refraction of light takes place. Finally, light reaches the retina, where the “pixels” of the eye – the photoreceptors – detect the light.
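
The cornea's dominance is easy to quantify with the textbook formula for the power of a single refracting surface, P = (n2 − n1)/R. The values below are typical textbook figures used for illustration, not measurements from the article:

```python
# Refractive power of a single spherical surface: P = (n2 - n1) / R.
# Typical textbook values for the front surface of the cornea (illustrative only).
N_AIR, N_CORNEA = 1.000, 1.376
RADIUS_M = 7.8e-3   # anterior corneal radius of curvature, about 7.8 mm

power_dioptres = (N_CORNEA - N_AIR) / RADIUS_M
print(f"Front-surface power: {power_dioptres:.0f} D")  # ~48 D, most of the eye's ~60 D total
```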

1 Look within

Diagram of the eye with labels including iris, cornea and vitreous humour

The anatomy of the human eye, highlighting the key structures including the iris, cornea, the lens and the retina.

The tissues and the fluids in the eye have optical characteristics that stem from their biological properties, making optical methods ideally suited to study the eye. It’s vital, for example, that the aqueous humour is transparent – if it were opaque, our vision would be obscured by our own eyes. The aqueous humour also needs to fulfil other biological properties, such as providing nutrition to the cornea and lens.

To do all these things, our bodies produce the aqueous humour as an ultrafiltered blood plasma. This plasma contains water, amino acids, electrolytes and more, but crucially no red blood cells or opaque materials. The molecules in the aqueous humour reflect the molecules in the blood, meaning that measurements on the aqueous humour can reveal insights into blood composition. This link between optical and biological properties is true for every part of the eye, with each structure potentially revealing insights into our health.

Chronic disease insights and AI

Currently, almost all measurements we take of the eye are to discern the eye’s health only. So how can these measurements tell us about chronic diseases that affect other parts of the body? The answer lies in both the incredible properties of the eye, and data from the sheer number of eye examinations that have taken place.

Chronic diseases can affect many different parts of the body, and the eye is no exception (figure 2). For example, cardiovascular disease can change artery and vein sizes. This is also true in the retina and choroid (a thin layer of tissue that lies between the retina and the white of the eye) – in patients with high blood pressure, veins can become dilated, offering optometrists and ophthalmologists insight into this aspect of a patient’s health.

For example, British optometrist and dispensing optician Jason Higginbotham points out that throughout his career “Many eye examinations have yielded information about the general health of patients – and not just their vision and eye health. For example, in some patients, the way the arteries cross over veins can ‘squash’ or press on the veins, leading to a sign called ‘arterio-venous nipping’. This is a possible indicator of hypertension and hardening of the arteries.”

Higginbotham, who is also the managing editor of Myopia Focus, adds that “Occasionally, one may spot signs of blood-vessel leakage and swelling of the retinal layers, which is indicative of active diabetes. For me, a more subtle sign was finding the optic nerves of one patient appearing very pale, almost white, with them also complaining of a lack of energy, becoming ‘clumsier’ in their words and finding their vision changing, especially when in a hot bath. This turned out to be due to multiple sclerosis.”

2 Interconnected features

Diagram of the eye with labels explaining detectable changes that occur

Imaging the eye may reveal ocular biomarkers of systemic disease, thanks to key links between the optical and biological properties of the eye. With the emergence of oculomics, it may be possible – through a standard eye test – to detect cardiovascular diseases; cancer; neurodegenerative diseases such as Alzheimer’s, dementia and Parkinson’s disease; and even metabolic diseases such as diabetes.

However, precisely because there are so many things that can affect the eye, it can be difficult to attribute changes to a specific disease. If there is something abnormal in the retina, could this be an indicator of cardiovascular disease, or could it be diabetes? Perhaps it is a by-product of smoking – how can an optometrist tell?

This is where the sheer number of measurements becomes important. The NHS has been performing eye tests for more than 60 years, giving rise to databases containing millions of images, complete with patient records about long-term health outcomes. These datasets have been fed into artificial intelligence (AI) deep-learning models to identify signatures of disease, particularly cardiovascular disease (British Journal of Ophthalmology 103 67; J. Clin. Med. 10.3390/jcm12010152). Models can now predict cardiovascular risk factors with accuracy that is comparable to the current state-of-the-art. Also, new image-analysis methods are under constant development, allowing further signatures of cardiovascular disease, diabetes and even dementia to be spotted in the eye.
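
None of the published models is reproduced here, but the general recipe – fine-tuning a standard convolutional network on labelled retinal images – can be sketched along the following lines. The architecture, the two-class label set and the training details are illustrative assumptions rather than those used in the cited studies:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical two-class task: "elevated cardiovascular risk" vs "not elevated".
model = models.resnet18(weights=None)           # pretrained weights could be used instead
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch of 8 fundus "images" (random tensors) with dummy labels
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimiser.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimiser.step()
print("batch loss:", float(loss))
```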

But bias is a big issue when it comes to AI-driven oculomics. When algorithms are developed using existing databases, groups or communities with historically worse healthcare provision will be under-represented in these databases. Consequently, the algorithms may perform worse for them, which risks embedding past and present inequalities into future methods. We have to be careful not to let such biases propagate through the healthcare system – for example, by drawing on multiple databases from different countries to reduce sensitivities to country-specific bias.

Although AI oculomics methods have not yet moved beyond clinical research, it is only a matter of time. Ophthalmology companies such as Carl Zeiss Meditec (Ophthalmology Retina 7 1042) and data companies such as Google are developing AI methods to spot diabetic retinopathy and other ophthalmic diseases. Regulators are also engaging more and more with AI, with the FDA having reviewed at least 600 medical devices that incorporate AI or machine learning across medical disciplines, including nine in the ophthalmology space, by October 2023.

Eye on the prize

So how far can oculomics go? What other diseases could be detected by analysing hundreds of thousands of images? And, more importantly, what can be detected with only one image or measurement of the eye?

Ultimately, the answer lies in matching the measurement technique to the disease: if we want to detect more diseases, we need more measurement techniques.

At Occuity, a UK-based medical technology company, we are developing solutions to some of humanity’s greatest health challenges through optical diagnostic technologies. Our aim is to develop pain-free, non-contact screening and monitoring of chronic health conditions, such as glaucoma, myopia, diabetes and Alzheimer’s disease (Front Aging Neurosci.13 720167). We believe that the best way that we can improve health is by developing instruments that can spot specific signatures of disease. This would allow doctors to start treatments earlier, give researchers a better understanding of the earliest stages of disease, and ultimately, help people live healthier, happier lives.

Currently, we are developing a range of instruments that target different diseases by scanning a beam of light through the different parts of the eye and measuring the light that comes back. Our first instruments measure properties such as the thickness of the cornea (needed for accurate glaucoma diagnosis); and the length of the eyeball, which is key to screening and monitoring the epidemic of myopia, which is expected to affect half of the world’s population by 2050. As we advance these technologies, we open up opportunities for new measurements to advance scientific research and clinical diagnostics.

Looking into the past

The ocular lens provides a remarkable record of our molecular history because, unlike many other ocular tissues, the cells within the lens do not get replaced as people age. This is particularly important for a family of molecules dubbed “advanced glycation end-products”, or AGEs. These molecules are waste products that build up when glucose levels are too high. While present in everybody, they occur in much higher concentrations in people with diabetes and pre-diabetes – people who have higher blood-glucose levels and are at high risk of developing diabetes, but largely without symptoms. Measurements of a person’s lens AGE concentration may therefore indicate their diabetic state.

Fortunately, these AGEs have a very important optical property – they fluoresce. Fluorescence is a process where an atom or molecule absorbs light at one colour and then re-emits light at another colour – it’s why rubies glow under ultraviolet light. The lens is the perfect place to look for these AGEs, as it is very easy to shine light into the lens. Luckily, a lot of this fluorescence makes it back out of the lens, where it can be measured (figure 3).

3 AGEs and fluorescence

Graph with x axis labelled fluorescence and y axis labelled age. The data are spread out but roughly follow a line that is gently rising from left to right

Fluorescence, a measure of advanced glycation end-products (AGE) concentration, rises as people get older. However, it increases faster in diabetes as higher blood-glucose levels accelerate the formation of AGEs, potentially making lens fluorescence a powerful tool for detecting diabetes and pre-diabetes. This chart shows rising fluorescence as a function of both age and diabetic status, taken as part of an internal Occuity trial on 21 people using a prototype instrument; people with diabetes are shown by orange points and people without diabetes are shown by blue points. Error bars are the standard deviation of three measurements. These measurements are non-invasive, non-contact and take just seconds to perform.

Occuity has developed optical technologies that measure fluorescence from the lens as a potential diabetes and pre-diabetes screening tool, building on our optometry instruments. Although they are still in the early stages of development, the first results taken earlier this year are promising, with fluorescence clearly increasing with age, and strong preliminary evidence that the two people with diabetes in the dataset have higher lens fluorescence than those without diabetes. If these results are replicated in larger studies, this will show that lens-fluorescence measurement techniques are a way of screening for diabetes and pre-diabetes rapidly and non-invasively, in easily accessible locations such as high-street optometrists and pharmacists.
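
One way such a measurement could feed into screening is to compare a reading against the expected age trend and flag values that sit well above it. The trend and threshold below are assumed numbers chosen purely to illustrate the logic, not Occuity's calibration:

```python
# Illustrative screening rule (assumed numbers): flag lens-fluorescence readings
# that sit well above a linear age trend.
TREND_INTERCEPT = 0.50   # assumed fluorescence at age zero (arbitrary units)
TREND_SLOPE = 0.010      # assumed rise in fluorescence per year of age
THRESHOLD = 0.10         # assumed excess above the trend that triggers a flag

def flag_reading(age_years: float, fluorescence: float) -> bool:
    """Return True if the reading is unusually high for the person's age."""
    expected = TREND_INTERCEPT + TREND_SLOPE * age_years
    return (fluorescence - expected) > THRESHOLD

print(flag_reading(55, 1.02))   # close to the age trend -> False
print(flag_reading(55, 1.20))   # well above the age trend -> True
```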

Such a tool would be revolutionary. Almost five million people in the UK have diabetes, including over a million with undiagnosed type 2 diabetes whose condition goes completely unmonitored. There are also over 13 million people with pre-diabetes. If they can be warned before they move from pre-diabetes to diabetes, early-stage intervention could reverse this pre-diabetic state, preventing progression to full diabetes and drastically reducing the massive impact (and cost) of the illness.

Living in the present

Typical diabetes management is invasive and unpleasant, as it requires finger pricks or implants to continuously monitor blood glucose levels. This can result in infections, as well as reduce the effectiveness of diabetes management, leading to further complications. Better, non-invasive glucose-measurement techniques could transform how patients can manage this life-long disease.

As the aqueous humour is an ultra-filtered blood plasma, its glucose concentration mimics that of the glucose concentration in blood. This glucose also has an effect on the optical properties of the eye, increasing the refractive index that gives the eye its focusing power (figure 4).

4 Measuring blood glucose level

Graph with x axis labelled refractive index and y axis labelled glucose concentration. The data points show a gradually rising line from left to right

The relationship between blood glucose and optical measurements on the eye has been probed theoretically and experimentally at Occuity. Their goal is to create a non-invasive, non-contact measure of blood glucose concentration for diabetics. Occuity has shown that changes in glucose concentration comparable to those observed in blood have a measurable effect on refractive index in cuvettes and is moving towards equivalent measurements in the anterior chamber.

As it happens, the same techniques that we at Occuity use to measure lens and eyeball thickness can be used to measure the refractive index of the aqueous humour, which correlates with glucose concentration. Preliminary cuvette-based tests are close to being precise enough to measure glucose concentrations to the accuracy needed for diabetes management – non-invasively, without even touching the eye. This technique could transform the management of blood-glucose levels for people with diabetes, replacing the need for repetitive and painful finger pricks and implants with a simple scan of the eye.
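
The conversion Occuity uses is not given in the article, but the principle of turning a refractive-index reading into a glucose estimate can be illustrated with a simple linear model. The baseline index and the sensitivity coefficient below are assumed values for the sketch, not measured figures:

```python
# Hypothetical linear model: the refractive index of the aqueous humour rises with glucose.
# Both the baseline index and the sensitivity are assumed values for illustration only.
N_BASELINE = 1.3350        # assumed index at the reference glucose level
SENSITIVITY = 1.5e-5       # assumed index change per mmol/L of glucose
GLUCOSE_BASELINE = 5.0     # mmol/L, assumed reference concentration

def glucose_from_index(n_measured: float) -> float:
    """Invert the assumed linear relation to estimate glucose (mmol/L)."""
    return GLUCOSE_BASELINE + (n_measured - N_BASELINE) / SENSITIVITY

print(round(glucose_from_index(1.33509), 1))  # ~11 mmol/L for this made-up reading
```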

Eye on the future

As Occuity’s instruments become widely available, the data that they generate will grow, and with AI-powered real-time data analysis, their predictive power and the range of diseases that can be detected will expand too. By making these data open-source and available to researchers, we can continuously expand the breadth of oculomics.

Oculomics has massive potential to transform disease-screening and diagnosis through a combination of AI and advanced instruments. However, there are still substantial challenges to overcome, including regulatory hurdles, issues with bias in AI, adoption into current healthcare pathways, and the cost of developing new medical instruments.

Despite these hurdles, the rewards of oculomics are too great to pass up. Opportunities such as diabetes screening and management, cardiovascular risk profiling and early detection of dementia offer massive health, social and economic benefits. Additionally, the ease with which ocular screening can take place removes major barriers to the uptake of screening.

With more than 35,000 eye exams being carried out in the UK almost every day, each one offers opportunities to catch and reverse pre-diabetes, to spot cardiovascular risk factors and propose lifestyle changes, or to identify and potentially slow the onset of neurodegenerative conditions. As oculomics grows, the window to health is getting brighter.

Feature Alistair Bounds explains the eye-screening technologies that can help us detect and monitor chronic diseases https://physicsworld.com/wp-content/uploads/2024/06/2024-07-Bounds-PM1-Screen-View-1.jpg newsletter
Satellites burning up in the atmosphere may deplete Earth’s ozone layer https://physicsworld.com/a/satellites-burning-up-in-the-atmosphere-may-deplete-earths-ozone-layer/ Mon, 01 Jul 2024 08:00:39 +0000 https://physicsworld.com/?p=115323 Pollution from decommissioned satellites re-entering the atmosphere poses a risk to the Earth’s protective ozone layer

The increasing deployment of extensive space-based infrastructure is predicted to triple the number of objects in low-Earth orbit over the next century. But at the end of their service life, decommissioned satellites burn up as they re-enter the atmosphere, triggering chemical reactions that deplete the Earth’s ozone layer.

Through new simulations, Joseph Wang and colleagues at the University of Southern California have shown how nanoparticles created by satellite pollution can catalyse chemical reactions between ozone and chlorine. If the problem isn’t addressed, they predict that the level of ozone depletion could grow significantly in the coming decades.

From weather forecasting to navigation, satellites are a vital element of many of the systems we’ve come to depend on. As demand for these services continues to grow, swarms of small satellites are being rolled out in mega-constellations such as Starlink. As a result, low-Earth orbit is becoming increasingly cluttered with manmade objects.

Once a satellite reaches the end of its operational lifetime, international guidelines suggest that it should re-enter the atmosphere within 25 years to minimize the risk of collisions with other satellites. Yet according to Wang’s team, re-entries from a growing number of satellites are a concerning source of pollution – and one that has rarely been considered so far.

As they burn up on re-entry, satellites can lose between 51% and 95% of their mass – and much of the vaporized material they leave behind will remain in the upper atmosphere for decades.

One particularly concerning component of this pollution is aluminium, which makes up close to a third of the mass of a typical satellite. When left in the upper atmosphere, aluminium will react with the surrounding oxygen, creating nanoparticles of aluminium oxide (Al2O3). Although this compound isn’t reactive itself, its nanoparticles have large surface areas and excellent thermal stability, making them extremely effective at catalysing reactions between ozone and chlorine.

For this ozone–chlorine reaction to occur, chlorine-containing compounds must first be converted into reactive species – which can’t happen without a catalyst. Typically, catalysts come in the form of tiny, solid particles found in stratospheric clouds, which provide surfaces for the chlorine activation reaction to occur. But with higher concentrations of Al2O3 nanoparticles in the upper atmosphere, the chlorine activation reaction can occur more readily – depleting the vital layer that protects Earth’s surface from damaging UV radiation.

Backwards progress

The ozone layer has gradually started to recover since the signing in 1987 of the Montreal Protocol – in which all UN member states agreed to phase out production of the substances primarily responsible for ozone depletion. With this new threat, however, Wang’s team predict that much of this progress could be reversed if the problem isn’t addressed soon.

In their study, reported in Geophysical Research Letters, the researchers assessed the potential impact of satellite-based pollution through molecular dynamics simulations, which allowed them to calculate the mass of ozone-depleting nanoparticles produced during satellite re-entry.

They discovered that a small 250 kg satellite can generate around 30 kg of Al2O3 nanoparticles. By extrapolating this figure, they estimated that in 2022 alone, around 17 metric tons of Al2O3 compounds were generated by satellites re-entering the atmosphere. They also found that the nanoparticles may take up to 30 years to drift down from the mesosphere into the stratospheric ozone layer, introducing a noticeable delay between satellite decommissioning and eventual ozone depletion in the stratosphere.

Extrapolating their findings further, Wang’s team then considered the potential impact of future mega-constellation projects currently being planned. Altogether, they estimate that some 360 metric tons of Al2O3 nanoparticles could enter the upper atmosphere each year if these plans come to fruition.
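
The scaling behind these projections is straightforward to reproduce approximately: multiply the simulated nanoparticle yield per kilogram of satellite by the mass expected to re-enter each year. The re-entry scenarios below are assumptions chosen only to show the arithmetic, not figures from the study:

```python
# Yield quoted above: ~30 kg of Al2O3 nanoparticles per 250 kg satellite
YIELD_PER_KG = 30.0 / 250.0   # kg of Al2O3 per kg of re-entering satellite

def annual_alumina_tonnes(satellites_per_year: int, mean_mass_kg: float) -> float:
    """Estimated Al2O3 nanoparticle production per year, in metric tons."""
    return satellites_per_year * mean_mass_kg * YIELD_PER_KG / 1000.0

# Assumed re-entry scenarios (illustrative only)
print(annual_alumina_tonnes(570, 250))     # ~17 t/yr, roughly the 2022-level figure
print(annual_alumina_tonnes(12000, 250))   # ~360 t/yr, roughly the mega-constellation figure
```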

Although these estimates are still highly uncertain, the researchers’ discoveries clearly highlight the severity of the threat that decommissioned satellites pose for the ozone layer. If their warning is taken seriously, they hope that new strategies and international guidelines could eventually be established to minimize the impact of these ozone-depleting nanoparticles, ensuring that the ozone layer can continue to recover in the coming decades.

Research update Pollution from decommissioned satellites re-entering the atmosphere poses a risk to the Earth’s protective ozone layer https://physicsworld.com/wp-content/uploads/2024/06/1-07-24-Starlink_Satellites_over_Carson_National_Forest-M-Lewinsky.jpg newsletter1
Ask me anything: Catherine Phipps – ‘Seeing an aircraft take off and knowing you contributed to the engine design is an amazing feeling’ https://physicsworld.com/a/ask-me-anything-catherine-phipps-seeing-an-aircraft-take-off-and-knowing-you-contributed-to-the-engine-design-is-an-amazing-feeling/ Fri, 28 Jun 2024 10:00:33 +0000 https://physicsworld.com/?p=115214 Catherine Phipps describes her life as an engineer at aircraft-engine manufacturer Rolls-Royce

Catherine Phipps

What skills do you use every day in your job?

I originally joined Rolls-Royce to use my physics skills in an engineering environment and see them applied in the real world. My plan was to work in the materials department, thinking that would align with my degree. But after completing the graduate training scheme, I chose to join the mechanical-integrity team working on demonstrator engines. A few years later, I moved to Berlin to focus on small engines for civil aerospace before returning to Derby in the UK, where I’m now a mechanical integrity engineer working on large civil engines.

A large part of my job involves understanding how materials behave in extreme conditions, such as high temperature or extreme stress. I might, for example, run simulations to see how long a new component will last or if it will corrode.

I’ll also design programmes to test how components behave when the engine runs in a particular way. The results of these tests are then fed back into the models to validate predictions and improve the simulations. Statistical analysis skills are vital too, as is the ability to make rapid judgements. Above all, I need to consider and understand any safety implications and consider what might happen if the component fails.

It’s a team role, working alongside people from numerous other disciplines such as aerodynamics, fluid mechanics and materials, and everyone brings their own skills. We need to make sure our designs are cost-effective, meet weight targets, and can be manufactured consistently and to the right standard. It’s immensely challenging work, which means I need to collaborate, communicate and – where acceptable – compromise.

What do you like least and best about your job?

Best has to be the people. It’s inspiring and motivating to work day in, day out in an international environment with talented, innovative and dedicated colleagues from varied backgrounds and with different life experiences. Sharing knowledge and coaching younger members of the team is also rewarding. Plus, seeing an aircraft take off and knowing you contributed to the engine design is an amazing feeling.

I did have a seven-year career break to have children, after which I was shocked at how much my colleagues had progressed. I felt in awe and inadequate. It was challenging to return, but everyone assured me the laws of physics hadn’t changed and I soon got back up to speed. The hardest time for me, though, was working from home during COVID-19. Meetings continued online, but I missed the chance conversations with colleagues where we’d run ideas past each other and I’d learn useful information. I felt siloed and it was hard to share knowledge. The line between work and home was blurred and it was always tempting to leave the laptop on and “just finish something” after dinner.

What do you know today you wish you knew when you were starting your career?

First, don’t think you always have to know the answer and don’t be afraid to ask questions. You won’t look stupid and you’ll learn from the responses. When you start working, it’s easy to think you should know everything, but I’m still learning and questioning all these years later. New ideas and perspectives are always valuable, so stay curious and keep wondering “Why?” and “What if?”. You may unlock something new. Second, just because you start on one route, don’t think you can’t do something different. Your career will probably span several decades so when new opportunities arise, don’t be afraid to take them.

Interview Catherine Phipps describes her life as an engineer at aircraft-engine manufacturer Rolls-Royce https://physicsworld.com/wp-content/uploads/2024/06/2024-06-AMA-Catherine-Phipps-listing.jpg newsletter
Mitigating tokamak plasma disruption bags Plasma Physics and Controlled Fusion Outstanding Paper Prize https://physicsworld.com/a/mitigating-tokamak-plasma-disruption-bags-plasma-physics-and-controlled-fusion-outstanding-paper-prize/ Fri, 28 Jun 2024 09:00:08 +0000 https://physicsworld.com/?p=115270 Vinodh Bandaru and colleagues win for their research on simulating “runaway electron beam termination”

Vinodh Bandaru from the Indian Institute of Technology in Guwahati, India, and colleagues have been awarded the 2024 Plasma Physics and Controlled Fusion (PPCF) Outstanding Paper Prize for their research on “relativistic runaway electron beam termination” at the Joint European Torus (JET) fusion experiment in Oxfordshire.

The work examines the termination of relativistic electron beam events that occurred during experiments on JET, which was operated at the Culham Centre for Fusion Energy until earlier this year. A better understanding of such dynamics could help the successful mitigation of plasma disruptions, which lead to energy losses in the plasma. The work could also be useful for experiments that will take place on the ITER experimental fusion tokamak, which is currently under construction in Cadarache, France.

Awarded each year, the PPCF prize aims to highlight work of the highest quality and impact published in the journal. The award is judged on originality, scientific quality and impact, drawing on community nominations and publication metrics. The prize will be presented at the 50th European Physical Society Conference on Plasma Physics in Salamanca, Spain, on 8–12 July.

Jonathan Graves from the University of York, UK, who is PPCF editor-in-chief, calls the work “outstanding”. “[It] explores state of the art simulations with coupled runaway electron physics, presented together with convincing comparison against disrupting JET tokamak plasmas,” he says. “The development is critically important for the safe operation of future reactor devices.”

Below, Bandaru talks to Physics World about the prize, his research and what advice he has for early-career researchers.

What does winning the 2024 PPCF Outstanding Paper Prize mean to you?

The award means a lot to me, as a recognition of the hard work that went into the research. I would like to thank my co-authors for their valuable contributions and PPCF for considering the paper.

How important is it that researchers receive recognition for their work?

Receiving recognition is encouraging for researchers and can give an extra boost and motivation in their scientific pursuits. This is more so given the nature and dynamics of contemporary research work. This new initiative from PPCF is very welcome and commendable.

What advice would you give to early-career researchers looking to pursue a career in plasma physics?

Having worked in a few different fields over the years, I can say that plasma physics is one area that entails significant complexity due to the sheer range of length and timescales of the physical processes involved. This not only offers interesting and challenging problems, but also allows one to choose from a variety of problems over the course of one’s research career.

How so?

Fusion science has now reached an inflection point with enormous ongoing activity involving research labs, universities as well as start-ups all over the world. With several big and important projects underway such as ITER and the planned Spherical Tokamak for Energy Production in the UK, plasma researchers can not only make important, concrete and impactful contributions, but can also have a relatively visible long-term career path. I would say these are really exciting times to be in plasma physics.

Interview Vinodh Bandaru and colleagues win for their research on simulating “runaway electron beam termination” https://physicsworld.com/wp-content/uploads/2024/06/vinodh_bandaru.jpeg newsletter
Shrinivas Kulkarni: 2024 Shaw Prize in Astronomy winner talks about his fascination with variable and transient objects https://physicsworld.com/a/shrinivas-kulkarni-2024-shaw-prize-in-astronomy-winner-talks-about-his-fascination-with-variable-and-transient-objects/ Thu, 27 Jun 2024 13:55:32 +0000 https://physicsworld.com/?p=115131 This podcast is sponsored by The Shaw Prize Foundation

This episode features an in-depth  conversation with Shrinivas Kulkarni, who won the 2024 Shaw Prize in Astronomy “for his ground-breaking discoveries about millisecond pulsars, gamma-ray bursts, supernovae, and other variable or transient astronomical objects”. Based at Caltech in the US, he is also cited for his “leadership of the Palomar Transient Factory and its successor, the Zwicky Transient Facility, which have revolutionized our understanding of the time-variable optical sky”.

Kulkarni talks about his fascination with astronomical objects that change over time and he reveals the principles that have guided his varied and successful career. He also offers advice to students and early-career researchers about how to thrive in astronomy.

This podcast also features an interview with Scott Tremaine, who is chair of the selection committee for the 2024 Shaw Prize in Astronomy. Based at the Institute for Advanced Study in Princeton, New Jersey, he talks about Kulkarni’s many contributions to astronomy, including his work to make astronomical data more accessible to researchers not affiliated with major telescopes.

This podcast is sponsored by The Shaw Prize Foundation

Podcasts This podcast is sponsored by The Shaw Prize Foundation https://physicsworld.com/wp-content/uploads/2024/06/Shri-Kulkarni-list.jpg newsletter
Bringing the second quantum revolution to the rest of the world https://physicsworld.com/a/bringing-the-second-quantum-revolution-to-the-rest-of-the-world/ Thu, 27 Jun 2024 13:02:53 +0000 https://physicsworld.com/?p=115311 A panel discussion about quantum technologies in low and middle-income countries offers perspectives on how to overcome barriers to participation

Quantum technologies have enormous potential, but achieving that potential is not going to be cheap. The US, China and the EU have already invested more than $50 billion between them in quantum computing, quantum communications, quantum sensing and other areas that make up the so-called “second quantum revolution”. Other high-income countries, notably Australia, Canada and the UK, have also made significant investments. But what about the rest of the world? How can people in other countries participate in (and benefit from) this quantum revolution?

In a panel discussion at Optica’s Quantum 2.0 conference, which took place this week in Rotterdam in the Netherlands, five scientists from low- and middle-income countries took turns addressing this question. The first, Tatevik Chalyan, drew sympathetic nods from her fellow panellists and moderator Imrana Ashraf when she described herself as “part of the generation forced to leave Armenia to get an education”. Since then, she said, the Armenian government has become more supportive, building on a strong tradition of research in quantum theory. Chalyan, however, is an experimentalist, and she and many of her former classmates are still living abroad – in her case, as a postdoctoral researcher in silicon photonics at the Vrije Universiteit Brussel, Belgium.

Another panellist, Vatshal Srivastav, followed a similar path, studying at the Indian Institute of Technology (IIT) in Kanpur before moving to the UK’s Heriot-Watt University to do his PhD and postdoc on higher-dimensional quantum circuits. He, too, thinks things are improving back home, with the quality of research in the IIT network becoming high enough that many of his friends chose to remain there. Countries that want to improve their research base, he said, should find ways to “keep good people within your system”.

For panellist Taofiq Paraiso, who says he was “brought up in several African countries” before moving to EPFL in Switzerland for his master’s and PhD, the starting point is simple. “It’s about transferring skills and knowledge,” said Paraiso, who now leads a team developing chip-based hardware for quantum cryptography at Toshiba Europe’s Cambridge Research Laboratory in the UK. People who return to their home countries after being educated abroad have an important role to play in that, he added.

Returning is not always easy, though. The remaining two panellists, Roger Alfredo Kögler and Rodrigo Benevides, are both from Brazil, and Kögler, who did his PhD in Brazil’s Instituto Nacional de Ciência e Tecnologia de Informação Quântica, said that Brazilians who want to become professors in their home country are strongly urged to go abroad for their postdoctoral research. But now that he has seen the resources available to him as a postdoc in nano-optics at the Humboldt University of Berlin, Germany, Kögler admitted that he is “rethinking whether I want to go back” even though he worries that staying in Europe would make him “part of the problem”.


Benevides, whose PhD was split between Brazil’s University of Campinas and the Netherlands’ TU Delft, elaborated on the reasons for this dilemma. In Brazil, he and his colleagues “used to see all these papers in Nature or Science” while they were “in the lab just trying to make our laser work”. That kind of atmosphere, he said, “leads to a lack of self-confidence” because people begin to suspect that they, and not the system, are the problem. Now, as a postdoc working on hybrid quantum systems at ETH Zurich in Switzerland, Benevides wryly observed that “it’s much easier to freely have ideas if you have a lot of money”.

As for how to remedy these challenges, Benevides argued that the solutions will be diverse and tailored to local circumstances. As an example, Paraiso highlighted the work of an outreach organization, Photonics Ghana, that motivates students to engage with quantum science. He also suggested that cloud-based quantum computing and freely available software packages such as IBM’s Qiskit will help organizations that lack the resources to build a quantum computer of their own. Chalyan, for her part, pointed out that a lack of resources sometimes has a silver lining. Coming up with creative work-arounds, she said, “is what we are famous for [as] people from developing countries”.

Finally, several panellists emphasized the need to focus on quantum technologies that will make a difference locally. Though Kögler warned that it is hard to predict what will turn out to be “useful”, a few answers are already emerging. “Maybe we don’t need quantum error correction, but we do need a quantum sensor that brings better agriculture,” Benevides suggested. Paraiso noted that information security is important in African countries as well as European ones, and added that quantum key distribution is one of the more mature quantum technologies. Whatever the specifics, though, Srivastav recommended identifying the problems your society is facing and figuring out how they overlap with your current research. “As scientists, it is our job to make things better,” he concluded.

Blog A panel discussion about quantum technologies in low and middle-income countries offers perspectives on how to overcome barriers to participation https://physicsworld.com/wp-content/uploads/2024/06/27-06-2024-Panel-on-developing-world-quantum-tech.jpg
Shapeshifting organism uses ‘cellular origami’ to extend to 30 times its body length https://physicsworld.com/a/shapeshifting-organism-uses-cellular-origami-to-extend-to-30-times-its-body-length/ Thu, 27 Jun 2024 08:00:53 +0000 https://physicsworld.com/?p=115242 Researchers discover a new geometric mechanism previously unknown in biology

For the first time, two researchers in the US have observed the intricate folding and unfolding of “cellular origami”. Through detailed observations, Eliott Flaum and Manu Prakash at Stanford University discovered helical pleats in the membrane of a single-celled protist, which enable the organism to reversibly extend to over 30 times its own body length. The duo now hopes that the mechanism could inspire a new generation of advanced micro-robots.

A key principle in biology is that a species’ ability to survive is intrinsically linked with the physical structure of its body. One group of organisms in which this link is still poorly understood is the protists: single-celled organisms that have evolved to thrive in almost every ecological niche on the planet.

Although this extreme adaptability is known to stem from the staggering variety of shapes, sizes and structures found in protist cells, researchers are still uncertain as to how these structures have contributed to their evolutionary success.

In their study, reported in Science, Flaum and Prakash investigated a particularly striking feature found in a protist named Lacrymaria olor. Measuring 40 µm in length, this shapeshifting organism hunts its prey by launching a neck-like feeding apparatus up to 1200 µm in less than 30 s. Afterwards, the protrusion retracts just as quickly: an action that can be repeated over 20,000 times throughout the cell’s lifetime.
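To put those numbers in perspective, here is a minimal back-of-the-envelope sketch (in Python) using only the figures quoted above; the 30 s is treated as an upper bound on the extension time, so the speeds are lower limits.

```python
# Rough numbers for L. olor's feeding apparatus, taken from the figures above
body_length_um = 40      # resting cell length (µm)
extended_um    = 1200    # maximum extension of the feeding apparatus (µm)
time_s         = 30      # upper bound on the extension time (s)

extension_ratio    = extended_um / body_length_um        # 30x
mean_speed_um_s    = extended_um / time_s                 # at least 40 µm/s
body_lengths_per_s = mean_speed_um_s / body_length_um     # at least 1 body length/s

print(f"extension ratio: {extension_ratio:.0f}x")
print(f"mean extension speed: >= {mean_speed_um_s:.0f} µm/s "
      f"(>= {body_lengths_per_s:.0f} body length per second)")
```

In other words, the cell extends to 30 times its resting length at an average rate of at least one full body length every second.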

Through a combination of high-resolution fluorescence and electron microscopy techniques, the duo found that this extension occurs through the folding and unfolding of an intricate helical structure in L. olor’s cytoskeleton membrane. These folds occur along bands of microtubule filaments embedded in the membrane, which group together to form accordion-like pleats.

Altogether, Flaum and Prakash found 15 of these pleats in L. olor’s membrane, which wrap around the cell in elegant helical ribs. The structure closely resembles “curved crease origami”, a subset of traditional origami in which folds follow complex curved paths instead of straight ones.

“When you store pleats on the helical angle in this way, you can store an infinite amount of material,” says Flaum in a press statement. “Biology has figured this out.”

“It is incredibly complex behaviour,” adds Prakash. “This is the first example of cellular origami. We’re thinking of calling it lacrygami.”

Perfection in projection

A further striking feature of L. olor’s folding mechanism is that the transition between its folded and unfolded states can happen thousands of times without making a single error: a feat that would be incredibly difficult to reproduce in any manmade mechanism with a similar level of intricacy.

To explore the transition in more detail, Flaum and Prakash investigated points of concentrated stress within the cell’s cytoskeleton. Named “topological singularities”, the positions of these points are intrinsically linked to the membrane’s helical geometry.

The duo discovered that L. olor’s transition is controlled by two types of singularity. The first of these is called a d-cone: a point where the cell’s surface develops a sharp, conical point due to the membrane bending and folding without stretching. Crucially, a d-cone can travel across the membrane in a neat line, and then return to its original position along the exact same path as the membrane folds and unfolds.

The second type of topological singularity is called a twist singularity, and occurs in the membrane’s microtubule filaments through their rotational deformation. Just like the d-cone, this singularity will travel along the filaments, then return to its original position as the cell folds and unfolds.

As Prakash explains, both singularities are key to understanding how L. olor’s transition is so consistent. “L. olor is bound by its geometry to fold and unfold in this particular way,” he says. “It unfolds and folds at this singularity every time, acting as a controller. This is the first time a geometric controller of behaviour has been described in a living cell.”

The researchers hope that their remarkable discovery could provide new inspiration for our own technology. By replicating L. olor’s cellular origami, it may be possible to design micro-scale machines whose movements are encoded into patterns of pleats and folds in their artificial membranes. If achieved, such structures could be suitable for a diverse range of applications: from miniature surgical robots to deployable habitats in space.

The post Shapeshifting organism uses ‘cellular origami’ to extend to 30 times its body length appeared first on Physics World.

When the world went wild for uranium: tales from the history of a controversial element https://physicsworld.com/a/when-the-world-went-wild-for-uranium-tales-from-the-history-of-a-controversial-element/ Wed, 26 Jun 2024 10:00:34 +0000 https://physicsworld.com/?p=115057 Margaret Harris reviews Chain Reactions: a Hopeful History of Uranium by Lucy Jane Santos

A 1950s children's board game called Uranium Rush

The uranium craze that hit America in the 1950s was surely one of history’s strangest fads. Jars of make-up lined with uranium ore were sold as “Revigorette” and advertised as infusing “beautifying radioactivity [into] every face cream”. A cosmetics firm applied radioactive soil to volunteers’ skin and used Geiger counters to check whether its soap could wash it away. Most astonishing of all, a uranium mine in the US state of Montana developed a sideline as a health spa, inviting visitors to inhale “a constant supply of radon gas” for the then-substantial sum of $10.

The story of this craze, and much else besides, is entertainingly told in Lucy Jane Santos’ new book Chain Reactions: a Hopeful History of Uranium. Santos is an expert in the history of 20th-century leisure, health and beauty rather than physics, but she is nevertheless well-acquainted with radioactive materials. Her previous book, Half Lives, focused on radium, which had an equally jaw-dropping consumer heyday earlier in the 20th century.

The shift to uranium gives Santos licence to explore several new topics. For physicists, the most interesting of these is nuclear power. Before we get there, though, we must first pass through uranium’s story from prehistoric times up to the end of the Second World War. From the uranium-bearing silver mines of medieval Jáchymov, Czechia, to the uranium enrichment facilities founded in Oak Ridge, Tennessee, as part of the Manhattan Project, Santos tells this story in a breezy, anecdote-driven style. The fact that many of her chosen anecdotes also appear in other books on the histories of quantum mechanics, nuclear power or atomic weapons is hardly her fault. This is well-trodden territory for historians and publishers alike, and there are only so many quirky stories to go around.

The most novel factor that Santos brings to this crowded party is her regular references to people whose role in uranium’s history is often neglected. This includes not only female scientists like Lise Meitner (co-discoverer of nuclear fission) and Leona Woods (maker of the boron trifluoride counter used in the first nuclear-reactor experiment), but also the “Calutron Girls”, who put in 10-hour shifts six days a week at the Oak Ridge plant and were not allowed to know that they were enriching uranium for the first atomic bomb. Other “hidden figures” include the Allied prisoners who worked the Jáchymov mines for the Nazis; the political “undesirables” who replaced them after the Soviets took over; and the African labourers who, though legally free, experienced harsh conditions while mining uranium ore at Shinkolobwe (now in the Democratic Republic of the Congo) for the Belgians and, later, the Americans.

Most welcome of all, though, are the book’s references to the roles of Indigenous peoples. When Robert Oppenheimer’s Manhattan Project needed a facility for transmuting uranium into plutonium, Santos notes that members of the Wanapum Nation in eastern Washington state were given “a mere 90 days to pack up and abandon their homes…mostly with little compensation”. The 167 residents of Bikini island in the Pacific were even less fortunate, being “temporarily” relocated before the US Army tested an atomic bomb on their piece of paradise. Santos quotes the American comedian Bob Hope – nobody’s idea of a woke radical – in summing up the result of this callous act: “As soon as the war ended, we located the one spot on Earth that hadn’t been touched by war and blew it to hell.”

The most novel factor that Santos brings to this crowded party is her regular references to people whose role in uranium’s history is often neglected

These injustices, together with the radiation-linked illnesses experienced by the (chiefly Native American) residents of the Trinity and Nevada test sites, are not the focus of Chain Reactions. It could hardly be “a hopeful history” if they were. But while mentioning them is a low bar, it’s a low bar that the three-hour-long Oscar-winning biopic Oppenheimer didn’t manage to clear. If Santos can do it in a book not even 300 pages long, no-one else has any excuse.

Chain Reactions is not a science-focused book, and in places it feels a little thin. For example, while Santos correctly notes that the “gun” design of the first uranium bomb wouldn’t work for a plutonium weapon, she doesn’t say why. Later, she states that “making a nuclear reactor safe enough and small enough for use in a car proved impossible”, but she leaves out the scientific and engineering reasons for this. The book’s most eyebrow-raising scientific statement, though, is that “nuclear is one of the safest forms of electricity produced – only beaten by solar”. This claim is neither explained nor footnoted, and it left me wondering, first, what “safest” means in this context, and second what makes wind, geothermal and tidal electricity less “safe” than nuclear or solar?

Despite this, there is much to enjoy in Santos’ breezy and – yes – hopeful history. Although she is blunt when discussing the risks of nuclear energy, she also points out that when countries stop using it, they mostly replace nuclear power plants with fossil-fuel ones. This, she argues, is little short of disastrous. Quite apart from the climate impact, ash from coal-fired power plants carries radiation from uranium and thorium into the environment “at a much larger rate than any from a nuclear power plant”. Thus, while the 2011 meltdown of Japan’s Fukushima reactors killed no-one directly, Japan and Germany’s subsequent phase-out of nuclear power contributed to an estimated 28,000 deaths from air pollution. Might a revival of nuclear power be better? Santos certainly thinks so, and she concludes her book with a slogan that will have many physicists nodding along: “Nuclear power? Yes please.”

  • 2024 Icon Books 288pp £20hb

The post When the world went wild for uranium: tales from the history of a controversial element appeared first on Physics World.

Classical models of gravitational field show flaws close to the Earth https://physicsworld.com/a/classical-models-of-gravitational-field-show-flaws-close-to-the-earth/ Wed, 26 Jun 2024 08:00:57 +0000 https://physicsworld.com/?p=115237 New gravitational field model quantifies the "divergence problem" identified in 2022

If the Earth were a perfect sphere or ellipsoid, modelling its gravitational field would be easy. But it isn’t, so geoscientists instead use an approximate model based on a so-called Brillouin sphere. This is the smallest geocentric sphere that the entire planet fits inside, and it touches the Earth at a single point: the summit of Mount Chimborazo in Ecuador, near the crest of the planet’s equatorial bulge.

For points outside this Brillouin sphere, traditional methods based on spherical harmonic (SH) expansions produce a good approximation of the real Earth’s gravitational field. But for points inside it – that is, for everywhere on or near the Earth’s surface below the peak of Mount Chimborazo – these same SH expansions generate erroneous predictions.

A team of physicists and mathematicians from Ohio State University and the University of Connecticut in the US has now probed the difference between the model’s predictions and the actual field. Led by Ohio State geophysicist Michael Bevis, the team showed that the SH expansion equations diverge below the Brillouin sphere, leading to errors. They also quantified the scale of these errors.

Divergence is a genuine problem

Bevis explains that the initial motivation for the study was to demonstrate through explicit examples that a mathematical theory proposed by Ohio State’s Ovidiu Costin and colleagues in 2022 was correct. This landmark paper was the first to show that SH expansions of the gravitational potential always diverge below the Brillouin sphere, but “at the time, many geodesists and geophysicists found the paper somewhat abstract”, Bevis observes. “We wanted to convince the physics community that divergence is a genuine problem, not just a formal mathematical result. We also wanted to show how this intrinsic divergence produces model prediction errors.”

In the new study, the researchers demonstrated that divergence-driven prediction error increases exponentially with depth beneath the Brillouin sphere. “Furthermore, at a given point in free space beneath the sphere, we found that prediction error decreases as truncation degree N increases towards its optimal value, N_opt,” explains Bevis. Beyond this point, however, “further increasing N will cause the predictions of the model to degrade [and] when N ≫ N_opt, prediction error will grow exponentially with increasing N.”
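The divergence itself is easy to reproduce without the team’s polyhedral machinery. The sketch below is a toy model, not their simulator: the potential of a point mass sitting a distance a from the coordinate origin has a zonal (Legendre) spherical-harmonic series that converges only outside the sphere of radius a through the mass, a stand-in for the Brillouin sphere. Evaluating the truncated series inside that sphere shows the behaviour Bevis describes: beyond a fairly small truncation degree the error stops improving and eventually grows exponentially, whereas outside the sphere the series converges rapidly.

```python
import numpy as np

def legendre_P(nmax, x):
    """P_0(x) .. P_nmax(x) via the standard three-term recurrence."""
    P = np.zeros(nmax + 1)
    P[0] = 1.0
    if nmax >= 1:
        P[1] = x
    for n in range(1, nmax):
        P[n + 1] = ((2*n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
    return P

def truncated_series(r, theta, a, N, GM=1.0):
    """Degree-N zonal spherical-harmonic series for a point mass GM
    located a distance a from the origin (a toy 'planet')."""
    P = legendre_P(N, np.cos(theta))
    n = np.arange(N + 1)
    return (GM / r) * np.sum((a / r)**n * P)

def exact_potential(r, theta, a, GM=1.0):
    return GM / np.sqrt(r**2 + a**2 - 2*a*r*np.cos(theta))

a, theta = 1.0, np.radians(60)
for r in (1.2, 0.95):               # outside, then inside, the r = a sphere
    print(f"r = {r}a   exact = {exact_potential(r, theta, a):.6f}")
    for N in (10, 40, 160, 640):
        err = truncated_series(r, theta, a, N) - exact_potential(r, theta, a)
        print(f"   N = {N:4d}   error = {err:+.3e}")
```

Outside the sphere the error falls towards machine precision as N increases; inside it, the same series ultimately diverges no matter how many terms are kept.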

The most important practical consequence of the analysis, he tells Physics World, was that it meant they could quantify the effect of this mathematical result on the prediction accuracy of any gravitational model formed from a so-called truncated SH expansion – or SH polynomial – anywhere on or near the surface of the Earth.

Synthetic planetary models

The researchers obtained this result by taking a classic theory developed by Robert Werner of the University of Texas at Austin in 1994 and using it to write code that simulates the gravitational field created by a polyhedron of constant density. “This code uses arbitrary precision arithmetic,” explains Bevis, “so it can compute the gravitational potential and gravitational acceleration g anywhere exterior to a synthetic planet composed of hundreds or thousands of faces with a triangular shape.

“The analysis is precise to many hundreds of significant digits, both above and below the Brillouin sphere, which allowed us to test and validate the asymptotic expression derived by Costin et al. for the upper limit on SH model prediction error beneath the Brillouin sphere.”

The new work, which is described in Reports on Progress in Physics, shows that traditional SH models of the gravitational field are fundamentally flawed when they are applied anywhere near the surface of the planet. This is because they are attempting to represent a definite physical quantity with a series that is actually locally diverging. “Our calculations emphasize the importance of finding a new approach to representing the external gravitational field beneath the Brillouin sphere,” says Bevis. “Such an approach will have to avoid directly evaluating SH polynomials.”

Ultimately, generalizations of the new g simulator will help researchers formulate and validate the next generation of global gravity models, he adds. This has important implications for inertial navigation and perhaps even the astrophysics of exoplanets.

The team is now working to improve the accuracy of its gravity simulator so that it can better model planets with variable internal density and more complex topography. They are also examining analytical alternatives to using SH polynomials to model the gravitational field beneath the Brillouin sphere.

The post Classical models of gravitational field show flaws close to the Earth appeared first on Physics World.

Battery boss: physicist Martin Freer will run UK’s Faraday Institution https://physicsworld.com/a/battery-boss-physicist-martin-freer-will-run-uks-faraday-institution/ Tue, 25 Jun 2024 13:00:54 +0000 https://physicsworld.com/?p=115245 Freer will take up the role on 2 September replacing Pam Thomas as chief executive officer

The nuclear physicist Martin Freer is to be the next chief executive of the Faraday Institution – the UK’s independent institute for electrochemical energy-storage research. Freer, who is currently based at the University of Birmingham, will take up the role on 2 September. He replaces the condensed-matter physicist Pam Thomas, who stepped down in April after almost four years as boss.

The Faraday Institution was set up in 2017 to help research scientists and industry experts to reduce the cost and weight of batteries and improve their performance and reliability. From its base at the Harwell Science and Innovation Campus in Oxfordshire, it carries out research, training, market analysis and early-stage commercialization, with the research programme currently involving 27 UK universities and 50 businesses.

With a PhD in nuclear physics from Birmingham, Freer has held a number of high-profile roles in the energy sector, including director of the Birmingham Centre for Nuclear Education and Research, which he established in 2010. Five years later he became director of the university’s Birmingham Energy Institute.

Freer also steered activity on the influential Physics Powering the Green Economy report released last year by the Institute of Physics, which publishes Physics World. The report set out the role that physics and physicists can play in fostering the green economy.

Freer told Physics World that joining the Faraday Institution is a “tremendous opportunity”, especially when it comes to the transition to electric vehicles and ensuring that UK battery innovation plays an integral part.

“Energy storage is going to be needed to manage our future energy system from domestic to grid scale and there is a crucial role for the Faraday Institution to play,” says Freer. “This is a globally competitive sector, and the UK needs to ensure it does not lose the advantage it has created for itself through the Faraday Battery Challenge.”

Theoretical physicist Steven Cowley, who is chair elect of the Faraday Institution, notes that Freer is a “proven leader” and is a “terrific fit” for the institution.

“[Freer] knows first-hand what it takes to work with industry and policy makers to translate research into future energy technologies on the ground,” notes Cowley, who is director of the Princeton Plasma Physics Laboratory in the US. “[He] will help to accelerate its mission as it further establishes itself in the UK’s research ecosystem.”

The post Battery boss: physicist Martin Freer will run UK’s Faraday Institution appeared first on Physics World.

Dark matter’s secret identity: WIMPs or axions? https://physicsworld.com/a/dark-matters-secret-identity-wimps-or-axions/ Tue, 25 Jun 2024 10:00:06 +0000 https://physicsworld.com/?p=115005 Keith Cooper explores rival theories, ambitious experiments and the ongoing race to understand why so much of the universe is invisible

A former South Dakota gold mine is the last place you might think to look to solve one of the universe’s biggest mysteries. Yet what lies buried in the Sanford Underground Research Facility, 1.47 km beneath the surface, could be our best chance of detecting the ghost of the galaxy: dark matter.

Deep within those old mine tunnels, accessible only by a shaft from the surface, is seven tonnes of liquid xenon, sitting perfectly still (figure 1).

This is the LUX-ZEPLIN (LZ) experiment. It’s looking for the tiny signatures that dark matter is predicted to leave in its wake as it passes through the Earth. To have any chance of success, LZ needs to be one of the most sensitive experiments on the planet.

“The centre of LZ, in terms of things happening, is the quietest place on Earth,” says Chamkaur Ghag, a physicist from University College London in the UK, and spokesperson for the LZ collaboration. “It is the environment in which to look for the rarest of interactions.”

For more than 50 years astronomers have puzzled over the nature of the extra gravitation first observed in galaxies by Vera Rubin, assisted by Kent Ford, who noticed stars orbiting galaxies under the influence of more gravity than could be accounted for by visible matter. (In the 1930s Fritz Zwicky had noticed a similar phenomenon in the movement of galaxies in the Coma Cluster.)

Most (though not all – see part one of this series “Cosmic combat: delving into the battle between dark matter and modified gravity“) scientists believe this extra mass to be dark matter. “We see these unusual gravitational effects, and the simplest explanation for that, and one that seems self-consistent so far, is that it’s dark matter,” says Richard Massey, an astrophysicist from Durham University in the UK.

The standard model of cosmology tells us that about 27% of all the matter and energy in the universe is dark matter, but no-one knows what it actually is. One possibility is a hypothetical breed of particle called a weakly interacting massive particle (WIMP), and it is these particles that LZ is hoping to find. WIMPs are massive enough to produce a substantial gravitational field, but they otherwise only gently interact with normal matter via the weak force.

With more questions than answers, the search for dark matter is heading for a showdown

“The easiest explanation to solve dark matter would be a fundamental particle that interacts like a WIMP,” says Ghag. Should LZ fail in its mission, however, there are other competing hypotheses. One in particular that is lurking in the wings is a lightweight competitor called the axion.

Experiments are under way to pin down this vast, elusive portion of the cosmos. With more questions than answers, the search for dark matter is heading for a showdown.

Going deep underground

According to theory, as our solar system cruises through space we’re moving through a thin fog of dark matter. Most of the dark-matter particles, being weakly interacting, would pass through Earth, but now and then a WIMP might interact with a regular atom.

This is what LZ is hoping to detect, and the seven tonnes of liquid xenon are designed to be a perfect WIMP trap. The challenge the experiment faces is that even if a WIMP were to interact with a xenon atom, it has to be differentiated from the other particles and radiation, such as gamma rays, that could enter the liquid.

1 Buried treasure

The LUX-ZEPLIN (LZ) experiment

The seven-tonne tank of liquid xenon that comprises the LZ detector. The experiment is located almost a mile beneath the Earth to reduce background effects, which astronomers hope will enable them to identify weakly interacting massive particles (WIMPs).

Both a gamma ray and a WIMP can create a cloud of ionized free electrons inside the detector, and in both cases, when the ionized electrons recombine with the xenon atoms, they emit flashes of light. But both mechanisms are slightly different, and LZ is designed to detect the unique signature of a WIMP interaction.

When a gamma ray enters the detector it can interact with an electron in the xenon, which flies off and causes a chain of ionizations by interacting with other neighbouring electrons. The heavy WIMP, however, collides with the xenon nucleus, sending it spinning through the liquid, bumping into other nuclei, and indirectly ionizing a few atoms along the way.

To differentiate these two events, a potential difference of a few tens of kilovolts is applied across the xenon tank, creating an electric field that draws some of the ionized electrons toward the top of the tank before they can recombine. When these electrons reach the top, they enter a thin layer of gas and produce a second burst of light.

When a gamma ray enters the tank, the second flash is brighter than the first – the recoil electron flies off like a bullet, and most of the electrons it liberates are pulled up by the detector before they recombine.

A nucleus is much heavier than an electron, so when a WIMP interacts with the xenon, the path of the recoil is shorter. The cloud of electrons generated by the interaction is therefore localized to a smaller area and more of the electrons find a “partner” ion to recombine with before the electric field can pull them away. This means that for a WIMP, the first flash is brighter than the second.

In practice, there is a range of brightnesses depending upon the energies of the particles, but statistically an excess of brighter first flashes above a certain background level would be a strong signature of WIMPs.
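In the standard jargon of two-phase xenon detectors the prompt flash is called S1 and the delayed flash from the drifted electrons S2, and the logic described above amounts to comparing the two. The sketch below is a deliberately crude caricature of that comparison – real LZ analyses work with calibrated S1–S2 bands and a full background model rather than a single cut – and the event values are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    s1: float   # brightness of the first (prompt) flash, arbitrary units
    s2: float   # brightness of the second flash, from electrons drifted into the gas layer

def looks_like_nuclear_recoil(evt: Event) -> bool:
    """Caricature of the discrimination described above: a WIMP-like nuclear
    recoil gives a brighter first flash relative to the second, whereas a
    gamma-ray (electron recoil) gives a brighter second flash. Real analyses
    use calibrated S2/S1 bands, not a single hard cut like this."""
    return evt.s1 > evt.s2

# Invented example events, purely illustrative
events = [Event(s1=120, s2=90), Event(s1=40, s2=260), Event(s1=75, s2=70)]
for i, evt in enumerate(events):
    label = ("WIMP candidate (nuclear recoil-like)" if looks_like_nuclear_recoil(evt)
             else "background-like (electron recoil)")
    print(f"event {i}: S1={evt.s1:>5.0f}  S2={evt.s2:>5.0f}  ->  {label}")
```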

“Looking for dark matter experimentally is about understanding your backgrounds perfectly,” explains Ghag. “Any excess or hint of a signal above our expected background model – that’s what we’re going to use to ascribe statistical significance.”

LZ has been up and running since late 2021 and has completed about 5% of its search. Before it could begin its hunt, the project had to endure a five-year process of screening every component of the detector, to make sure that the background effects of every nut, bolt and washer were accounted for.

WIMPs in crisis?

How many, if any, WIMPs are detected will inform physicists about the interaction cross-section of the dark-matter particle – meaning how likely it is to interact with normal matter it comes into proximity with.

The timing couldn’t be more crucial. Some of the more popular WIMP candidates are predicted by a theory called “supersymmetry”, which posits that every particle in the Standard Model has a more massive “superpartner” with a different quantum spin. Some of these superpartners were candidates for WIMPs but the Large Hadron Collider (LHC) has failed to detect them, throwing the field – and the hypothetical WIMPs associated with them – into crisis.

Francesca Chadha-Day, a physicist who works at Durham University and who studies dark-matter candidates based on astrophysical observations, thinks time may be up for supersymmetry. “The standard supersymmetric paradigm hasn’t materialized, and I think it might be in trouble,” she says.

Ruling out WIMPs now would be like building the LHC but stopping before turning it on

Chamkaur Ghag

She does, however, stress that supersymmetry is “only one source of WIMPs”. Supersymmetry was proposed to explain certain problems in physics, such as why gravity is more feeble than the weak force. Even if supersymmetry is a dead end, there are alternative theories to solve these problems that also predict the existence of particles that could be WIMPs.

“It’s way too early to give up on WIMPs,” adds Ghag. LZ needs to run for at least 1000 days to reach its full sensitivity and he says that ruling out WIMPs now would be “like building the LHC but stopping before turning it on”.

The axion universe

With question marks nevertheless hanging over WIMPs, an alternative type of dark-matter particle has been making waves.

Dubbed axions, Chadha-Day describes them as “dark matter for free”, because they were developed to solve an entirely different problem.

“There’s this big mystery in particle physics that we call the Strong CP Problem,” says Chadha-Day. C refers to charge and P to parity. CP symmetry says that if you swap a particle for its oppositely charged antiparticle and reflect it in a spatial mirror, the laws of physics should carry on working in exactly the same way.

The Standard Model predicts that the strong force, which glues quarks together inside protons and neutrons, should actually violate CP symmetry. Yet in practice, it plays ball with the conservation of charge and parity. Something is intervening and interacting with the strong force to maintain symmetry. This something is proposed to be the axion.

“The axion is by far the most popular way of solving the Strong CP Problem because it is the simplest,” says Chadha-Day. “And then when you look at the properties of the axion you also find that it can act as dark matter.”

Supersymmetry’s difficulties have seen a recent boom in support for axions as dark matter

These properties include rarely interacting with other particles and sometimes being non-relativistic, meaning that some axions would move slowly enough to clump into haloes around galaxies and galaxy clusters, which would account for their additional mass. Like WIMPs, however, axions have yet to be detected.

Supersymmetry’s difficulties have seen a recent boom in support for axions as dark matter. “There are strong motivations for axions,” says Ghag, “Because they could exist even if they are not dark matter.”

Lensing patterns

Axions are predicted to be lighter than WIMPs and to interact with matter via the electromagnetic force (and gravity) rather than the weak force. Experiments to directly detect axions use magnetic fields, because in their presence an axion can transform into a photon. However, because axions might exist even if they aren’t dark matter, to test them against WIMPs, physicists have to take a different approach.

The extra mass from dark matter around galaxies and galaxy clusters can bend the path of light coming from more distant objects, magnifying them and warping their appearance, sometimes even producing multiple images (figure 2). The shape and degree of this effect, called “gravitational lensing”, is impacted by the distribution of dark matter in the lensing galaxies. WIMPs and axions are predicted to distribute themselves slightly differently, so gravitational lensing can put the competing theories to the test.

2 Seeing quadruple

Lensing effects around six astronomical objects

Galaxies and galaxy clusters can bend the light coming from bright background objects such as quasars, creating magnified images. If the lensing effect is strong, as in these images, we may even observe multiple images of a single quasar. The top right image shows quasar HS 0810+2554 (see figure 4).

If dark matter is WIMPs, then they will form a dense clump at the centre of a galaxy, smoothly dispersing with increasing distance. Axions, however, operate differently. “Because axions are so light, quantum effects become more important,” says Chadha-Day.

These effects should show up on large scales – the axion halo around a galaxy is predicted to exhibit long-range quantum interference patterns, with the density fluctuating in peaks and troughs thousands of light-years across.
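That length scale follows from the axion’s de Broglie wavelength, λ = h/(mv). The article doesn’t quote a mass, so the numbers below are assumptions: 10⁻²² eV/c² is a commonly used “ultralight” benchmark, and 200 km/s is a typical speed for material orbiting in a galactic halo.

```python
import math

h  = 6.626e-34          # Planck constant, J s
c  = 2.998e8            # speed of light, m/s
eV = 1.602e-19          # joules per electronvolt
ly = 9.461e15           # metres per light-year

m_axion_eV = 1e-22      # ASSUMED ultralight-axion mass in eV/c^2 (not from the article)
v          = 2.0e5      # ASSUMED typical halo velocity, m/s (~200 km/s)

m_kg = m_axion_eV * eV / c**2
lam  = h / (m_kg * v)   # de Broglie wavelength, metres

print(f"de Broglie wavelength = {lam:.2e} m = {lam/ly:.0f} light-years")
# For these assumed numbers the wavelength comes out at roughly two thousand
# light-years -- the "thousands of light-years" scale quoted in the text.
```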

Gravitational lensing could potentially be used to reveal these patterns, using something called the “critical curve”. Think of a gravitational lens as a series of lines where space has been warped by matter, like on a map where the contour lines indicate height. The critical curve is where the contours bunch up the most (figure 3).

3 Cosmic cartography

Gravitational lensing around the Abell 1689 galaxy cluster

Gravitational lensing around the Abell 1689 galaxy cluster. Red lines indicate the critical curve where magnification is infinite and yellow contours indicate the regions of the sky where objects are magnified by more than a factor of 10.

Critical curves “are lines of sight in the universe where you get enormous magnification in gravitational lensing, and they have different patterns depending on whether dark matter is WIMPs or axions”, says Massey. With axions, the quantum interference pattern can render the critical curve wavy.
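To see why critical curves mark out “enormous magnification”, it helps to look at the simplest possible lens – a single point mass – rather than the smooth or wavy halo models compared in the actual study. For a point lens, the total magnification of a background source depends only on u, the source’s offset from the lens in units of the Einstein radius, and it diverges as the source approaches the caustic at u = 0, which maps onto the critical curve on the sky. A minimal illustration:

```python
import math

def point_lens_magnification(u: float) -> float:
    """Total magnification of a point-mass lens for a source offset u
    (in units of the Einstein radius). Diverges as u -> 0."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

for u in (2.0, 1.0, 0.3, 0.1, 0.01, 0.001):
    print(f"u = {u:6.3f}   magnification = {point_lens_magnification(u):10.1f}")
```

Real lensing galaxies are extended mass distributions, so their critical curves are loops rather than a single Einstein ring, but the principle is the same: sources near a caustic are enormously magnified, and the shape of those loops encodes the underlying dark-matter distribution.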

In 2023 a team led by Alfred Amruth of the University of Hong Kong found some evidence of wavy effects in the critical curve. They studied the quasar HS 0810+2554 – the incredibly luminous core of a distant galaxy that is being gravitationally lensed (we can see four images of it from Earth) by a foreground object. They found that the lensing pattern could be better explained by axions than WIMPs (figure 4), though because they only studied one system, this is far from a slam dunk for axions.

Dark-matter interactions

Massey prefers not to tie himself to any one particular model of dark matter, instead opting to take a phenomenological approach. “I look to test whether dark-matter particles can interact with other dark-matter particles,” he says. Measuring how much dark matter interacts with itself (another kind of cross section) can be used to narrow down its properties.

4 Making waves

Comparison of the shapes of gravitational lenses from four models of dark matter

The shape of a gravitational lens would change depending on whether dark matter is WIMPs or axions. Alfred Amruth and colleagues developed a model of the gravitational lensing of quasar HS 0810+2554 (see figure 2). Light from the quasar is bent around a foreground galaxy, and the shape of the gravitational lensing depends on the properties of the dark matter in the galaxy. The researchers tested models of both WIMP-like and axion-like dark matter.

The colours indicate the amount of magnification, with the light blue lines representing the critical curves of high magnification. Part a shows a model of WIMP-like dark matter, whereas b, c and d show different models of axionic dark matter. Whereas the WIMP-like critical curve is smooth, the interference between the wavelike axion particles makes the critical curve wavy.

The best natural laboratories in which to study dark matter interacting with itself are galaxy cluster collisions, where vast quantities of matter and, theoretically, dark matter collide. If dark-matter halos are interacting with each other in cluster collisions, then they will slow down, but how do you measure this when the objects in question are invisible?

“This is where the bits of ordinary matter are actually useful,” says Massey. Cluster collisions contain both galaxies and clouds of intra-cluster hydrogen. Using gravitational lensing, scientists can work out where the dark matter is in relation to these other cosmic objects, which can be used to work out how much it is interacting.

The galaxies in clusters are so widely spaced that they sail past each other during the collision. By contrast, intra-cluster hydrogen gas clouds are so vast that they can’t avoid each other, and so they don’t move very far. If the dark matter doesn’t interact with itself, it should be found out with the galaxies. If the interaction is strong, however, it will be located with the hydrogen clouds. If it interacts just a bit, then the dark matter will be somewhere in-between. Its location can therefore be used to estimate the interaction cross-section, and this value can be handed to theorists to test which dark-matter model best fits the bill.

High-altitude astronomy

The problem is that cluster collisions can take a hundred million years to run their course. What’s needed is to see galaxy cluster collisions at all stages, with different velocities, from different angles.

Enter SuperBIT – the Superpressure Balloon-borne Imaging Telescope, on which Massey is the UK principal investigator. Reaching 40 km into the atmosphere while swinging beneath a super-pressure balloon provided by NASA, SuperBIT was a half-metre aperture telescope designed to map dark matter in as many galaxy-cluster collisions as possible, to piece together the stages of such a collision.

SuperBIT flew five times, embarking on its first test flight in September 2015 (figure 5). “We would bring it back down, tinker with it, improve it and send it back up again, and by the time of the final flight it was working really well,” says Massey.

5 Far from home

Photo of the Earth taken from the SuperBIT telescope

The SuperBIT telescope took gravitational lensing measurements of cluster collisions to narrow down the properties of dark matter. This photo of the Earth was taken from SuperBIT during one of its five flights.

That final flight took place during April and May 2023, launching from New Zealand and journeying around the Earth five and a half times. The telescope parachuted to its landing site in Argentina, but while it touched down well enough, the release mechanism had frozen in the stratosphere and the parachute did not detach. Instead, the wind caught it and dragged SuperBIT across the landscape.

“It went from being aligned to within microns to being aligned within kilometres! The whole thing was just a big pile of mirrors and metal, gyroscopes and hard drives strewn across Argentina, and it was heart-breaking,” says Massey, who laughs about it now. Fortunately, the telescope had worked brilliantly and all the data had been downloaded to a remote drive before catastrophe struck.

As long as a detection remains elusive, the identity of dark matter will continue to be a sore point for astronomers and physicists

The SuperBIT team is working through that data now. If there is any evidence that dark-matter particles have collided, the resulting estimate of the interaction cross-section will point to specific theoretical models and rule out others.

Astronomical observations can guide us, but only a positive detection of a dark-matter particle in an experiment such as LZ will settle the matter. As long as a detection remains elusive, the identity of dark matter will continue to be a sore point for astronomers and physicists. It also keeps the door ajar for alternative theories, and proponents of modified Newtonian dynamics (MOND) are already trying to exploit those cracks, as we shall see in the third and final part of this series.

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales

The post Dark matter’s secret identity: WIMPs or axions? appeared first on Physics World.

Waffle-shaped solar evaporator delivers durable desalination https://physicsworld.com/a/waffle-shaped-solar-evaporator-delivers-durable-desalination/ Tue, 25 Jun 2024 08:00:15 +0000 https://physicsworld.com/?p=115230 A novel solar distiller design prevents salt crystallization to provide cost-effective and durable water purification

Water is a vital resource for society and is one of the main focus areas of the United Nations Sustainable Development Goals. However, around two-thirds of the world’s population still lacks regular access to freshwater, facing water scarcity for at least a month each year.

At the same time, every two minutes a child dies from water-, sanitation- and hygiene-related diseases, and freshwater sources are becoming ever more polluted, putting further stress on water supplies. With so many water-related challenges around the world, new ways of producing freshwater are being sought. In particular, solar steam-based desalination methods are seen as a green way of producing potable water from seawater.

Solar steam generation a promising approach

There are various water treatment technologies available today, but one that has gathered a lot of attention lately is solar steam generation. Interfacial solar absorbers convert solar energy into heat to remove the salt from seawater and produce freshwater. By localizing the absorbed energy at the surface, interfacial solar absorbers reduce heat loss to bulk water.

Importantly, solar absorbers can be used off-grid and in remote regions, where potable water access is the most unreliable. However, many of these technologies cannot yet be made at scale because of salt crystallization on the solar absorber, which reduces both the light absorption and the surface area of the interface. Over time, the solar absorption capabilities become reduced and the supply of water becomes obstructed.

Quasi-waffle design could prevent crystallization

To combat the salt crystallization challenge, researchers in China have developed a waffle-shaped solar evaporator (WSE). The WSE is made of a graphene-like porous monolith, fabricated via a zinc-assisted pyrolysis route using biomass and recyclable zinc as the precursor materials.

First authors Yanjun Wang and Tianqi Wei from Nanjing University and their colleagues designed the WSE with a basin and ribs, plus extra sidewalls (that conventional plane-shaped solar evaporators don’t have) to drive the Marangoni effect in the device. The Marangoni effect is the flow of fluid from regions with low surface tension to those of high surface tension. Marangoni effects can be induced by both gradients in solute concentration or in temperature – and the WSE’s extra sidewalls trigger both effects.

Schematic of waffle-shaped solar evaporator

As the saltwater evaporates, the faster evaporation and more efficient heat consumption on the plateaus than in the basins create gradients in solute concentration and temperature. These gradients, in turn, set up a surface-tension gradient along the sidewalls, which induces solute- and temperature-driven Marangoni flows in the same direction.

The two Marangoni effects increase the convection of fluid in the device, accelerating the transport of salt ions and diluting the maximum salinity of the system below the critical saturation value – therefore preventing salt crystallization from occurring. This leads to continuous salt rejection with reduced fouling at the interface.

The WSE delivers a solar absorption of 98.5% and high evaporation rates of 1.43 kg/m²/h in pure water and 1.40 kg/m²/h in seawater. In an outdoor experiment using a prototype WSE to treat a brine solution, the device produced freshwater at up to 2.81 l/m² per day and exhibited continuous operation for 60 days without requiring cleaning.
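Those two figures are broadly consistent with each other. Treating a litre of freshwater as a kilogram, the ratio of the outdoor daily yield to the laboratory evaporation rate (assumed here to have been measured under one-sun illumination) gives the number of “full-sun-equivalent” hours the prototype effectively delivered each day – an inferred quantity, not one reported in the study:

```python
evap_rate_seawater = 1.40   # kg/m²/h in seawater (reported; assumed one-sun lab conditions)
daily_yield        = 2.81   # l/m² per day from the outdoor prototype (reported)

# Treat 1 litre of freshwater as 1 kg
equivalent_full_sun_hours = daily_yield / evap_rate_seawater
print(f"= {equivalent_full_sun_hours:.1f} full-sun-equivalent hours per day")
# = 2.0 h -- plausible once variable outdoor sunlight, condensation losses
# and collection efficiency are folded in.
```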

The WSE’s ability to alleviate the salt crystallization issues, combined with its cost-efficiency, means that the device could theoretically be commercially scalable in the future.

Overall, the WSE overcomes the three main obstacles faced when designing solar desalination devices: efficient water evaporation and condensation, and preventing salt fouling. While the device achieved a high desalination stability (evident from the long cleaning cycles), the evaporation rate is currently restricted by the upper limits of a single-stage evaporator. The researchers point out that introducing a multistage evaporator to the system could help improve the solar-to-water efficiency and the freshwater yield of the device. They are now designing such a multistage evaporator to further their current research.

The findings are reported in Science Advances.

The post Waffle-shaped solar evaporator delivers durable desalination appeared first on Physics World.
