The U.S. military is researching possible designs for a new generation of stealthier, faster, more mobile tanks.
For the past 100 years of mechanised warfare, protection for ground-based armoured fighting vehicles and their occupants has boiled down almost exclusively to a simple equation: more armour equals more protection. Weapons’ ability to penetrate armour, however, has advanced faster than armour’s ability to withstand penetration. As a result, achieving even incremental improvements in crew survivability has required significant increases in vehicle mass and cost.
The trend of increasingly heavy, less mobile and more expensive combat platforms has limited Soldiers’ and Marines’ ability to rapidly deploy and manoeuvre in theatre and accomplish missions in varied and evolving threat environments. Moreover, larger vehicles are limited to roads, require more logistical support and are more expensive to design, develop, field and replace. The U.S. military has now reached a point where – considering tactical mobility, strategic mobility, survivability and cost – innovative and disruptive solutions are necessary for a new generation of armoured fighting vehicles.
The Defense Advanced Research Projects Agency (DARPA) has created the Ground X-Vehicle Technology (GXV-T) program to overcome these challenges. GXV-T seeks to investigate revolutionary ground-vehicle technologies that would simultaneously improve the mobility and survivability of vehicles through means other than adding more armour – i.e. avoiding detection, engagement and hits by adversaries. This improved stealth and mobility would enable future U.S. ground forces to more efficiently and cost-effectively tackle the varied and unpredictable combat situations of the 21st century.
“GXV-T’s goal is not just to improve or replace one particular vehicle – it’s about breaking the ‘more armour’ paradigm and revolutionising protection for all armoured fighting vehicles,” says Kevin Massey, DARPA program manager. “Inspired by how X-plane programs have improved aircraft capabilities over the past 60 years, we plan to pursue groundbreaking fundamental research and development to help make future armoured fighting vehicles significantly more mobile, effective, safe and affordable.”
Technical goals include the following improvements relative to today’s armoured fighting vehicles:
Reduce vehicle size and weight by 50 percent
Reduce onboard crew needed to operate vehicle by 50 percent
Increase vehicle speed by 100 percent
Access 95 percent of terrain
Reduce signatures that enable adversaries to detect and engage vehicles
DARPA cites four technical areas in which advanced technologies could be developed to meet the program’s objectives:
Radically enhanced mobility – ability to traverse diverse off-road terrain, including slopes and various elevations; advanced suspensions and novel track/wheel configurations; extreme speed; rapid omnidirectional movement changes in three dimensions
Survivability through agility – autonomously avoid incoming threats without harming occupants through technologies such as agile motion (dodging) and active repositioning of armour
Crew augmentation – improved physical and electronically assisted situational awareness for crew and passengers; semi-autonomous driver assistance and automation of key crew functions, similar to capabilities found in modern commercial airplane cockpits
Signature management – reduction of detectable signatures, including visible, infrared (IR), acoustic and electromagnetic (EM)
DARPA aims to develop GXV-T technologies over a period of 24 months, from 2015 to 2017.
Today – 19th August – is the date when our ecological footprint exceeds our planet's budget for this year.
It has taken less than eight months for humanity to use up nature’s entire budget for the year and go into "ecological overshoot" – according to data from the Global Footprint Network (GFN), an international sustainability think tank with offices in North America, Europe and Asia.
Global Footprint Network monitors humanity’s demand on the planet (ecological footprint) against nature’s biocapacity, i.e. its ability to replenish the planet's resources and absorb waste, including CO2. Earth Overshoot Day marks the date when humanity's footprint in a given year exceeds what Earth can regenerate in that year. Since the year 2000, overshoot has grown, according to GFN’s calculations. Consequently, Earth Overshoot Day has moved from 1st October in 2000 to 19th August this year.
"Global overshoot is becoming a defining challenge of the 21st century. It is both an ecological and an economic problem," says Mathis Wackernagel, president of the GFN and co-creator of the resource accounting metric. "Countries with resource deficits and low income are exceptionally vulnerable. Even high-income countries that have had the financial advantage to shield themselves from the most direct impacts of resource dependence need to realise that a long-term solution requires addressing such dependencies before they turn into a significant economic stress."
In 1961, humanity used just three-quarters of the biocapacity Earth had available that year for generating food, fibre, timber, fish stock and absorbing greenhouse gases. Most countries had biocapacities larger than their own respective footprints. By the early 1970s, economic and demographic growth had increased humanity’s footprint beyond what the planet could renewably produce. We went into ecological overshoot.
Today, 86 percent of the world's population lives in countries that demand more from nature than their own ecosystems can renew. According to the GFN's calculations, it would take 1.5 Earths to produce the renewable resources necessary to support humanity’s current footprint. Future trends in population, energy, food and other resource consumption indicate this will rise to three planets by the 2050s, which could be physically unfeasible.
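GFN dates Earth Overshoot Day by comparing the planet’s annual biocapacity with humanity’s footprint: the ratio biocapacity/footprint gives the share of the year the budget covers. A rough sketch of that accounting follows (GFN’s own accounts use far finer-grained data, which is why the headline date is 19th August rather than the figure this toy calculation gives):

```python
from datetime import date, timedelta

def overshoot_day(biocapacity_earths, footprint_earths, year=2014):
    """Date when the year's biocapacity budget is exhausted."""
    fraction = biocapacity_earths / footprint_earths  # share of the year covered
    day_of_year = int(fraction * 365)
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

# A footprint of 1.5 Earths exhausts the budget about two-thirds
# of the way through the year.
print(overshoot_day(1.0, 1.5))  # 2014-08-31
```

At a footprint of exactly 1.0 Earths, the budget runs out only on the last day of the year, which is why overshoot was unknown before the early 1970s.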
The costs of our ecological overspending are becoming more evident by the day. The "interest" we are paying on our mounting ecological debt – in the form of deforestation, freshwater scarcity, soil erosion, biodiversity loss and the build-up of CO2 in our atmosphere – also comes with mounting human and economic costs.
Governments that ignore resource limits in their decision-making put their long-term economic performance at risk. In times of persistent overshoot, countries running biocapacity deficits will find that reducing their resource dependence is aligned with their self-interest. Conversely, countries that are endowed with biocapacity reserves have an incentive to preserve these ecological assets, which constitute a growing competitive advantage in a world of tightening ecological constraints.
More and more countries are taking action in a variety of ways. The Philippines is on track to adopt the GFN's Ecological Footprint at the national level – the first country in Southeast Asia to do so – via its National Land Use Act. This policy, the first of its kind in the Philippines, is designed to protect areas from haphazard development and plan for the country's use and management of its own physical resources. Legislators are seeking to integrate the Ecological Footprint metric into this national policy, putting resource limits at the centre of decision-making.
The United Arab Emirates (UAE), a high-income country, intends to significantly reduce its per capita Ecological Footprint – one of the world’s highest – starting with carbon emissions. Its Energy Efficiency Lighting Standard will result in only energy-efficient indoor-lighting products being made available throughout the territory before the end of this year.
Morocco wants to collaborate with the Global Footprint Network on a review of the nation’s 15-year strategy for sustainable development in agriculture – Plan Maroc Vert – through the lens of the Ecological Footprint. Specifically, Morocco is interested in comprehensively assessing how the plan contributes to the sustainability of the agriculture sector, as well as a society-wide transition towards sustainability.
Regardless of a nation’s specific circumstances, incorporating ecological risk into economic planning and development strategy is not just about foresight – it has become an urgent necessity.
Laser physicists have found a way to make atomic-force microscope probes 20 times more sensitive and capable of detecting forces as small as the weight of an individual virus.
The technique – developed by researchers in the Quantum Optics Group of the Australian National University, Canberra – uses laser beams to cool a nanowire probe to minus 265 degrees Celsius.
“The level of sensitivity achieved after cooling is accurate enough for us to sense the weight of a large virus, 100 billion times lighter than a mosquito,” said Professor Ping Koy Lam, the leader of the Quantum Optics Group.
This could be used to improve the resolution of atomic-force microscopes, which are state-of-the-art tools for measuring nanoscopic structures and the tiny forces between molecules. Atomic force microscopes can achieve ultra-sensitive measurements of microscopic features by scanning a wire probe over a surface. However, such probes – around 500 times finer than a human hair – are prone to vibration.
“At room temperature the probe vibrates, just because it is warm, and this can make your measurements noisy,” said co-author Dr Ben Buchler. “We can stop this motion by shining lasers at the probe.”
Credit: Quantum Optics Group, Australian National University
The force sensor, pictured above, was a 200 nm-wide silver-gallium nanowire coated with gold.
“The laser makes the probe warp and move due to heat. But we have learned to control this warping effect, and were able to use the effect to counter the thermal vibration of the probe,” said Giovanni Guccione, a PhD student on the team.
However, the probe cannot be used while the laser is on, as the laser’s effect overwhelms the sensitive probe. The laser therefore has to be switched off, and measurements made quickly within the few milliseconds before the probe heats up again. By repeating measurements over a number of heating/cooling cycles, accurate values can be determined.
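The benefit of repeating the cycle is statistical: the random thermal noise in each short laser-off window averages away as more windows are combined. A minimal illustration, using made-up numbers rather than the group’s data:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

TRUE_VALUE = 1.0   # the quantity being sensed (arbitrary units, hypothetical)
NOISE = 0.5        # per-window thermal/readout noise (illustrative)

def one_window():
    """One laser-off measurement window: true value plus random noise."""
    return TRUE_VALUE + random.gauss(0, NOISE)

# Averaging over many heating/cooling cycles shrinks the statistical
# error roughly as 1/sqrt(N), converging on the true value.
for n in (10, 10000):
    estimate = statistics.fmean(one_window() for _ in range(n))
    print(n, round(estimate, 3))
```

With ten windows the estimate is still noisy; with ten thousand it sits very close to the true value.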
“We now understand this cooling effect really well,” says Harry Slatyer, another PhD student. “With clever data processing, we might be able to improve the sensitivity, and even eliminate the need for a cooling laser.”
A huge, self-organising robot swarm consisting of 1,024 individual machines has been demonstrated by Harvard.
Swarm robotics is an emerging field of technology involving the coordination of multiple robots to perform a group task. By combining a large number of machines, it is possible to create a hive intelligence – capable of much greater achievements than a lone individual. In the same way that insects such as ants, bees and termites cooperate, researchers can build wireless networks of machines able to sense, navigate and communicate information about their surroundings.
Recent efforts have included a formation of 20 "droplets" created by the University of Colorado, a group of 40 robots developed at the Sheffield Centre for Robotics, and drones using augmented reality to produce "spatially targeted communication and self-assembly". Although impressive, those projects – and others since – have lacked the raw numbers to be considered a genuine "swarm" like the creatures mentioned earlier. This week, however, scientists at Harvard took research in the field to a whole new level, by demonstrating a network of more than 1,000 machines working simultaneously.
Known as "Kilobots", these devices are just a few centimetres across, roughly the size of a U.S. quarter. Each is equipped with tiny vibrating motors allowing them to slide across a surface, using an infrared transmitter and receiver to alert their neighbours and measure their proximity. From just a simple command, they can arrange themselves into a variety of complex shapes and patterns.
In 2011, Harvard developed and licensed open-source hardware and software to improve the algorithms used in machine networks. A report showed how groups of 25 Kilobots – demonstrating behaviours such as foraging, formation control and synchronisation – had the potential to scale to much larger numbers. Following three years of further testing and experimentation, the university has now succeeded in coordinating a swarm of 1,024 units.
The new, smarter algorithm enables the Kilobots to correct their own mistakes, avoiding traffic jams and errors that would otherwise become more likely in larger-scale groups. If an individual deviates off course, nearby robots can sense the problem and cooperate to fix it. As robots become cheaper and more numerous, with a continued trend in miniaturisation, this form of social behaviour could lead to revolutionary applications in the future.
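The published Kilobot self-assembly approach builds on simple primitives, one of which is gradient formation: a seed robot broadcasts the value 0, and every other robot sets its value to one more than the smallest value it hears from a neighbour, giving each robot its hop distance from the seed. A centralised breadth-first sketch of the result (the real version runs distributed on each robot):

```python
from collections import deque

def gradient_values(neighbours, seed):
    """Hop distance from the seed for each robot: each robot's value is
    one more than the minimum value among its radio neighbours."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        robot = queue.popleft()
        for nb in neighbours[robot]:
            if nb not in dist:
                dist[nb] = dist[robot] + 1
                queue.append(nb)
    return dist

# A small chain of robots: 0 is the seed; each hears only its immediate
# neighbours over infrared.
links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(gradient_values(links, 0))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Combined with edge-following and trilateration from neighbour distances, gradients like these let each robot work out where it sits within the growing shape.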
As Professor Radhika Nagpal explains in a press release: “Increasingly, we’re going to see large numbers of robots working together – whether it's hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways. Understanding how to design ‘good’ systems at that scale will be critical. We can simulate the behaviour of large swarms of robots, but a simulation can only go so far. The real-world dynamics – the physical interactions and variability – make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognise and prevent the failures that occur at these large scales.”
These latest developments are reported in the peer-reviewed journal Science.
From next week, guests at the Aloft hotel chain may feel like they are living in the future, as a new robotic butler offers its services.
Aloft Hotels has announced A.L.O. as the company’s first “Botlr” (robotic butler). This futuristic service will be introduced on 20th August, making Aloft the first major hotel brand to hire a robot for both front and back of house duties.
In this role, A.L.O. will be on call 24/7 as a robotic operative, assisting the human staff in delivering amenities to guest rooms. Professionally “dressed” in a custom shrink-wrapped, vinyl collared uniform and nametag, A.L.O. can modestly accept tweets as tips. It will not only free up time for employees, allowing them to create a more personalised experience for guests, but will also enhance the hotel’s image and technological features.
Brian McGuinness, Global Brand Leader: “As you can imagine, hiring for this particular position was a challenge as we were seeking a very specific set of automated skills, and one that could work – literally – around the clock. As soon as A.L.O. entered the room, we knew it was what we were looking for. A.L.O. has the work ethic of Wall-E, the humour of Rosie from The Jetsons and reminds me of my favourite childhood robot – R2-D2. We are excited to have it join our team.”
A.L.O. was developed by Silicon Valley-based Savioke – a new startup, funded by Google Ventures, whose launch the robotics community has been eagerly anticipating. It uses a combination of sonar, lasers and cameras to avoid people and obstacles. It can facilitate and prioritise multiple guest deliveries, communicate easily with guests and various hotel platforms, and efficiently navigate throughout the property – even riding the elevator, which it calls via WiFi.
Steve Cousins, CEO of Savioke: “We are thrilled to introduce our robot to the world today through our relationship with Aloft Hotels. In our early testing, all of us at Savioke have seen the look of delight on those guests who receive a room delivery from a robot. We have also seen the front desk get busy at times, and expect Botlr will be especially helpful at those times, freeing up human talent to interact with guests on a personal level.”
The first A.L.O. reports for duty next week at Aloft Cupertino, next to the Apple HQ. If successful, all 100 of the company's hotels may introduce them during 2015. In the future, Cousins predicts a huge market for service robots like A.L.O.: “There are all these places, hotels, elder care facilities, hospitals, that have a few hundred robots maybe – but no significant numbers – and we think that's just a huge opportunity.”
A new polymer that could help to absorb man-made emissions from power plants has been announced by the American Chemical Society.
A sponge-like plastic that soaks up the greenhouse gas carbon dioxide (CO2) might ease our transition away from polluting fossil fuels and toward new energy sources, such as hydrogen. The material — a relative of the plastics used in food containers — could play a role in the U.S. government's plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future. A report on the new material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place in San Francisco this week.
“The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well,” says Andrew Cooper, Ph.D. “It’s geared toward function in a real-world environment. In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.”
Adsorbents are most commonly used to remove greenhouse gas pollutants from smokestacks at power plants where fossil fuels are burned. However, Cooper and his team intend this adsorbent — a microporous organic polymer — for a different application. The new material would become part of an emerging technology called integrated gasification combined cycle (IGCC), which can convert fossil fuel into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation, because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated.
Cooper, who is from the University of Liverpool, claims that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, the adsorbent deflates and releases the CO2, which can then be collected for storage or conversion into useful compounds.
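The adsorb-at-high-pressure, release-at-low-pressure behaviour can be sketched with a generic Langmuir isotherm, a standard model of gas uptake on a porous solid. The constants below are purely illustrative and are not Cooper’s measured data:

```python
def langmuir_uptake(pressure_bar, q_max=10.0, k=0.2):
    """CO2 uptake (arbitrary units) at a given pressure, using a generic
    Langmuir isotherm. q_max and k are illustrative, not measured values."""
    return q_max * k * pressure_bar / (1 + k * pressure_bar)

high, low = 30.0, 1.0   # elevated IGCC operating pressure vs release pressure
adsorbed = langmuir_uptake(high)
residual = langmuir_uptake(low)
released = adsorbed - residual  # CO2 recovered for storage per swing

print(round(adsorbed, 2), round(residual, 2), round(released, 2))
```

The working capacity per cycle is the difference between uptake at the two pressures, which is why a material that swells strongly at high pressure and deflates readily at low pressure suits the IGCC process.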
The material — a brown, sand-like powder — is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action.
One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, demonstrating that it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers – whether created from plastics or metals or in liquid form – do not always hold up well, he says. Another benefit of this new adsorbent is its ability to adsorb CO2 without also taking on water vapour, which can clog up other materials and make them less effective. Its low cost, reusability and long lifetime also make the sponge polymer attractive. In his report, Cooper also describes how it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust. Combining the two types of scrubber could make even better adsorbents, by harnessing the strengths of each.
Worldwide, stroke is among the leading causes of death, killing over 6.2 million people each year. A new therapy using stem cells extracted from bone marrow has shown promising results in the first trial of its kind in humans.
Five patients received stem cell treatment in a pilot study conducted by Imperial College Healthcare NHS Trust and scientists at Imperial College London. The therapy was found to be safe, with all patients showing improvements in clinical measures of disability. These findings are published in the journal Stem Cells Translational Medicine. It is the first UK human trial of a stem cell treatment for acute stroke to be published.
The therapy uses a type of cell called CD34+ cells, a set of stem cells in the bone marrow that give rise to blood cells and blood vessel lining cells. Previous research has demonstrated that treatment using these cells can significantly improve recovery from stroke in animals. Rather than developing into brain cells themselves, the cells are thought to release chemicals that trigger the growth of new brain tissue and new blood vessels in the area damaged by stroke.
The patients were treated within seven days of a severe stroke – in contrast to several other stem cell trials, most of which have treated patients after six months or later. The Imperial researchers believe early treatment may improve the chances of a better recovery. A bone marrow sample was taken from each patient. The CD34+ cells were isolated from the sample and then infused into an artery that supplies the brain. No previous trial had selectively used CD34+ cells this soon after a stroke.
Although the trial was mainly designed to assess the safety and tolerability of the treatment, the patients all showed improvements in their condition in clinical tests over a six-month follow-up period. Four out of five patients had the most severe type of stroke: only four per cent of people who experience this kind of stroke are expected to be alive and independent six months later. In the trial, all four of these patients were alive and three were independent after six months.
Dr Soma Banerjee, a lead author and Consultant in Stroke Medicine at Imperial College Healthcare NHS Trust, said: “This study showed that the treatment appears to be safe and that it’s feasible to treat patients early when they might be more likely to benefit. The improvements we saw in these patients are very encouraging, but it’s too early to draw definitive conclusions about the effectiveness of the therapy. We need to do more tests to work out the best dose and timescale for treatment before starting larger trials.”
Worldwide, stroke was the second most frequent cause of death in 2011, accounting for 6.2 million deaths (~11% of the total). The incidence of stroke increases exponentially from 30 years of age. Advanced age is among the most significant risk factors, with two-thirds of strokes occurring in those over the age of 65. However, stroke can occur at any age, including in childhood. Survivors can be affected by a wide range of mental and physical symptoms, and many never recover their independence. Stem cell therapy is seen as an exciting new potential avenue of treatment for stroke, but its exact role is yet to be clearly defined.
Dr Paul Bentley, also a lead author of the study, from the Department of Medicine at Imperial College London: “This is the first trial to isolate stem cells from human bone marrow and inject them directly into the damaged brain area using keyhole techniques. Our group are currently looking at new brain scanning techniques to monitor the effects of cells once they have been injected.”
Professor Nagy Habib, Principal Investigator of the study, from the Department of Surgery and Cancer at Imperial College London: "These are early but exciting data worth pursuing. Scientific evidence from our lab further supports the clinical findings and our aim is to develop a drug, based on the factors secreted by stem cells, that could be stored in the hospital pharmacy so that it is administered to the patient immediately following the diagnosis of stroke in the emergency room. This may diminish the minimum time to therapy and therefore optimise outcome. Now the hard work starts to raise funds for this exciting research.”
Scientists at IBM Research have created a neuromorphic (brain-like) computer chip, featuring 1 million programmable neurons and 256 million programmable synapses.
IBM this week unveiled "TrueNorth" – the most advanced and powerful computer chip of its kind ever built. This neurosynaptic processor is the first to achieve one million individually programmable neurons, sixteen times more than the current largest neuromorphic chip. Designed to mimic the structure of the human brain, it represents a major departure from older computer architectures of the last 70 years. By merging the pattern recognition abilities of neurosynaptic chips with traditional system layouts, researchers aim to create "holistic computing intelligence".
Measured by device count, TrueNorth is the largest IBM chip ever fabricated, with 5.4 billion transistors at 28nm. Yet it consumes under 70 milliwatts while running at biological real time – orders of magnitude less power than a typical modern processor. This amazing feat is made possible because neurosynaptic chips are event driven, as opposed to the "always on" operation of traditional chips. In other words, they function only when needed, resulting in vastly less energy use and a much cooler temperature. It is hoped this combination of ultra-efficient power consumption and entirely new system architecture will allow computers to far more accurately emulate the brain.
TrueNorth is composed of 4,096 cores, with each of these modules integrating memory, computation and communication. The cores are distributed in a parallel, flexible and fault-tolerant grid – able to continue operating when individual cores fail, similar to a biological system. And – like a brain cortex – adjacent TrueNorth chips can be seamlessly tiled and scaled up. To demonstrate this scalability, IBM also revealed a 16-chip motherboard with 16 million programmable neurons: roughly equivalent to a frog brain.
Each of these "neurons" features 256 inputs, whereas the human brain averages 10,000. That may sound like a huge difference – but in the world of computers and technology, progress tends to be exponential. In other words, we could see machines as computationally powerful as a human brain within 10–15 years. The implications are staggering. When sufficiently scaled up, this new generation of "cognitive computers" could transform society, leading to a myriad of applications able to intelligently analyse visual, auditory, and multi-sensory data.
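The headline figures follow directly from the chip’s layout – 4,096 cores each holding 256 neurons, with 256 programmable inputs per neuron – on the common reading that the quoted “millions” are powers of two. A quick arithmetic check:

```python
CORES = 4096
NEURONS_PER_CORE = 256
INPUTS_PER_NEURON = 256   # programmable synaptic inputs per neuron

neurons = CORES * NEURONS_PER_CORE       # 1,048,576: the "1 million" neurons
synapses = neurons * INPUTS_PER_NEURON   # 268,435,456: the "256 million" synapses
board = 16 * neurons                     # 16,777,216: the 16-chip board's neurons

print(neurons, synapses, board)
```

The 16-chip board’s roughly 16.8 million neurons is the figure the article compares to a frog brain.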
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analysing microscopic vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a crisp packet, photographed from 15 feet away through sound-proof glass.
In other experiments, the researchers extracted useful audio signals from videos of aluminium foil, the surface of a glass of water, and even the leaves of a potted plant. Their findings are presented at this year’s SIGGRAPH, the world's largest conference on computer graphics and interactive techniques.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realise that this information was there.”
Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera able to capture 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
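The sampling requirement can be made precise with the Nyquist criterion – standard signal-processing theory, not something specific to this paper: the highest frequency a sample rate can capture directly without aliasing is half that rate.

```python
def max_recoverable_hz(frames_per_second):
    """Nyquist limit: the highest frequency a given sample rate can
    capture directly without aliasing is half that rate."""
    return frames_per_second / 2

# Most speech energy sits below ~4 kHz, so a few thousand frames per
# second covers much of it, while 60 fps directly captures only the
# lowest-frequency components.
for fps in (60, 2000, 6000):
    print(fps, "fps ->", max_recoverable_hz(fps), "Hz")
```

At 2,000–6,000 fps the camera can directly resolve 1,000–3,000 Hz, which is why the high-speed footage yields intelligible speech.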
In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras’ sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn’t as faithful as that from the high-speed camera, it may still be good enough to identify the gender of a speaker in a room, the number of speakers and even – given accurate enough information about the acoustic properties of speakers’ voices – their identities.
The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a new kind of imaging: “We’re recovering sounds from objects. That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”
In their experiments, the researchers have been measuring the material, mechanical, and structural properties of objects based on motions less than a tenth of a micrometre in size. That corresponds to 1/5000th of a pixel in close-up images — but it's possible to infer motions smaller than a pixel by looking at the way a single pixel’s colour value fluctuates over time.
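To see how a single pixel can betray sub-pixel motion: a pixel straddling a dark/bright edge takes an intensity proportional to the fraction covered by the bright side, so a tiny shift of the edge appears as a tiny intensity change. An idealised linear-edge sketch (the paper’s actual processing is considerably more sophisticated):

```python
def pixel_intensity(edge_pos):
    """Intensity of a pixel straddling a dark/bright edge, where
    edge_pos in [0, 1] is the fraction of the pixel that is bright."""
    return edge_pos  # idealised linear edge model

def inferred_shift(i_before, i_after):
    """Recover the edge displacement (in pixel units) from the change
    in a single pixel's intensity."""
    return i_after - i_before

# A motion of 1/5000th of a pixel is invisible on the pixel grid,
# but the intensity fluctuation still encodes it.
shift = inferred_shift(pixel_intensity(0.5), pixel_intensity(0.5 + 1 / 5000))
print(shift)
```

In real footage the same inference must be pulled out of sensor noise by aggregating over many pixels and frames, but the underlying signal is this intensity fluctuation.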
“This is new and refreshing. It’s the kind of stuff that no other group would do right now,” says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “We’re scientists, and sometimes we watch these movies, like James Bond, and we think, ‘This is Hollywood theatrics. It’s not possible to do that. This is ridiculous.’ And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there’s surveillance footage of his potato chip bag vibrating.”
However, technology of this kind may raise concerns over privacy in the future – particularly with ongoing, exponential advances in camera resolution, computing power and sensing abilities. Imagine a miniaturised version, for instance, incorporated into glasses or even bionic eyes. The use of surveillance drones and high-definition CCTV will also increase greatly in the coming years. Looking further ahead, the algorithms could become orders of magnitude more accurate and detailed, possibly combined with X-ray camera vision to peer through walls and other intervening obstacles. Perhaps by then, we will enter a world in which privacy is a thing of the past.
Carbon reduction efforts by airlines will be outweighed by growth in air traffic, even if the most contentious mitigation measures are implemented, according to new research by the University of Southampton.
Even if proposed mitigation measures are agreed upon and put in place, air traffic growth rates are likely to outpace emission reductions, unless demand is substantially reduced.
"There is little doubt that increasing demand for air travel will continue for the foreseeable future," says Professor John Preston, travel expert and study co-author. "As a result, civil aviation is going to become an increasingly significant contributor to greenhouse gas emissions."
The authors of the new study – which is published in the journal Atmospheric Environment – have calculated that the ticket price increase necessary to drive down demand would value CO2 emissions at up to one hundred times the amount of current valuations.
"This would translate to a yearly 1.4 per cent increase on ticket prices, breaking the long-running trend of falling airfares," says co-author and researcher Matt Grote. "The price of domestic tickets has dropped by 1.3 per cent a year between 1979 and 2012, and international fares have fallen by 0.5 per cent per annum between 1990 and 2012."
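Those annual rates compound over the stated spans; a quick check of the cumulative effect, using only the figures quoted above:

```python
def cumulative_change(annual_rate, years):
    """Compound an annual fractional change over a span of years."""
    return (1 + annual_rate) ** years - 1

# Domestic fares: -1.3% a year over 1979-2012 (33 years)
print(round(cumulative_change(-0.013, 33) * 100, 1))   # about -35% overall
# International fares: -0.5% a year over 1990-2012 (22 years)
print(round(cumulative_change(-0.005, 22) * 100, 1))   # about -10% overall
```

A sustained 1.4 per cent annual rise would compound just as relentlessly in the opposite direction, which is why the researchers expect industry resistance.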
However, the research suggests any move to suppress demand would be resisted by the airline industry and national governments. The researchers say a global regulator ‘with teeth’ is urgently needed to enforce CO2 emission reduction measures.
"Some mitigation measures can be left to the aviation sector to resolve," says Professor Ian Williams, Head of the Centre for Environmental Science at the University of Southampton. "For example, the industry will continue to seek improvements to fuel efficiency as this will reduce costs. However, other essential measures, such as securing international agreements, setting action plans, regulations and carbon standards will require political leadership at a global level."
The literature review conducted by the researchers suggests that the UN's International Civil Aviation Organisation (ICAO) "lacks the legal authority to force compliance and therefore is heavily reliant on voluntary cooperation and piecemeal agreements".
Current targets, set at the most recent ICAO Assembly Session last October, include a global average fuel-efficiency improvement of two per cent a year (up to 2050) and holding global net CO2 emissions from international aviation at 2020 levels thereafter. Global market-based measures (MBMs) have yet to be agreed upon, while Boeing predicts the number of aircraft in service will double between 2011 and 2031.
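Boeing's doubling forecast implies an annual fleet growth rate that can be set directly against the two per cent efficiency target. The comparison below is a rough sketch: assuming that emissions scale in proportion to fleet size is our simplification:

```python
# Rough comparison of ICAO's efficiency target with Boeing's fleet
# forecast. Assumes emissions scale with fleet size (a simplification).

# Fleet doubles between 2011 and 2031 -> implied annual growth rate
growth = 2 ** (1 / 20) - 1          # ~3.5% a year

efficiency_gain = 0.02              # ICAO target: 2% less fuel per year

# Net change in annual emissions under these assumptions
net = (1 + growth) * (1 - efficiency_gain) - 1

print(f"Implied fleet growth: {growth * 100:.1f}% a year")
print(f"Net emissions trend:  {net * 100:+.1f}% a year")
```

Even with the efficiency target met in full, this crude model leaves emissions rising by roughly 1.5 per cent a year, which is the core of the researchers' argument that demand-side measures would also be needed.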
Scientists at the Luxembourg Centre for Systems Biomedicine (LCSB) have grafted neurons reprogrammed from skin cells into the brains of mice for the first time with long-term stability. Six months after implantation, the neurons had become fully functional and integrated into the brain. This successful demonstration of lastingly stable neuron implantation raises hope for future therapies in humans that could replace sick neurons with healthy ones in the brains of Parkinson’s disease patients, for example.
The LCSB research group led by Prof. Jens Schwamborn and Kathrin Hemmer is working continuously to bring cell replacement therapy to maturity as a treatment for neurodegenerative diseases. The path towards successful therapy in humans, however, is long. "Successes in human therapy are still a long way off, but I am sure successful cell replacement therapies will exist in future. Our research results have taken us a step further in this direction," says Prof. Schwamborn.
Credit: Luxembourg Centre for Systems Biomedicine (LCSB)
In their latest tests, the research group succeeded in creating stable nerve tissue in the brain from neurons that had been reprogrammed from skin cells. The stem cell researchers’ technique of producing neurons – or more specifically, induced neuronal stem cells (iNSC) – in a petri dish from the host’s own skin cells greatly improves the compatibility of the implanted cells. The treated mice showed no adverse side effects, even six months after implantation into the hippocampus and cortex regions of the brain. In fact it was quite the opposite – the implanted neurons were fully integrated into the complex network of the brain. The neurons exhibited normal activity and were connected to the original brain cells via newly formed synapses, the contact points between nerve cells.
These tests demonstrate that the scientists are continually gaining a better understanding of how to treat such cells in order to successfully replace damaged or dead tissue. “Building upon the current insights, we will now be looking specifically at the type of neurons that die off in the brain of Parkinson’s patients – namely the dopamine-producing neurons,” Schwamborn reports.
In the future, implanted neurons could produce the lacking dopamine directly in the patient's brain and transport it to the appropriate sites. This could result in an actual cure – something that has so far been impossible. The researchers have published their results in the current issue of Stem Cell Reports.
As we reported last month, the European Space Agency's Rosetta probe has been nearing its destination: the icy comet 67P/Churyumov-Gerasimenko. After a journey of ten years, five months and four days – covering a distance of 4 billion miles (6.4 billion km) – it has finally arrived in orbit. The journey involved looping around the Sun five times, followed by a series of ten rendezvous manoeuvres that began in May to adjust its speed and trajectory to gradually match those of the comet, which is rushing towards the inner Solar System at nearly 34,000 mph (55,000 km/h). If any of those manoeuvres had failed, the mission would have been lost, and the spacecraft would simply have flown by the comet.
"Today's achievement is a result of a huge international endeavour spanning several decades," says Alvaro Giménez, ESA's Director of Science and Robotic Exploration. "We have come an extraordinarily long way since the mission concept was first discussed in the late 1970s and approved in 1993, and now we are ready to open a treasure chest of scientific discovery that is destined to rewrite the textbooks on comets for even more decades to come."
Rosetta will now perform a detailed study of the comet, identifying a target site for the Philae robotic lander. As many as five possible landing sites will be identified by late August, before the primary site is selected in mid-September. The final timeline for the sequence of events for deploying Philae – currently expected for 11th November – will be confirmed by the middle of October. After landing, Rosetta will continue to accompany the comet until its closest approach to the Sun in August 2015 and beyond, watching its behaviour from close quarters to provide a unique insight and real-time experience of how a comet works as it hurtles around the Sun. This could reveal new clues to the origins of the Solar System, our home planet and life itself.
All images credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA
Eating five daily portions of fruit and vegetables is associated with a lower risk of death from any cause, particularly from cardiovascular disease – but beyond five portions appears to have no further effect, finds a new study.
These results conflict with a recent study published in BMJ's Journal of Epidemiology and Community Health, which suggested that seven or more daily portions of fruit and vegetables were linked to the lowest risk of death.
There is growing evidence that increasing fruit and vegetable consumption is related to a lower risk of death from cardiovascular disease and cancer. However, the results are not entirely consistent. So a team of researchers based in China and the United States decided to examine the association between fruit and vegetable intake and risk of all-cause, cardiovascular, and cancer deaths.
They analysed the results of sixteen studies involving a total of 833,000 participants and 56,000 deaths. Differences in study design and quality were taken into account to minimise bias. Higher consumption of fruit and vegetables was significantly associated with a lower risk of death from all causes, particularly from cardiovascular diseases.
Average risk of death from all causes was reduced by 5 per cent for each additional daily serving of fruit and vegetables, while risk of cardiovascular death was reduced by 4 per cent per serving. But the researchers identified a threshold of around five servings per day, after which the risk of death did not reduce further.
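The shape of this dose–response relationship can be sketched with a toy model. Treating the per-serving reductions as multiplicative relative risks, capped at the five-serving threshold, is our modelling assumption for illustration only; the study itself reports pooled per-serving estimates:

```python
# Illustrative dose-response sketch: 5% lower all-cause risk per daily
# serving, flattening after five servings. Treating the reductions as
# multiplicative relative risks is our modelling assumption.

def relative_risk(servings, per_serving_reduction=0.05, threshold=5):
    """Relative risk of death versus zero servings, with a plateau."""
    effective = min(servings, threshold)
    return (1 - per_serving_reduction) ** effective

for s in range(0, 8):
    print(f"{s} servings/day: relative risk {relative_risk(s):.2f}")
```

Under this toy model, five servings a day corresponds to a relative risk of about 0.77, and a sixth or seventh serving leaves the figure unchanged – matching the plateau the researchers describe.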
In contrast, a higher consumption of fruit and vegetables was not appreciably associated with risk of death from cancer. The researchers suggest that — as well as advice to eat adequate amounts of fruit and vegetables — the adverse effects of obesity, physical inactivity, smoking and high alcohol intake on cancer risk should be further emphasised.
Although a threshold of five servings was identified, the team reiterates the importance of regular fruit and vegetable intake, concluding that their study "provides further evidence that a higher consumption of fruits and vegetables is associated with a lower risk of mortality from all causes, particularly from cardiovascular diseases. The results support current recommendations to increase consumption of fruits and vegetables to promote health and longevity."
Tesla has reached an agreement with Panasonic to build a $5 billion "Gigafactory". This will produce more batteries than all other lithium-ion battery factories in the world combined, slashing costs by nearly one-third and boosting the adoption of electric vehicles.
Tesla Motors and Panasonic had been in talks for several months over a massive new factory to produce electric car batteries. This week, they signed an agreement to build the $5 billion facility. Dubbed the "Gigafactory," its location is still unknown – but sites are being evaluated in Arizona, California, Nevada, New Mexico and Texas. Tesla will be responsible for the land, buildings and utilities, while Panasonic will handle the equipment, manufacturing and supply side, based on their mutual approval.
Groundbreaking is planned for later this year, and the first batteries are expected to roll off the assembly line in 2017. By 2020, the factory is expected to produce enough battery cells for 500,000 cars each year – 35 GWh worth of cells and 50 GWh worth of packs. These will be used to power Tesla's Model S and Model X cars, along with the cheaper Model 3 sedan being introduced in 2017. The Model 3 is expected to cost around $35,000 – half the price of a Model S.
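The quoted gigawatt-hour figures imply roughly how much battery capacity each vehicle gets. The back-of-the-envelope check below assumes output is spread evenly across 500,000 cars a year (Tesla's stated vehicle target); the averaging is our simplification:

```python
# Back-of-the-envelope check on the Gigafactory figures quoted above.
# Assumes output is spread evenly across 500,000 vehicles a year.

CARS_PER_YEAR = 500_000
CELL_OUTPUT_GWH = 35   # cell production per year by 2020
PACK_OUTPUT_GWH = 50   # pack production per year by 2020

cells_kwh_per_car = CELL_OUTPUT_GWH * 1e6 / CARS_PER_YEAR   # 70 kWh
packs_kwh_per_car = PACK_OUTPUT_GWH * 1e6 / CARS_PER_YEAR   # 100 kWh

print(f"Cells per car: {cells_kwh_per_car:.0f} kWh")
print(f"Packs per car: {packs_kwh_per_car:.0f} kWh")
```

The roughly 70 kWh of cells per vehicle is in line with the 60–85 kWh packs Tesla was shipping in the Model S at the time; the higher pack figure suggests some cells would be sourced from outside the factory.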
According to the press release, cost reductions at the Gigafactory will be driven by economies of scale previously impossible in battery cell production. Further savings will come from manufacturing cells optimised for electric vehicle design – both in size and function – from co-locating suppliers on-site to eliminate packaging, transportation, duty and inventory carrying costs, and from manufacturing at a location with lower utility and operating expenses. As shown in the rendering above, solar panels and wind turbines at the site will help power the facility.
Tesla co-founder and CEO, Elon Musk, says there will eventually be a need for "several more" of these Gigafactories. Other efforts by Tesla to boost electric cars have included its revolutionary Supercharger network, offering free high-speed charging in under an hour. There are now more than 100 of these stations operating in the United States, with many more planned – covering 98 per cent of the US population by the end of 2015. Networks are also being established in Europe and Asia. The company opened up its patents in June this year to encourage the spread of its technology. Future historians will surely look back on Elon Musk favourably.
A team in Denmark has broken the world record for single fibre data transmission, achieving a transfer rate of 43 terabits per second over a distance of 41 miles (67 km). They also report a speed of 1 petabit per second (1,000 terabits per second) when combining multiple lasers.
In 2009, a research group at the Technical University of Denmark (DTU) was the first to break the 1 terabit barrier for data transfer. Their record was shattered in 2011, when the Karlsruhe Institute of Technology in Germany achieved 26 terabits per second. Now, DTU have regained the title, demonstrating 43 terabits per second (Tbps) through a single optical fibre. This is fast enough to download a 1GB file in about 0.0002 seconds – or the entire contents of a 1TB hard drive in 0.2 seconds.
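The download times quoted above follow directly from the bit rate. The sketch below reproduces them, assuming decimal units (1 GB = 10⁹ bytes), as the article appears to use:

```python
# Transfer times at 43 Tbps, reproducing the figures quoted above.
# Uses decimal units (1 GB = 1e9 bytes, 1 TB = 1e12 bytes).

BITS_PER_SECOND = 43e12  # 43 terabits per second

def seconds_to_transfer(size_bytes):
    """Time to move `size_bytes` at the record single-fibre rate."""
    return size_bytes * 8 / BITS_PER_SECOND

one_gb = seconds_to_transfer(1e9)    # ~0.0002 s
one_tb = seconds_to_transfer(1e12)   # ~0.2 s

print(f"1 GB file: {one_gb:.4f} s")
print(f"1 TB drive: {one_tb:.2f} s")
```

In practice, end-to-end downloads would be slower – the record is a raw link rate, before protocol overheads and the rest of the network path – but the arithmetic shows where the headline figures come from.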
The Danish team's effort may seem almost excessive, to the point of comedy. However, current trends show that insanely fast transfer speeds like this will be necessary in the relatively near future. Like a digital explosion, the Internet continues to expand and grow exponentially – doubling in size every two years. Improvements in video quality and image resolution mean the amount of data appearing online is mushrooming to enormous proportions, while at the same time, billions more people are gaining access to the web.
All of this requires energy – the Internet currently accounts for about two per cent of global CO2 emissions. It is therefore essential to find solutions that significantly reduce the Internet's power consumption while simultaneously expanding its bandwidth.
DTU's researchers achieved their latest record by using a new type of optical fibre borrowed from the Japanese telecoms giant NTT. This type of fibre contains seven cores (glass threads) instead of the single core used in standard fibres, making it possible to transfer far more data. Despite having seven cores, the new fibre takes up no more space than the standard version.
As to when speeds in the tens of terabits range might be affordable to mainstream consumers, we reckon sometime in the 2030s.
NASA has announced the payload for its Mars 2020 rover mission, an upgraded version of the Curiosity rover currently exploring the Red Planet.
The next rover NASA will send to Mars in 2020 will carry seven instruments for unprecedented science and exploratory investigations. The agency confirmed the selected payload yesterday at its headquarters in Washington. Managers made their selections from 58 proposals received in January from researchers and engineers worldwide – twice the usual number submitted for recent instrument competitions, and an indication of the extraordinary interest by the science community in the future exploration of Mars. The selected proposals have a total value of approximately $130 million for research and development.
The Mars 2020 mission will be based on the design of the highly successful Mars Science Laboratory rover – Curiosity – which landed in 2012 and is currently operating on Mars. The new rover will carry more sophisticated, upgraded hardware and new instruments to conduct geological assessments of the rover's landing site, determine the potential habitability of the environment, and directly search for signs of ancient Martian life. It will identify and store a collection of 30 rock and soil samples for return to Earth by a later mission. The rover will feature a new set of wheels, tougher and more durable than its predecessor's, potentially extending the mission lifespan.
"The Mars 2020 rover, with these new advanced scientific instruments – including those from our international partners – holds the promise to unlock more mysteries of Mars' past as revealed in the geological record," said John Grunsfeld, a former astronaut, and associate administrator of NASA's Science Mission Directorate in Washington. "This mission will further our search for life in the universe and also offer opportunities to advance new capabilities in exploration technology."
The Mars 2020 rover will also help to advance our knowledge of how future human explorers could use natural resources available on the surface of the Red Planet. An ability to live off the Martian land would transform future exploration of the planet. Designers of manned expeditions can use this mission to understand the hazards posed by Martian dust and demonstrate technology to process carbon dioxide from the atmosphere to produce oxygen. These experiments will help engineers learn how to use Martian resources to produce oxygen for human respiration and potentially for use as an oxidiser for rocket fuel.
"The 2020 rover will help answer questions about the Martian environment that astronauts will face and test technologies they need before landing on, exploring and returning from the Red Planet," said William Gerstenmaier, associate administrator for the Human Exploration and Operations Mission Directorate at NASA Headquarters in Washington. "Mars has resources needed to help sustain life, which can reduce the amount of supplies that human missions will need to carry. Better understanding the Martian dust and weather will be valuable data for planning human Mars missions. Testing ways to extract these resources and understand the environment will help make the pioneering of Mars feasible."
The selected payload instruments are:
Mastcam-Z, an advanced hi-res camera system with panoramic, stereoscopic and zoom ability.
SuperCam, an instrument that can provide imaging, chemical composition analysis, and mineralogy. The instrument will also be able to detect the presence of organic compounds in rocks and regolith from a distance.
Planetary Instrument for X-ray Lithochemistry (PIXL), an X-ray fluorescence spectrometer that will also contain an imager with high resolution to determine the fine scale elemental composition of Martian surface materials. PIXL will provide capabilities that permit more detailed detection and analysis of chemical elements than ever before.
Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals (SHERLOC), a spectrometer that will provide fine-scale imaging and uses an ultraviolet (UV) laser to determine fine-scale mineralogy and detect organic compounds. SHERLOC will be the first UV Raman spectrometer to fly to the surface of Mars and will provide complementary measurements with other instruments in the payload.
The Mars Oxygen ISRU Experiment (MOXIE), a device that will produce oxygen from Martian atmospheric CO2, demonstrating a technology of critical importance in future manned exploration.
Mars Environmental Dynamics Analyzer (MEDA), a set of sensors that will provide measurements of temperature, wind speed and direction, pressure, relative humidity and dust size and shape.
Radar Imager for Mars' Subsurface Exploration (RIMFAX), a ground-penetrating radar that will provide centimetre-scale resolution of the geologic structure of the subsurface.
This announcement comes in the same week that Opportunity – the rover that landed in 2004 – set a new "off-world" record for the greatest distance driven, having travelled more than 25 miles (40 kilometres). It surpasses the previous record held by the Soviet Union's Lunokhod 2 rover, which travelled 24 miles (39 kilometres).