
28th July 2015

Autonomous weapons: an open letter from AI and robotics researchers

The International Joint Conference on Artificial Intelligence (IJCAI) is currently taking place in Buenos Aires, Argentina. Today an open letter was officially announced at the conference, warning against the dangers of killer robots and a military AI arms race.

 

© Kgermolaev | Dreamstime.com

 

The letter is signed by over a thousand experts in the AI field. In addition to these researchers, the signatories include high-profile names such as Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, Professor Stephen Hawking, philosopher and cognitive scientist Daniel Dennett, and Noam Chomsky, who was voted the world's top public intellectual in a 2005 poll. It reads as follows:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

 

The full list of signatories is available at http://futureoflife.org/AI/open_letter_autonomous_weapons.

This letter follows growing concern in recent years over the use of robots and AI. In 2013, a survey by the University of Massachusetts Amherst showed that a majority of Americans, across the political spectrum, opposed the outsourcing of lethal military and defence targeting decisions to machines.

Last year, a report by Human Rights Watch warned that fully autonomous weapons, or "killer robots," would jeopardise basic human rights, whether used in wartime or for law enforcement. Human Rights Watch is a founding member and coordinator of the Campaign to Stop Killer Robots – a coalition of 51 nongovernmental organisations calling for a preemptive ban on the development, production, and use of fully autonomous weapons.

 

 

 

 

27th July 2015

China to lead the world's economy by 2026

China is set to edge ahead of the US in just over a decade, while India is expected to move up the rankings to third place – pushing Japan out of the world's top three economies.

 


 

China is expected to overtake the US in 2026 in nominal GDP terms to become the world's largest economy, and will maintain this position until at least 2050 according to The Economist Intelligence Unit (EIU).

In a new report, Long-term macroeconomic forecasts: Key Trends to 2050, which extends the EIU's economic forecast for 82 countries up to 2050, emerging markets are expected to grow faster than developed economies, and as a result countries such as China and India are likely to overtake current global leaders such as Japan and Western Europe.

The report finds that:

• China is expected to narrowly edge ahead of the US for the first time in 2026, with a nominal GDP of US$28.6trn versus the US's US$28.3trn.
• By 2050, China will boast a GDP of US$105.9trn, compared with the US's US$70.9trn.
• The UK will fall out of the world's top 5 economies by 2026.
• Indonesia and Mexico will rank among the top ten economies at market exchange rates by 2050, overtaking economies such as Italy and Russia.
• Asia will continue its rise and by 2050 will represent 53% of global GDP, up from 32% in 2014. (A quick check of the growth rates implied by these figures appears below.)
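As a rough sanity check on these projections (not part of the EIU report), the average annual growth rates implied by the 2026 and 2050 figures can be worked out in a few lines:

```python
# Back-of-the-envelope check of the growth rates implied by the EIU figures
# quoted above (nominal GDP in trillions of US dollars for 2026 and 2050).
gdp = {
    "China": (28.6, 105.9),
    "US":    (28.3, 70.9),
}
years = 2050 - 2026

for country, (start, end) in gdp.items():
    cagr = (end / start) ** (1 / years) - 1
    print(f"{country}: implied average nominal growth of {cagr:.1%} per year")

# Roughly 5.6% a year for China versus about 3.9% for the US over 2026-2050.
```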

 


 

Yet in terms of individual spending power, today's advanced economies are likely to continue to dominate. Consumer spending in emerging economies such as China, India and Indonesia is projected to rise significantly by 2050, but their consumers will at best have around 50% of the individual spending power of an American consumer. Despite their low growth outlook, advanced economies cannot be ignored, as the spending power of consumers in these regions will remain significantly higher.

Patricia Morton, Lead Economist at The Economist Intelligence Unit, comments: "Given China's and India's economic might, they will take on a much bigger role in addressing global issues such as climate change, international security and global economic governance. In the medium term, this will require the world's existing powers – notably the US – to let India, and especially China, play a greater role on the world stage and adapt international institutions to allow them to exert greater influence."

 

 

 

 

24th July 2015

Deep Genomics creates deep learning technology to transform genomic medicine

Deep Genomics, a new technology start-up, was launched this week. The company aims to use deep learning and artificial intelligence to accelerate our understanding of the human genome.

 

Credit: Hui Y. Xiong et al./Science

 

Evolution has altered the human genome over hundreds of thousands of years – and now humans can do it in a matter of months. Faster than anyone expected, scientists have discovered how to read and write DNA code in a living body, using hand-held genome sequencers and gene-editing systems. But knowing how to write is different from knowing what to write. To diagnose and treat genetic diseases, scientists must predict the biological consequences of both existing mutations and those they plan to introduce.

Deep Genomics, a start-up company spun out of research at the University of Toronto, is on a mission to predict the consequences of genomic changes by developing new deep learning technologies.

“Our vision is to change the course of genomic medicine,” says Brendan Frey, the company’s president and CEO, who is also a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR). “We’re inventing a new generation of deep learning technologies that can tell us what will happen within a cell when DNA is altered by natural mutations, therapies or even by deliberate gene editing.”

Deep Genomics is the only company to combine more than a decade of world-leading expertise in both deep learning and genome biology. “Companies like Google, Facebook and DeepMind have used deep learning to hugely improve image search, speech recognition and text processing. We’re doing something very different. The mission of Deep Genomics is to save lives and improve health,” says Frey. CIFAR Senior Fellow Yann LeCun, the head of Facebook’s Artificial Intelligence lab, is also an advisor to the company.

"Our company, Deep Genomics, will change the course of genomic medicine. CIFAR played a crucial role in establishing the research network that led to our breakthroughs in deep learning and genomic medicine," Frey says.

Deep Genomics is now releasing its first product, called SPIDEX, which provides information about how hundreds of millions of DNA mutations may alter splicing in the cell, a process that is crucial for normal development. Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described earlier this year in the journal Science.

“The genome contains a catalogue of genetic variation that is our DNA blueprint for health and disease,” says CIFAR Senior Fellow Stephen Scherer, director of the Centre for Applied Genomics at SickKids and the McLaughlin Centre at the University of Toronto, and an advisor to Deep Genomics. “Brendan has put together a fantastic team of experts in artificial intelligence and genome biology – if anybody can decode this blueprint and harness it to take us into a new era of genomic medicine, they can.”

Until now, geneticists have spent decades experimentally identifying and examining mutations within specific genes that can be clearly connected to disease, such as the BRCA-1 and BRCA-2 genes for breast cancer. However, the number of mutations that could lead to disease is vast and most have not been observed before, let alone studied.

These mystery mutations pose an enormous challenge for current genomic diagnosis. Labs send the mutations they’ve collected to Deep Genomics, and the company uses its proprietary deep learning system, which includes SPIDEX, to ‘read’ the genome and assess how likely the mutation is to cause a problem. It can also connect the dots between a variant of unknown significance and a variant that has been linked to disease. “Faced with a new mutation that’s never been seen before, our system can determine whether it impacts cellular biochemistry in the same way as some other highly dangerous mutation,” says Frey.
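To illustrate the general idea of that last step – and only the idea, since none of the names, scores or thresholds below come from Deep Genomics or SPIDEX – a hypothetical triage might compare a new variant's predicted splicing change with those of variants already linked to disease:

```python
# Hypothetical illustration only: scores are invented and do not come from
# SPIDEX. A variant of unknown significance is matched against variants whose
# predicted splicing effects (and disease links) are already known.
known_variants = {
    "variant_A (linked to disease)": -0.42,   # predicted change in splicing
    "variant_B (benign)":            -0.03,
}

def similar_known_variants(new_score, known, tolerance=0.1):
    """Return known variants whose predicted splicing effect is close to the
    new variant's, i.e. 'connecting the dots' as described above."""
    return [name for name, score in known.items()
            if abs(score - new_score) <= tolerance]

print(similar_known_variants(-0.38, known_variants))
# ['variant_A (linked to disease)']
```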

Deep Genomics is committed to supporting publicly funded efforts to improve human health. “Soon after our Science paper was published, medical researchers, diagnosticians and genome biologists asked us to create a database to support academic research,” says Frey. “The first thing we’re doing with the company is releasing this database – that’s very important to us.”

“Soon, you’ll be able to have your genome sequenced cheaply and easily with a device that plugs into your laptop. The technology already exists,” explains Frey. “When genomic data is easily accessible to everyone, the big questions are going to be about interpreting the data and providing people with smart options. That’s where we come in.”

Deep Genomics envisions a future where computers are trusted to predict the outcome of experiments and treatments, long before anyone picks up a test tube. To realise that vision, the company plans to grow its team of data scientists and computational biologists. Deep Genomics will continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA, from transcription and splicing to polyadenylation and translation. Building a thorough understanding of these processes has massive implications for genetic testing, pharmaceutical research and development, personalised medicine and improving human longevity.

 

 

 

 

24th July 2015

New computer program is first to recognise sketches more accurately than a human

Researchers from Queen Mary University of London (QMUL) have built the first computer program that can recognise hand-drawn sketches better than humans.

 


 

Known as Sketch-a-Net, the program is capable of correctly identifying the subject of sketches 74.9 per cent of the time, compared with a success rate of only 73.1 per cent for humans. As sketching becomes more relevant with the increasing use of touchscreens, this development could provide a foundation for new ways to interact with computers.

Touchscreens could understand what you are drawing – enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories. This improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research, which was accepted at the British Machine Vision Conference, also showed that the program performed better at determining finer details in sketches. For example, it was able to distinguish the specific bird variants ‘seagull’, ‘flying-bird’, ‘standing-bird’ and ‘pigeon’ with 42.5 per cent accuracy, compared with humans, who achieved only 24.8 per cent.

 


 

Sketches are very intuitive to humans and have been used as a communication tool for thousands of years, but recognising free-hand sketches is challenging because they are abstract, varied and consist of black and white lines rather than coloured pixels like a photo. Solving sketch recognition will lead to a greater scientific understanding of visual perception.

Sketch-a-Net is a ‘deep neural network’ – a type of computer program designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, particularly the order in which the strokes were drawn. This information was previously ignored, but is especially important for understanding drawings on touchscreens.
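To make the stroke-order idea more concrete, here is a minimal sketch in PyTorch of a network whose input channels encode when strokes were drawn (for example, earliest strokes in channel 0, latest in channel 2). It is purely illustrative – the layer sizes, input resolution and class count are assumptions, not the architecture reported in the paper:

```python
# Minimal illustrative sketch, NOT the Sketch-a-Net architecture from the
# paper. Input channels separate strokes by drawing order (early/middle/late).
import torch
import torch.nn as nn

class TinySketchNet(nn.Module):
    def __init__(self, num_classes=250, order_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(order_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (batch, order_channels, 225, 225)
        return self.classifier(self.features(x).flatten(1))

model = TinySketchNet()
dummy = torch.zeros(1, 3, 225, 225)       # one blank sketch, 3 stroke-order channels
print(model(dummy).shape)                  # torch.Size([1, 250])
```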

 


 

Timothy Hospedales, co-author of the study and Lecturer in the School of Electronic Engineering and Computer Science, QMUL, said: “It’s exciting that our computer program can solve the task even better than humans can. Sketches are an interesting area to study because they have been used since pre-historic times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again. This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”

The paper, 'Sketch-a-Net that Beats Humans' by Q. Yu, Y. Yang, Y. Song, T. Xiang and T. Hospedales, will be presented at the 26th British Machine Vision Conference on Tuesday 8th September 2015.

 


 

 

 

 

23rd July 2015

NASA finds the most Earth-like planet yet

NASA has announced the discovery of Kepler-452b, an exoplanet that is near-Earth-size and orbiting the habitable zone of a Sun-like star.

 

This artist's impression compares Earth (left) to the new planet, Kepler-452b, which is about 60 percent larger in diameter (right). Credits: NASA/JPL-Caltech/T. Pyle

 

NASA's Kepler mission has confirmed the first near-Earth-size planet in the “habitable zone” around a sun-like star. This discovery and the introduction of 11 other new small habitable zone candidate planets mark another milestone on the journey to finding “Earth 2.0.” 

The newly discovered Kepler-452b is the smallest planet to date found inside the habitable zone of a G-type star, like our sun. Today's confirmation of Kepler-452b brings the total number of confirmed exoplanets to 1,030.

 


 

“On the 20th anniversary year of the discovery that proved other suns host planets, the Kepler exoplanet explorer has discovered a planet and star which most closely resemble the Earth and our Sun,” said John Grunsfeld, associate administrator of NASA’s Science Mission Directorate at the agency’s headquarters in Washington. “This exciting result brings us one step closer to finding an Earth 2.0.”

Kepler-452b is 60 percent larger in diameter than Earth and is considered a super-Earth-size planet. While its mass and composition are not yet determined, previous research suggests that planets of this size have a good chance of being rocky.

While Kepler-452b is somewhat larger than Earth, its orbit is remarkably similar, being only 5 percent longer at 385 days. The planet is just 5 percent farther from its parent star than Earth is from our Sun. The star Kepler-452 has the same temperature as our own sun, is around 20 percent brighter, and has a diameter 10 percent larger. Its age is estimated at 6 billion years, which is 1.5 billion years older than our sun.
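Those figures allow a quick, back-of-the-envelope estimate (not from the Kepler team) of how much starlight the planet receives compared with Earth, using the inverse-square law:

```python
# Rough inverse-square estimate from the quoted figures: the star is ~20%
# brighter than the Sun and the planet orbits ~5% farther out than Earth.
luminosity_ratio = 1.20
distance_ratio = 1.05

flux_ratio = luminosity_ratio / distance_ratio ** 2
print(f"Incident stellar flux: ~{flux_ratio:.2f}x what Earth receives")
# ~1.09x, i.e. roughly 10% more energy than Earth gets from the Sun.
```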

“We can think of Kepler-452b as an older, bigger cousin to Earth, providing an opportunity to understand and reflect upon Earth’s evolving environment,” said Jon Jenkins, data analysis lead at NASA's Ames Research Centre, California. “It’s awe-inspiring to consider that this planet has spent 6 billion years in the habitable zone of its star; longer than Earth. That’s substantial opportunity for life to arise, should all the necessary ingredients and conditions for life exist on this planet.”

 


 

In addition to confirming Kepler-452b, the team has increased the number of new exoplanet candidates by 521 from their analysis of observations conducted from May 2009 to May 2013, raising the number of planet candidates detected by the Kepler mission to 4,696. Candidates require follow-up observations and analysis to verify they are actual planets.

Twelve of the new planet candidates have diameters between one and two times that of Earth, and orbit in their star's habitable zone. Of these, nine orbit stars that are similar to our sun in size and temperature.

 


 

“We've been able to fully automate our process of identifying planet candidates, which means we can finally assess every transit signal in the entire Kepler dataset quickly and uniformly,” said Jeff Coughlin, Kepler scientist at the SETI Institute in Mountain View, California, who led the analysis of a new candidate catalogue. “This gives astronomers a statistically sound population of planet candidates to accurately determine the number of small, possibly rocky planets like Earth in our Milky Way galaxy.”

The Kepler-452 system is located 1,400 light-years away in the constellation Cygnus. At the speed of New Horizons, it would take about 25.8 million years to get there. A research paper reporting NASA's findings has been accepted for publication in The Astronomical Journal.
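That travel-time figure is easy to verify. Taking New Horizons' cruise speed to be roughly 16 km/s (an approximate value, not quoted above):

```python
# Quick check of the 25.8-million-year figure. The ~16.26 km/s speed for
# New Horizons is an approximate, assumed value.
LIGHT_YEAR_KM = 9.4607e12
distance_km = 1400 * LIGHT_YEAR_KM
speed_km_per_s = 16.26
seconds_per_year = 3.156e7

years = distance_km / speed_km_per_s / seconds_per_year
print(f"~{years / 1e6:.1f} million years")   # ~25.8 million years
```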

 

 

 

 

23rd July 2015

World's first bionic eye implant for a patient with macular degeneration

U.S. firm Second Sight has announced that the first age-related macular degeneration patient has received its Argus II bionic eye at Manchester Royal Eye Hospital in the UK, as part of a ground-breaking study.

 


 

Second Sight – a developer of visual prosthetics – yesterday announced the first implant and successful activation of the Argus II Retinal Prosthesis System (Argus II) in a dry age-related macular degeneration (AMD) patient. Ray Flynn, 80, who has total loss of his central vision, can now make out the direction of white lines on a computer screen using the retinal implant. In an interview with the BBC, Mr Flynn said he was “delighted” with the implant and hoped in time it would improve his vision sufficiently to help him with day-to-day tasks like gardening and shopping.

The implant is part of a feasibility study aiming to evaluate the safety and utility of the Argus II System in individuals with late-stage Dry AMD, a condition that severely affects central vision. The implant was performed at the Manchester Royal Eye Hospital in the United Kingdom by Dr. Paulo Stanga MD, Consultant Ophthalmologist & Vitreoretinal Surgeon. The device was activated approximately two weeks after implantation, and initial reports confirm that Flynn is receiving some useful vision. The Argus II has already been tested and approved in the United States and Europe for individuals with Retinitis Pigmentosa (RP) and Outer Retinal Degeneration, respectively.

“The difference between RP and Dry AMD is that RP primarily affects the peripheral vision, whereas AMD primarily affects the central vision. Retinal implants for individuals with AMD may restore some useful vision in their central visual field, which is non-functional due to degeneration of the photoreceptors. The goal in restoring this central vision is to provide individuals with AMD more natural vision and ultimately improve their independence and quality of life," says Dr. Stanga. “This is totally ground-breaking research, where positive results from the study could provide advanced Dry AMD patients with a new alternative treatment.”

The Argus II works by using a video camera mounted on sunglasses worn by the patient. This transmits images to a chip inside the eye, which relays the signals to an array of 60 electrodes (in a 6 × 10 grid). The electrodes emit small pulses of electricity that stimulate the retina's remaining cells, generating neural impulses that are sent to the brain and interpreted as vision, restoring the ability to discern light, movement and shapes.
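To get a feel for how much visual detail a 60-electrode array can convey, the snippet below reduces a camera frame to one brightness value per electrode. This is a simplified illustration of the general principle, not Second Sight's actual video processing:

```python
# Illustrative only: average-pool a grayscale camera frame down to a 6 x 10
# grid, one value per electrode. Frame size and mapping are assumptions.
import numpy as np

def frame_to_electrode_grid(frame, rows=6, cols=10):
    """Average a 2-D image (values 0-255) into rows x cols blocks."""
    h, w = frame.shape
    cropped = frame[:h - h % rows, :w - w % cols]      # crop to a multiple of the grid
    blocks = cropped.reshape(rows, cropped.shape[0] // rows,
                             cols, cropped.shape[1] // cols)
    return blocks.mean(axis=(1, 3)) / 255.0            # normalised stimulation levels

frame = np.random.randint(0, 256, size=(480, 640))     # stand-in for a camera frame
print(frame_to_electrode_grid(frame).shape)            # (6, 10)
```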

 


 

Eligibility for this study includes patients 25 to 85 years of age with advanced dry AMD, some residual light perception, and a previous history of useful form vision. Study subjects will be followed for three years to evaluate safety and utility of the Argus II system on visual function. Pending positive study results, the company plans to conduct a larger study to support market approvals. It is estimated that two million individuals worldwide are legally blind due to AMD and 375,000 people are blinded by RP.

Second Sight Chief Executive Officer, Dr. Robert Greenberg, comments: “We are very excited to begin such an important study for this patient population and to have the opportunity to help a great deal more people living with blindness. Though it is obviously still early in this clinical trial, we are very encouraged by these initial results.”

The launch of this study is another step toward Second Sight’s mission to enable blind people to achieve greater independence. Earlier this year, the first Orion I Visual Cortical Prostheses were implanted in animals to evaluate fit, form, stability, and biocompatibility. Human trials for the Orion I are planned to commence by Q1 2017. If successful, the Orion I has the potential to address nearly all forms of blindness.

 

 

 

 

23rd July 2015

A modular vertical city concept

The Luca Curci architects studio has presented its "Vertical City" concept, a project proposal for a modular city-building set in the water.

 


 

The Italian architecture studio Luca Curci has presented "Vertical City" – a project proposal for a vertical city-building set in the water. The project combines sustainability with population density and aims to be a "zero-energy city-building".

The architects explain how they analysed the contemporary skyscraper and re-interpreted it as an open structure, with green areas on each level and more natural light and ventilation. This new interpretation would allow residents to enjoy a healthier lifestyle, in closer contact with natural elements, while strengthening the local community.

The building's design is based on a modular, prefabricated structural element, which is repeatable both horizontally and vertically. The singular shape of this element creates a 3-D network which sustains every single floor. The structure is surrounded by a membrane of photovoltaic glass, which provides electricity to the whole building and makes it energy independent, with any excess solar energy able to be exported to the mainland.

The city-building is completely perforated to permit the circulation of air and light on each level, hosting green areas and vertical gardens. Green zones are spread all over the tower, while meeting and social areas can enhance community life.

The city-building consists of 10 overlapping modular layers and reaches a height of 750 metres (2,460 ft). With a total volume of 3.8 million cubic metres, it can host up to 25,000 people, with green areas encompassing 200,000 square metres, including the public garden square at the top of the building. Each modular and repeatable layer has a diameter of 155 metres (508 ft) and contains 18 floors, with a mixture of homes, commercial services and other facilities for a large community. Residences have different sizes and shapes on each floor, and they include apartments, duplexes and villas.

The building is anchored to the seabed, with a series of underwater floors that host parking and technical areas, facilities such as spas, meditation centres and gyms, and luxury hotel rooms with underwater views.

It is possible to reach the Vertical City by water, by land or by air. The circular basement is equipped with external and internal docks and three naval entries: large boats can dock at the external berths, while only smaller public or private boats may navigate the inner gulf. A connection with the mainland is made possible through a semi-submersed bridge for pedestrians, cars and public electric transport, which links the land with the underwater basement. The tower also features a heliport connected to the upper garden-square and vertical linking-installations.

The architects conclude that Vertical City is "a modular interpretation of the contemporary city – and possible future."

 

 

 

 

22nd July 2015

Global warming update – July 2015

The latest global analysis of temperature data from NOAA shows that the first half of 2015 was the hottest such period on record, at 0.85°C (1.53°F) above the 20th century average, surpassing the previous record set in 2010 by 0.09°C (0.16°F).

 


 

The National Oceanic and Atmospheric Administration (NOAA) has released its latest Global Analysis of temperature and climate data. In addition to the warmest six months on record, a number of records were broken for individual months in the first half of 2015 – the Earth experienced its hottest ever February, March, May and June. These warm months, combined with the previous six months, make July 2014 to June 2015 the warmest 12-month period since records began 136 years ago.

Large areas of Earth's land surfaces witnessed higher than average temperatures in June. There was record warmth across the western United States, parts of northern South America, several regions in central to western Africa, central Asia around and to the east of the Caspian Sea, and parts of southeastern Asia. Western Greenland and parts of India and China were cooler than average, and northern Pakistan was much cooler than average.

For the oceans, the June global sea surface temperature was 0.74°C (1.33°F) above the 20th century average of 16.4°C (61.5°F), the highest for June on record, surpassing the previous record set last year by 0.06°C (0.11°F). This also tied with September 2014 as the highest monthly departure from average for any month for the globally-averaged sea surface temperature. Record warmth was observed across the northeastern and equatorial Pacific as well as parts of the equatorial and southern Indian Ocean, various regions of both the North and South Atlantic Ocean, and the Barents Sea to the northeast of Scandinavia. Only part of the North Atlantic between Greenland and the United Kingdom was much cooler than average.

2015 looks set to become the hottest year ever, thanks to the ongoing El Niño, which is clearly seen in the image below and has a strong (80%) chance of persisting into early spring 2016. For comparison, the famous "super El Niño" of 1997-1998 is shown on the left.

 


 

 

 

 

21st July 2015

Cost of human missions to the Moon and Mars could be shrunk by a factor of ten

Through private and international partnerships, the cost of colonising other worlds could be reduced by 90 percent, according to a joint study released by the National Space Society and the Space Frontier Foundation and reviewed by an independent team of NASA experts.

 


 

The National Space Society (NSS) and Space Frontier Foundation (SFF) have announced their support for NASA’s funding of the newly released NexGen Space study, illustrating how to cut the cost of human space exploration by a factor of 10. The study, “Economic Assessment and Systems Analysis of an Evolvable Lunar Architecture that Leverages Commercial Space Capabilities and Public-Private Partnerships”, finds that public-private partnerships could return humans to the Moon for approximately 90% less than the previously estimated $100 billion, allowing the United States to ensure national security in a new space age.

“The Space Frontier Foundation supports and recommends public-private partnerships in all proposed human spaceflight programs in order to reduce costs and enable these missions that were previously unaffordable,” said the Space Frontier Foundation’s Chairman of the Board, Jeff Feige. “This is the way that America will settle the final frontier, save taxpayers money and usher in a new era of economic growth and STEM innovation.”

 


 

NSS and SFF call attention to these conclusions from the study:

• Through public-private partnerships, NASA could return humans to the surface of the Moon and develop a permanent lunar base with its current human spaceflight budget.

• Mining fuel from lunar poles and transporting it to lunar orbit for use by other spacecraft reduces the cost of sending humans to Mars and other locations beyond low Earth orbit. These commercial fuel depots in lunar orbit have the potential to cut the cost of sending humans to Mars by more than $10 billion per year.

“NSS congratulates NASA for funding the team at NexGen that discovered how such cost reductions are possible,” said Mark Hopkins, the NSS Executive Committee Chair. “A factor of ten reduction in cost changes everything.”

Recent contracts with Boeing and SpaceX are just one example of how partnerships can work and may help with more ambitious projects in the future. The latter spent only $440 million developing its Falcon 9 rocket and Dragon crew capsule, whereas NASA would have spent an estimated $4 billion. SpaceX has also been developing a reusable rocket that aims to dramatically cut launch costs. Extracting and refining resources on the Moon, rather than having them delivered up from Earth to the lunar surface, could save a great deal of money too. There are many other examples of cost-saving measures. Click here to read the executive summary and here to download the full report.

 

 

 

 

21st July 2015

New promotional video of the TF-X flying car

U.S. aircraft designer Terrafugia has just announced the premiere of the new Outer Mould Line (an aeroshell's outer surface) for the TF-X – a four-seat, vertical takeoff and landing (VTOL) hybrid electric aircraft that can be driven on roads and highways, in addition to flying. It features retractable wings, the ability to land and take off within a 100 ft diameter zone, a flight speed of 200 mph (322 km/h), a flight range of 500 miles (800 km), and a backup full-vehicle parachute system for safety. When fully developed, Terrafugia claims the vehicle will be statistically safer than driving a modern automobile, and will automatically avoid other air traffic, bad weather, and restricted and tower-controlled airspace.

A one-tenth scale wind tunnel test model of the TF-X has been successfully developed based on the new Outer Mould Line and is currently on display at EAA’s AirVenture in Oshkosh, Wisconsin. The model will be tested at the MIT Wright Brothers wind tunnel, the same tunnel that was used to test models of Terrafugia’s Transition – a similar vehicle it has been developing alongside the TF-X. The wind tunnel test model will be used to measure drag, lift and thrust forces while simulating hovering flight, transitioning to forward flight and full forward flight.

The Transition, originally planned for launch in 2013, will now debut in either late 2015 or 2016, while the TF-X seen below is expected to go on sale during the mid-2020s. There's no word yet on pricing details for the TF-X, though it's likely to be aimed at wealthy individuals, given that its sibling, the Transition, has a price tag of US$280K. Over the coming decades, however, as costs fall and technology improves, the dream of a practical flying car may become a reality for everyone. Then we will truly be living in the future.

 

 

 

 

 

 

20th July 2015

Nanowires boost solar fuel cell efficiency tenfold

Nanowires have been used by Dutch researchers to boost solar fuel cell efficiency tenfold, while using 10,000 times less precious material.

 


 

Researchers at Eindhoven University of Technology (EUT) and the Foundation for Fundamental Research on Matter (FOM) in the Netherlands have demonstrated a highly promising prototype of a solar cell that generates fuel, rather than electricity. The material gallium phosphide enables their cell to produce the clean fuel hydrogen gas from liquid water. By processing the gallium phosphide using tiny nanowires, the yield is boosted by a factor of ten, while using 10,000 times less precious material.

Electricity produced by a solar cell can be used to set off chemical reactions. If this generates a fuel, then one speaks of solar fuels – a hugely promising replacement for polluting fuels. One possibility is to split liquid water using the electricity that is generated (electrolysis). Along with oxygen, this produces hydrogen gas that can be used as a clean fuel in the chemical industry or combusted in fuel cells – in cars for example – to drive engines.

To connect an existing silicon solar cell to a battery that splits the water may well be an efficient solution now, but is very expensive. Many researchers are therefore trying to develop a semiconductor material able to both convert sunlight to an electrical charge and split the water, all in one; a kind of "solar fuel cell". Researchers at EUT and FOM see their dream candidate in gallium phosphide (GaP), a compound of gallium and phosphorus that also serves as the basis for specific coloured LEDs.

 


 

GaP has good electrical properties, but it cannot easily absorb light when it consists of a large flat surface, as used for solar cells. The researchers overcame this problem by making a grid of tiny GaP nanowires, measuring 500 nanometres in length and just 90 nanometres thick (a nanometre is a millionth of a millimetre). This design immediately boosted the yield of hydrogen to 2.9 percent – a factor of ten improvement and a record for GaP cells, even though it is still some way off the 15 percent achieved by silicon cells coupled to a battery.

Research leader and EUT professor Erik Bakkers said it’s not simply about the yield, where there is still a lot of scope for improvement. He points out: “For the nanowires, we needed 10,000 times less of the precious GaP material than in cells with a flat surface. That makes these kinds of cells potentially a great deal cheaper. In addition, GaP is also able to extract oxygen from the water – so you then actually have a fuel cell in which you can temporarily store your solar energy. In short, for a solar fuels future we cannot ignore gallium phosphide any longer.”
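A rough geometric estimate shows how such a large saving is possible. The wire dimensions below are from the article, but the array spacing and the thickness of the flat reference electrode are illustrative assumptions, not values from the study:

```python
# Very rough, assumption-laden estimate of the GaP saved by using nanowires.
import math

wire_length_nm = 500          # from the article
wire_radius_nm = 45           # 90 nm diameter, from the article
pitch_nm = 500                # ASSUMED centre-to-centre wire spacing
flat_thickness_nm = 150_000   # ASSUMED ~150-micron planar GaP electrode

wire_volume = math.pi * wire_radius_nm ** 2 * wire_length_nm
equivalent_film_nm = wire_volume / pitch_nm ** 2   # GaP spread over one unit cell

print(f"Equivalent film thickness: {equivalent_film_nm:.1f} nm")
print(f"Material saving vs flat electrode: ~{flat_thickness_nm / equivalent_film_nm:,.0f}x")
# With these assumed numbers the saving comes out around 10,000x, the same
# order of magnitude as the factor reported by the researchers.
```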

The researchers describe their breakthrough in the journal Nature Communications.

 

 

 

 

20th July 2015

New massless particle is observed for the first time

Scientists report the discovery of the Weyl fermion after an 85-year search. This massless quasiparticle could lead to future electronics that are faster and with less waste heat.

 


 

An international team led by Princeton University scientists has discovered an elusive massless particle, first theorised 85 years ago. This particle is known as the Weyl fermion, and could give rise to faster and more efficient electronics, because of its unusual ability to behave as both matter and antimatter inside a crystal. Weyl fermions, if applied to next-generation electronics, could allow a nearly free and efficient flow of electricity in electronics – and thus greater power – especially for computers. The researchers report their discovery in the journal Science.

Proposed by the mathematician and physicist Hermann Weyl in 1929, Weyl fermions have been long sought by scientists, because they are regarded as possible building blocks of other subatomic particles, and are even more basic than electrons. Their basic nature means that Weyl fermions could provide a much more stable and efficient transport of particles than electrons, the main particle behind modern electronics. Unlike electrons, Weyl fermions are massless and possess a high degree of mobility.
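For the mathematically inclined: in modern notation, Weyl's 1929 proposal amounts to a two-component wave equation with no mass term, shown below in LaTeX (a compact summary, not taken from the Princeton paper):

```latex
% Weyl's equation for a massless, right-handed two-component fermion
% (natural units, \sigma^{\mu} = (I, \sigma_x, \sigma_y, \sigma_z)):
\sigma^{\mu} \partial_{\mu} \psi = 0
% The Dirac equation, (i\gamma^{\mu}\partial_{\mu} - m)\psi = 0, reduces to
% decoupled Weyl equations only when the mass m vanishes.
```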

"The physics of the Weyl fermion are so strange – there could be many things that arise from this particle that we're just not capable of imagining now," explained Professor M. Zahid Hasan, who led the team.

The researchers' find differs from other particle discoveries, in that the Weyl fermion can be reproduced and potentially applied. Particles such as the Higgs boson are typically detected in the fleeting aftermath of collisions. The Weyl fermion, however, was captured inside a specially designed synthetic metallic crystal called tantalum arsenide.

 

Professor M. Zahid Hasan

 

The Weyl fermion has two characteristics that could improve future electronics, possibly helping to continue the exponential growth in computer power, while also proving useful in developing efficient quantum computing. Firstly, they behave like a composite of monopole- and antimonopole-like particles inside a crystal. This means that Weyl particles that have opposite, magnetic-like charges, can nonetheless move independently of each other with a high degree of mobility. Secondly, Weyl fermions can be used to create massless electrons that move very quickly with no backscattering. In electronics, backscattering hinders efficiency and generates heat. While normal electrons are lost when they collide with an obstruction, Weyl electrons simply move through and around roadblocks.

"It's like they have their own GPS and steer themselves without scattering," said Hasan. "They will move and move only in one direction since they are either right-handed or left-handed and never come to an end because they just tunnel through. These are very fast electrons that behave like unidirectional light beams and can be used for new types of quantum computing."

Hasan and his group researched and simulated dozens of crystal structures before finding the one suitable for holding Weyl fermions. Once fashioned, the crystals were loaded into a scanning tunnelling spectromicroscope (pictured above) and cooled to near absolute zero. Crystals passing the spectromicroscope test were taken to the Lawrence Berkeley National Laboratory in California, for testing with high-energy photon beams. Once fired through the crystal, the beams' shape, size and direction indicated the presence of the long-elusive Weyl fermion.

The hunt for the Weyl fermion began in the earliest days of quantum theory, when physicists first realised that their equations implied the existence of antimatter counterparts to electrons and other commonly known particles.

"People figured that although Weyl's theory was not applicable to relativity or neutrinos, it is the most basic form of fermion and had all other kinds of weird and beautiful properties that could be useful," said Hasan.

"After more than 80 years, we found that this fermion was already there, waiting. It is the most basic building block of all electrons," he said. "It is exciting that we could finally make it come out following Weyl's 1929 theoretical recipe."

 

 

 

 

16th July 2015

Asteroid mining test craft is successfully deployed

Planetary Resources, Inc., the asteroid mining company, announced today that its Arkyd 3 Reflight (A3R) spacecraft was deployed successfully from the International Space Station's (ISS) Kibo airlock and has begun its 90-day mission.

 


 

The A3R demonstration vehicle will validate several core technologies including the avionics, control systems and software, which the company will incorporate into future spacecraft that will venture into the Solar System and prospect for resource-rich, near-Earth asteroids.

The A3R was launched to the ISS onboard a SpaceX Falcon 9 rocket in April as part of the CRS-6 cargo resupply mission. "Our philosophy is to test often, and if possible, to test in space. The A3R is the most sophisticated, yet cost-effective, test demonstration spacecraft ever built. We are innovating on every level from design to launch," said Chris Lewicki, president and chief engineer, Planetary Resources, Inc. "By vertically integrating the system at our facility in Redmond, we are in constant control of every component, including the ones we purchase off the shelf and the others that we manufacture using 3D printers."

 


 

Peter H. Diamandis, M.D., co-founder and co-chairman, Planetary Resources, Inc., stated, "The successful deployment of the A3R is a significant milestone for Planetary Resources as we forge a path toward prospecting resource-rich asteroids. Our team is developing the technology that will enable humanity to create an off-planet economy that will fundamentally change the way we live on Earth."

Once the A3R completes its mission, the validated and evolved technologies will be the main components of the Arkyd series of deep-space asteroid-prospecting spacecraft. The next demonstrator, the Arkyd-6 (pictured below), will be launched later this year and will test the attitude control, power, communication and avionics systems.

 


 

Planetary Resources is leveraging the increased payload capacity of the A6 to begin demonstration of core technology to measure resources on water-rich asteroids. Included in the payload is a mid-wave infrared imaging system, able to precisely measure temperature differences of the objects it observes, as well as acquire key data related to the presence of water and water-bearing minerals. The system will first test targeted areas of our own planet before being deployed to near-Earth asteroids on future missions.

Eric Anderson, co-founder and co-chairman, Planetary Resources, Inc., said, "This key technology for determining resources on asteroids can also be applied towards monitoring and managing high-value resources on our home planet. All of our work at Planetary Resources is laying the foundation to better manage and increase humanity's access to natural resources on our planet and in our Solar System."

In related news, the SPACE Act of 2015 was recently passed in the House of Representatives. As Peter explains in the video below, this recognises the rights of U.S. asteroid mining companies to declare mined asteroid resources as property and creates a process for resolving disputes. The Senate is currently reviewing a duplicate version of the House language, S. 976.

 

 

 

 

 

 

15th July 2015

Large Hadron Collider discovers new particle

After a 50-year hunt, scientists have reported strong evidence of a new particle – the pentaquark.

 


 

The LHCb experiment at CERN's Large Hadron Collider (LHC) has reported the discovery of a class of particles known as pentaquarks. The team has submitted a paper reporting these findings to the journal Physical Review Letters.

"The pentaquark is not just any new particle," said LHCb spokesperson Guy Wilkinson. "It represents a way to aggregate quarks – namely the fundamental constituents of ordinary protons and neutrons in a pattern that has never been observed before in over 50 years of experimental searches. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we're all made, is constituted."

Our understanding of the structure of matter was revolutionised in 1964, when the American physicist Murray Gell-Mann proposed that baryons, a category of particles that includes protons and neutrons, are composed of three fractionally charged objects called quarks, and that another category, mesons, are formed of quark-antiquark pairs. Gell-Mann was awarded the Nobel Prize in physics for this work in 1969. This quark model also allows the existence of other quark groups, such as pentaquarks – composed of four quarks and an antiquark. Until now, however, no conclusive evidence for pentaquarks had been seen.

"Benefitting from the large data set provided by the LHC, and the excellent precision of our detector, we have examined all possibilities for these signals, and conclude that they can only be explained by pentaquark states", says LHCb physicist Tomasz Skwarnicki of Syracuse University. "More precisely the states must be formed of two up quarks, one down quark, one charm quark and one anti-charm quark."

 


 

LHCb researchers looked for pentaquark states by examining the decay of a baryon known as Λb (Lambda b) into three other particles: a J/ψ (J-psi), a proton and a charged kaon. Earlier experiments that searched for pentaquarks have proved inconclusive. Where the LHCb experiment differs is that it has been able to look for pentaquarks from many perspectives, with all pointing to the same conclusion. It's as if the previous searches were looking for silhouettes in the dark, whereas LHCb conducted the search with the lights on, and from all angles. The next step in the analysis will be to study how the quarks are bound together within the pentaquarks.
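In symbols, the decay and the quark content described above can be summarised as follows (standard quark assignments for each hadron):

```latex
% Decay studied by LHCb, with the quark content of each hadron in brackets:
\Lambda_b^0\,(udb) \;\to\; J/\psi\,(c\bar{c}) \; + \; p\,(uud) \; + \; K^-\,(\bar{u}s)
% The intermediate pentaquark states decay to J/\psi + p, so their content is
P_c^+ = u\,u\,d\,c\,\bar{c}
```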

"The quarks could be tightly bound," said LHCb physicist Liming Zhang of Tsinghua University, "or they could be loosely bound in a sort of meson-baryon molecule, in which the meson and baryon feel a residual strong force similar to the one binding protons and neutrons to form nuclei."

More studies will be needed to distinguish between these possibilities, and to see what else pentaquarks can teach us. The new data that LHCb will collect in LHC run 2 will allow progress to be made on these questions.

 

 

 

 

14th July 2015

China maintains supercomputing lead

For the fifth consecutive time, Tianhe-2, a supercomputer developed by China's National University of Defence Technology, has retained its position as the world's no. 1 system, according to the 45th edition of the twice-yearly TOP500 list.

 


 

Tianhe-2, which means "Milky Way-2", continues to lead the TOP500 list with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark.

In second place is Titan, a Cray XK7 system at the Department of Energy's (DOE) Oak Ridge National Laboratory. Titan, the top system in the US and one of the most energy-efficient systems on the list, achieved 17.59 petaflop/s on the Linpack benchmark.

The only new entry in the top ten is at no. 7 – Shaheen II is a Cray XC40 system installed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Shaheen II achieved 5.54 petaflop/s on the Linpack benchmark, making it the highest-ranked Middle East system in the 22-year history of the list and the first to crack the top ten.

There are 68 systems with performance greater than 1 petaflop/s on the list, up from 50 last November. In total, the combined performance of all 500 systems has grown to 363 petaflop/s, compared to 309 petaflop/s last November and 274 petaflop/s one year ago. HP has the lead in the total number of systems with 178 (35.6%), compared to IBM with 111 systems (22.2%).

Nine of the systems in the top ten were installed in 2011 or 2012, and this low level of turnover among the top supercomputers reflects a slowing trend that began in 2008. However, new systems are in the pipeline that may reignite the pace of development and get performance improvements back on track. For example, Oak Ridge National Laboratory is building the IBM/Nvidia "Summit", featuring up to 300 petaflops – an order of magnitude faster than China's Tianhe-2 – which is planned for 2018. Meanwhile, British company Optalysys claims it will have a multi-exaflop optical computer by 2020.
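For context, the figures quoted above translate into a simple growth picture (a quick illustration, not part of the TOP500 analysis):

```python
# Simple arithmetic on the TOP500 figures quoted above.
total_petaflops = {"June 2014": 274, "Nov 2014": 309, "June 2015": 363}

growth = total_petaflops["June 2015"] / total_petaflops["June 2014"] - 1
print(f"Combined list performance grew ~{growth:.0%} over the past year")

summit_planned = 300     # petaflops, planned for 2018
tianhe2 = 33.86          # current no. 1 on the Linpack benchmark
print(f"Summit would be ~{summit_planned / tianhe2:.0f}x faster than Tianhe-2")
```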

To view the complete list, visit top500.org.

 


 

 

 

 

13th July 2015

7 nanometre chips enable Moore's Law to continue

Researchers have announced a breakthrough in the manufacture of 7 nanometre (nm) computer chips, enabling the trend of Moore's Law to continue for the next few years.

 


 

IBM Research has announced the semiconductor industry's first 7nm (nanometre) node test chips with functioning transistors. The breakthrough was accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute's Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) and could result in the ability to place more than 20 billion tiny switches – transistors – on the fingernail-sized chips that power everything from smartphones to spacecraft.

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered in this collaboration were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging "exponential" technologies. This accomplishment was part of IBM's $3 billion, five-year investment in chip R&D announced last year.

 


 

"For business and society to get the most out of tomorrow's computers and devices, scaling to 7nm and beyond is essential," said Arvind Krishna, senior vice president and director of IBM Research. "That's why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come."

Microprocessors utilising 22nm and 14nm technology power today's servers, cloud data centres and mobile devices, and 10nm technology is well on the way to maturity. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today's most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, developed process innovations to stack them below 30nm pitch, and achieved full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next-generation systems that will power the Big Data, cloud and mobile era. These new 7nm chips are expected to start appearing in computers and other gadgets in 2017-18.
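As a simplified illustration of what "close to 50 percent area scaling" per generation means for transistor density (real scaling factors vary between nodes and manufacturers):

```python
# Illustrative only: if each new node halves the area per transistor,
# relative density doubles with every step from 22nm down to 7nm.
nodes = ["22nm", "14nm", "10nm", "7nm"]
density = 1.0
for node in nodes:
    print(f"{node}: ~{density:.0f}x the transistor density of 22nm")
    density *= 2
```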

 


 

 

 

 

8th July 2015

The world's first 2TB consumer SSDs

Samsung has announced the first 2 terabyte solid state drives for the consumer market – continuing the exponential trend in data storage.

 


 

Samsung has announced two new SSDs – the 850 Pro and 850 EVO – both offering double the capacity of the previous generation. The 2.5" form factor drives can greatly boost performance for desktops and laptops. They will be especially useful in the accessing and storage of 4K video, which can often require enormous file sizes. The drives are available in capacities of 120GB, 250GB, 500GB and 1TB, all the way up to the new 2TB models.

The 850 Pro is designed for power users needing the maximum possible speed, while the 850 EVO is less powerful but somewhat cheaper. The 850 Pro features up to 550MBps sequential read and 520MBps sequential write rates and 100,000 random I/Os per second (IOPS). The 850 EVO has 540MBps sequential read and 520MBps write rates, with up to 90,000 random IOPS. Both models feature 3D V-NAND technology, which stacks 32 layers of transistors on top of each other. The drives also use multi-level cell (MLC) and triple-level cell (TLC) (2- and 3-bit per cell) technology for even greater memory density.
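Those headline numbers translate into tangible figures. For example, a best-case calculation (ignoring real-world overheads) of how long it would take to fill the 2TB model at its rated sequential write speed:

```python
# Rough, best-case arithmetic from the quoted specifications; sustained
# real-world write speeds will typically be lower.
capacity_bytes = 2e12        # 2 TB (decimal terabytes)
seq_write_mb_s = 520         # rated sequential write speed, MB/s

seconds = capacity_bytes / (seq_write_mb_s * 1e6)
print(f"Filling the drive once would take ~{seconds / 60:.0f} minutes")  # ~64 min
```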

Until recently, consumers were forced to choose between speed and size when it came to upgrading their hard drives. For pure speed, a solid state drive was the best option, while larger sizes were typically catered for with slower and clunkier spinning drives. These new terabyte-scale SSDs are going to change that – combining both high speed and high capacity. Price may still be an issue, as Samsung's new product line doesn't come cheap. The 2TB version of the 850 Pro will retail for $999.99 and the 850 EVO for $799.99. However, given the trend in price performance witnessed in earlier generations of data storage, it is likely these high capacity SSDs will soon be a lot cheaper.

"Samsung experienced a surge in demand for 500 gigabyte (GB) and higher capacity SSDs with the introduction of our V-NAND SSDs," says Un-Soo Kim, Senior Vice President of Branded Product Marketing, Memory Business, in a press release from Samsung. "The release of the 2TB SSD is a strong driver into the era of multi-terabyte SSD solutions. We will continue to expand our ultra-high performance and large density SSD product portfolio and provide a new computing experience to users around the globe."

 


 

 

 

 

4th July 2015

The first comprehensive analysis of the woolly mammoth genome

The first comprehensive analysis of the mammoth genome has been completed – revealing a number of traits that enabled the animals to survive the Arctic cold.

 

CREDIT: IMAGE COURTESY OF GIANT SCREEN FILMS © 2012 D3D ICE AGE, LLC

 

2015 is turning out to be a significant year for research on mammoths. In March, DNA from an ancient specimen was spliced into that of an elephant and shown to be functional for the first time. In April, a team sequenced the entire genome of the extinct animal. Following those breakthroughs, it is now reported that scientists have completed the first detailed analysis of the genome, revealing extensive genetic changes that helped mammoths adapt to life during the Ice Age.

The research was published this week in the peer-reviewed journal Cell Reports. It concludes that mammoths possessed genes with striking differences to those found in elephants. These genes played roles in skin and hair development, fat metabolism, insulin signalling and numerous other traits for adaptation in extreme cold environments. Genes linked to physical traits such as skull shape, small ears and short tails were also identified. As a test of their function, a mammoth gene involved in temperature sensation was "resurrected" in the laboratory and its protein product characterised.

“This is by far the most comprehensive study to look at the genetic changes that make a woolly mammoth a woolly mammoth,” says Vincent Lynch, PhD, assistant professor of human genetics at the University of Chicago. “They are an excellent model to understand how morphological evolution works, because mammoths are so closely related to living elephants, which have none of the traits they had.”

Well-studied thanks to the abundance of skeletons, frozen carcasses and depictions in prehistoric art, these animals possessed long, coarse fur, a thick layer of subcutaneous fat, small ears and tails, and a brown-fat deposit behind the neck which may have functioned similarly to a camel's hump. They last roamed the frigid tundra steppes of northern Asia, Europe and North America roughly 10,000 years ago.

 

last glacial maximum earth ice
Artist's impression of the northern hemisphere during the last Ice Age. By Ittiz (Own work) [CC BY-SA 3.0], via Wikimedia Commons.

 

Previous efforts to sequence preserved mammoth DNA were error-prone, or yielded insights into only a limited number of genes. Lynch and his team performed deep sequencing of two specimens to identify 1.4 million genetic variants unique to woolly mammoths. These variants resulted in changes to the proteins produced by around 1,600 genes.
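
To make the scale of that analysis more concrete, the toy Python sketch below shows the kind of bookkeeping involved in going from a list of protein-changing variants to a count of affected genes. The entries are made-up placeholders (only TRPV3, discussed below, appears in the study); the real work was done with specialised variant-calling pipelines rather than a script like this.

    # Toy illustration: count how many genes carry protein-changing variants,
    # given (gene, variant) pairs. Entries are hypothetical placeholders;
    # the actual study used dedicated variant-calling pipelines.

    from collections import Counter

    mammoth_specific_variants = [
        ("TRPV3", "variant_001"),
        ("TRPV3", "variant_002"),
        ("EXAMPLE_GENE", "variant_003"),
    ]

    per_gene = Counter(gene for gene, _ in mammoth_specific_variants)
    print("{} genes carry protein-changing variants".format(len(per_gene)))
    for gene, count in per_gene.most_common():
        print("  {}: {} variant(s)".format(gene, count))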

Of particular interest was a group of genes responsible for temperature sensation, which also play roles in hair growth and fat storage. The team used ancestral sequence reconstruction techniques to “resurrect” the mammoth version of one of these genes, TRPV3. When transplanted into human cells in the lab, the mammoth TRPV3 gene produced a protein that was less responsive to heat than the ancestral elephant version of the gene. This finding is consistent with experiments on mice: mice in which TRPV3 has been knocked out prefer colder environments and have wavier hair than normal mice.

However, although the functions of these genes match well with the environment in which woolly mammoths were known to live, Lynch warns that it is not direct proof of their effects in live mammoths. Regulation of gene expression, for example, is extremely difficult to study through the genome alone.

“We can’t know with absolute certainty the effects of these genes unless someone resurrects a complete woolly mammoth, but we can try to infer by doing experiments in the laboratory,” he says. Lynch and his colleagues are now identifying candidates for other mammoth genes to functionally test, alongside planning experiments to study mammoth proteins in elephant cells.

High-quality sequencing and detailed analysis of genomes can serve as a blueprint for efforts to “de-extinct” the woolly mammoth, according to Lynch: “Eventually, we’ll be technically able to do it,” he states. “But the question is: if you’re technically able to do something, should you do it? I personally think no. Mammoths are extinct and the environment in which they lived has changed. There are many animals on the edge of extinction we should be helping instead.”

 

 

 

 

1st July 2015

Oregon becomes the fourth US state to make recreational marijuana legal

Oregon has become the fourth state in the US to make recreational marijuana legal. A new voter-approved law – Measure 91 – comes into effect today allowing for adult possession and home cultivation of the drug. The law permits adults 21 and older to grow four plants and keep eight ounces at home, and possess one ounce in public. Public consumption and sales will continue to remain illegal. Taking marijuana across the Oregon border is also illegal.

Retail businesses offering the drug can apply for licences from 4th January 2016 and are expected to begin operating later that year. The extra time was allotted to create specific regulations for sellers, with the aim of ensuring the best possible public safety outcome.

"Expending law enforcement resources by going after nonviolent marijuana users is a shameful waste of time and tax dollars, and a distraction from what's really plaguing neighbourhoods," says Neill Franklin, executive director of Law Enforcement Against Prohibition (LEAP), a criminal justice group opposed to the drug war. "Cops in Oregon can now get into doing their jobs; protecting communities and helping victims of violent crimes get justice."

"Oregon still has more to do to ensure marijuana legalisation is done properly; lawmakers and regulators are currently working to expunge the records of many non-violent marijuana offenders as well as develop proper regulations for taxes, concentrates, and labelling for consumer and child protection," says Inge Fryklund, a former prosecutor, and board member of LEAP. "We must promote honest and accurate public information along with sensible regulations. Oregon can and will be a model for future states looking to consider legalisation in 2016 and beyond."

A total of 23 states and the District of Columbia have now permitted some form of medical marijuana access, while four states – Alaska, Colorado, Oregon and Washington – and the capital Washington, D.C., have legalised it for recreational use. Oregon's regulatory model will be developed with previous successes and failures of other states in mind. Among the priorities of the Oregon Liquor Control Commission are preventing accidental ingestion by children, with the use of appropriate childproof packaging and ensuring that extracts, concentrates, and edibles are carefully regulated, tested, and labelled.

According to state forecasts, Colorado and Washington could generate over $800 million in combined revenue by 2020 from marijuana sales. A clear and growing majority of Americans are in favour of nationwide legalisation of the drug, as evidenced by surveys from Gallup and others. Most of the remaining opposition comes from the conservative baby boomers, a demographic whose influence is beginning to wane. Some of the next states where legalisation may follow include Arizona, California, Maine, Massachusetts and Nevada, with advocates planning for ballot measures in 2016. Similar to the recent decision on same-sex marriage, a nationwide law on marijuana could follow in the not-too-distant future.

A dedicated website for Oregon's new law has been created at whatslegaloregon.com.

 

oregon marijuana legal 1 july 2015

 

 

 
     
   