Blog » Computers & the Internet

 
     
 

21st June 2017

Entangled photons sent between suborbital space and Earth

Chinese scientists report the transmission of entangled photons between suborbital space and Earth, using the satellite Micius. More satellites could follow in the near future, with plans for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.

 

 

In a landmark study, Chinese scientists report the successful transmission of entangled photons between suborbital space and Earth. Whereas the previous record for entanglement distribution was 100 km (62 miles), the new experiment achieved transmission over more than 1,200 km (746 miles).

The distribution of quantum entanglement, especially across vast distances, holds major implications for quantum teleportation and encryption networks. Yet, efforts to entangle quantum particles – essentially "linking" them together over long distances – have been limited to 100 km or less, mainly because the entanglement is lost as they are transmitted along optical fibres, or through open space on land.

One way to overcome this issue is to break the line of transmission into smaller segments and repeatedly swap, purify and store quantum information along the optical fibre. Another approach to achieving global quantum networks is by making use of lasers and satellite technologies. Using a Chinese satellite called Micius, launched last year and equipped with specialised quantum tools, Juan Yin et al. demonstrated the latter feat. The Micius satellite was used to communicate with three ground stations across China, each up to 1,200 km apart.

The separation between the orbiting satellite and these ground stations varied from 500 to 2,000 km. A laser beam on the satellite was passed through a beam splitter, which gave the beam two distinct polarised states. One of the split beams was used for the transmission of entangled photons, while the other was used for photon receipt. In this way, entangled photons were received at the separate ground stations.

"It's a huge, major achievement," Thomas Jennewein, physicist at the University of Waterloo in Canada, told Science. "They started with this bold idea and managed to do it."

"The Chinese experiment is quite a remarkable technological achievement," said Artur Ekert, a professor of quantum physics at the University of Oxford, in an interview with Live Science. "When I proposed the entangled-based quantum key distribution back in 1991 when I was a student in Oxford, I did not expect it to be elevated to such heights."

One of the many challenges faced by the team was keeping the beams of photons focused precisely on the ground stations as the satellite hurtled through space at nearly 8 kilometres per second.

Quantum encryption, if successfully developed, could revolutionise communications. Information sent via this method would, in theory, be absolutely secure and practically impossible for hackers to intercept. If two people shared an encrypted quantum message, a third person would be unable to access it without changing the information in an unpredictable way. Further satellite tests are planned by China in the near future, with potential for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.

---


 

 

20th June 2017

Graphene transistors could mean computers that are 1,000 times faster

Next-gen, carbon-based transistors would far outperform today's silicon versions, according to a new research paper from the University of Central Florida (UCF).

 

All-carbon spin logic gate. Credit: Nature Communications (2017). DOI: 10.1038/ncomms15635

 

Traditional silicon-based transistors revolutionised electronics with their ability to switch current on and off. By controlling the flow of current, the creation of smaller computers and other devices became possible. Over the decades, rapid gains in miniaturisation – driven by the steady doubling of transistor densities described by Moore's Law – saw computers shrink from room-sized monoliths, to wardrobe-sized machines, to desktops, laptops and eventually handheld smartphones. In recent years, however, concerns have arisen that this rate of progress may have slowed, or could even be approaching a fundamental limit.

A solution may be on the horizon. This month, researchers have theorised a next-generation transistor based not on silicon but on a ribbon of graphene, a two-dimensional carbon material with the thickness of a single atom. Their findings – reported in Nature Communications – could have big implications for electronics, computing speeds and big data in the future. Graphene-based transistors may someday lead to computers that are 1,000 times faster and use a hundredth of today's power.

"If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street. To get there, we can't rely on silicon transistors anymore," said Ryan M. Gelfand, director of the NanoBioPhotonics Laboratory at UCF.

 

University of Central Florida Assistant Professor Ryan M. Gelfand

 

His team found that by applying a magnetic field to a graphene ribbon, they could change the resistance of current flowing through it. For this device, the magnetic field was controlled by increasing or decreasing the current through adjacent carbon nanotubes. The strength of the magnetic field thus governed the flow of current through this new kind of transistor, much like a valve controlling the flow of water through a pipe.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But clock speeds that rely on silicon transistors have been relatively stagnant for over a decade now, and are mostly still stuck in the 3 to 4 gigahertz range.

A cascading series of graphene transistor-based logic circuits could produce a massive jump, explains Gelfand, with clock speeds approaching the terahertz range – 1,000 times faster – because communication between each of the graphene nanoribbons would occur via electromagnetic waves, instead of the physical movement of electrons. They would also be smaller and far more efficient, allowing device-makers to shrink technology and squeeze in more functionality.

"The concept brings together an assortment of existing nanoscale technologies and combines them in a new way," said Dr. Joseph Friedman, assistant professor of electrical and computer engineering at UT Dallas, who collaborated with Gelfland and his team. While the concept is still in the early stages, Friedman said work towards a prototype all-carbon, cascaded spintronic computing system will continue in the NanoSpinCompute research laboratory.

---


 

 

19th June 2017

New VR headset will feature "human eye resolution"

Varjo, a tech startup based in Helsinki, Finland, has today unveiled a new VR/AR technology it has been developing in secret. This features nearly 70 times the pixel count of current generation headsets and is sufficient to match human eye resolution.

 


 

Varjo ("Shadow" in Finnish) Technologies today announced it has emerged from stealth and is now demonstrating the world's first human eye-resolution headmounted display for upcoming Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) products. Designed for professional users and with graphics an order of magnitude beyond any currently shipping or announced head-mounted display, this major advancement will enable unprecedented levels of immersion and realism.

This breakthrough is accomplished by Varjo's patented technology that replicates how the human eye naturally works, creating a super-high-resolution image wherever the user's gaze is directed. This is further combined with video-see-through (VST) technology for unparalleled AR/MR capabilities.
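As a rough illustration of what "nearly 70 times the pixel count of current generation headsets" implies, the sketch below assumes a 2017-era headset resolution of 2160 × 1200 pixels across both eyes (an Oculus Rift / HTC Vive class figure that is not stated in the article).

```python
# Back-of-envelope estimate of Varjo's "~70x the pixel count" claim.
# Assumption: a "current generation" 2017 headset is taken to be
# 2160 x 1200 pixels across both eyes; this figure is not from the article.

current_pixels = 2160 * 1200          # ~2.6 megapixels total
varjo_factor = 70                     # "nearly 70 times the pixel count"

effective_pixels = current_pixels * varjo_factor
print(f"Current-gen headset: {current_pixels / 1e6:.1f} MP")
print(f"Implied Varjo effective resolution: {effective_pixels / 1e6:.0f} MP")
# -> roughly 180 megapixels of effective detail, which is why the design
#    concentrates a super-high-resolution image where the user is looking
#    rather than building one enormous uniform panel.
```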

 


 

Codenamed "20|20" after perfect vision, Varjo's prototype is based on unique technology created by a team of optical scientists, creatives and developers who formerly occupied top positions at Microsoft, Nokia, Intel, Nvidia and Rovio. It will be shipping in Varjo-branded products specifically for professional users and applications starting in late Q4, 2017.

"Varjo's patented display innovation pushes VR technology 10 years ahead of the current state-of-the-art, where people can experience unprecedented resolution of VR and AR content limited only by the perception of the human eye itself," said Urho Konttori, CEO and founder. "This technology – along with Varjo VST – jump-starts the immersive computing age overnight: VR is no longer a curiosity, but now can be a professional tool for all industries."

 

 

---


 

 

4th June 2017

Intel reveals Core i9 – the next generation of high-end processors

Chipmaker Intel has announced a new generation of processors, including the Core i9 series, its first teraflop desktop CPUs.

 


 

Intel has this week introduced a new family of microprocessors – the Core X-series – which the company describes as the most scalable, accessible and powerful desktop platform ever developed. This includes a new Core i9 brand and Core i9 Extreme Edition, the first consumer desktop CPU with 18 cores and 36 threads. The company is also launching the Intel X299 chipset, which adds even more I/O and overclocking capabilities.

Given their extreme power and speed, this family of processors is being pitched at gamers, content creators, and overclocking enthusiasts. Intel expects to increase its presence in high-end desktop markets and believes that customers will pay premiums in exchange for higher performance. Prices for the i9 line-up will range from $999 to $1999.

Prior to this announcement, Intel's high-end desktop processors (known as Broadwell-E) came with six, eight or 10 core options. The Core X-series will include five Core i9 chips, with a minimum of 10 cores and the top-end i9-7980 featuring a massive 18 cores. A major update has also been announced for Intel's Turbo Boost Max Technology 3.0, which will identify the two top cores and direct critical workloads to them, for a big jump in single- and multithreaded performance.

The Core i9-7980 will be the first Intel consumer processor to exceed a teraflop of computing power, meaning it can perform a trillion computational operations every second. To put this in perspective, that is roughly equal to ASCI Red, which reigned as the world's most powerful supercomputer from 1997 until 2000. All Core i9 chips will have 3.3 GHz base clock speeds, with up to 4.5 GHz using Turbo Boost 3.0, and up to 44 PCIe lanes.
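A minimal sanity check of the teraflop figure is sketched below. The per-cycle throughput and sustained clock are illustrative assumptions, not specifications quoted in the article.

```python
# Rough peak-FLOPS check for the 18-core Core i9 figure quoted above.
# Assumptions (for illustration only; not from the article):
#   - all 18 cores sustain the 3.3 GHz base clock under load
#   - each core completes 32 double-precision FLOPs per cycle
#     (two AVX-512 fused multiply-add units x 8 doubles x 2 ops)

cores = 18
clock_hz = 3.3e9
flops_per_cycle = 32          # assumption, see above

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")
# -> ~1.9 TFLOPS on paper; sustained AVX-512 clocks are lower in practice,
#    but the chip still clears the one-teraflop (1e12 operations/second) mark.
```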

"The possibilities with this type of performance are endless," says Gregory Bryant, a senior vice president, in a blog post. "Content creators can have fast image rendering, video encoding, audio production and real-time preview – all running in parallel seamlessly so they spend less time waiting and more time creating. Gamers can play their favourite game, while they also stream, record and encode their gameplay, and share on social media – all while surrounded by multiple screens for a 12K experience with up to four discrete graphics cards."

In addition to Core i9, there are also three new i7 chips and an i5, including the quad-core i5-7640X and i7 models in 4, 6 and 8-core variants. Prices will range from $242 for the i5, to $599 for the i7-7820X.

 


 

---


 

 

17th May 2017

World's largest single-memory computer is unveiled

Hewlett Packard Enterprise (HPE) has revealed "The Machine" – a new computing architecture with 160 terabytes of memory.

 

Credit: HPE

 

Hewlett Packard Enterprise (HPE) has introduced the world's largest single-memory computer. Known simply as "The Machine", it is the largest R&D program in the history of the company, and is aimed at delivering a new paradigm called Memory-Driven Computing – an architecture custom-built for the big data era.

"The secrets to the next great scientific breakthrough, industry-changing innovation or life-altering technology hide in plain sight behind the mountains of data we create every day," explained Meg Whitman, CEO of HPE. "To realise this promise, we can't rely on the technologies of the past. We need a computer built for the big data era."

The prototype unveiled this week features a staggering 160 terabytes (TB) of memory, enough to simultaneously work with the data held in every book in the Library of Congress five times over – or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size within a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.

Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory – 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.
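The scale comparisons can be checked with simple arithmetic; the per-book size and the implied size of today's "digital universe" below follow from the article's own figures rather than being stated directly.

```python
# Checking the scale comparisons in the HPE announcement.
# Both results are implied by the article's figures, not quoted directly.

TB = 10**12
YB = 10**24
ZB = 10**21

prototype_memory = 160 * TB
books = 160_000_000
print(f"Implied size per book: {prototype_memory / books / 1e6:.1f} MB")    # ~1 MB

future_pool = 4096 * YB
digital_universe = future_pool / 250_000
print(f"Implied 'digital universe' today: {digital_universe / ZB:.0f} ZB")  # ~16 ZB
```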

With such a vast amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google's autonomous vehicles and every data set from space exploration, all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.

 


 

"We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society," said Mark Potter, CTO at HPE and director, Hewlett Packard Labs. "The architecture we have unveiled can be applied to every computing category from intelligent edge devices to supercomputers."

Memory-Driven Computing puts memory, not the processor, at the centre of the computing architecture. By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, Memory-Driven Computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds, to deliver real-time intelligence.

The current prototype Machine has its memory spread across 40 physical nodes, each interconnected using a high-performance fabric protocol, with an optimised Linux-based operating system (OS) running on Cavium's ARM-based ThunderX2 processor. Photonics/optical communication links, including the new X1 photonics module, are online and operational. Software programming tools are designed to take full advantage of the abundant persistent memory.

"We think that this is a game-changer," said Kirk Bresniker, Chief Architect at HPE. "This will be the overarching arc for the next 10, 20, 30 years."

 

 

---


 

 

2nd May 2017

A neurotech future will require new human rights laws

New human rights laws are needed to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk, according to a paper from the Institute for Biomedical Ethics in Switzerland.

 


 

New human rights laws to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk have been proposed in the open access journal Life Sciences, Society and Policy. The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are:

1. The right to cognitive liberty
2. The right to mental privacy
3. The right to mental integrity, and
4. The right to psychological continuity.

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: "The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology."

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

 

 

 

Professor Roberto Andorno, co-author of the research, explained: "Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court; for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for 'neuromarketing', to understand consumer behaviour and elicit desired responses from customers. There are also tools such as 'brain decoders' which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom, which we sought to address with the development of four new human rights laws."

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to 'eavesdrop' on someone's mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. The authors state that, much as the genetic revolution did, the ongoing neurorevolution will force a reconceptualisation of human rights laws and even the creation of new ones.

Marcello Ienca added: "Science fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom."

---


 

 

8th April 2017

Major breakthrough in smart printed electronics

For the first time, researchers have fabricated printed transistors consisting entirely of two-dimensional nanomaterials.

 

Credit: AMBER, Trinity College Dublin

 

Scientists from Advanced Materials and BioEngineering Research (AMBER) at Trinity College, Dublin, have fabricated printed transistors consisting entirely of 2-D nanomaterials for the first time. These materials combine new electronic properties with the potential for low-cost production.

This breakthrough could enable a range of new, futuristic applications – such as food packaging that displays a digital countdown to warn of spoiling, labels that alert you when your wine is at its optimum temperature, or even a window pane that shows the day's forecast. The AMBER team's findings were published yesterday in the leading journal Science.

This discovery opens the path for industry, such as ICT and pharmaceutical firms, to cheaply print a host of electronic devices, from solar cells to LEDs, with applications from interactive smart food and drug labels, to next-generation banknote security and e-passports.

Prof. Jonathan Coleman, an investigator in AMBER and Trinity's School of Physics, commented: "In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging."

 

 

A scene from Steven Spielberg's 2002 sci-fi thriller, Minority Report.

 

"Printed electronic circuitry (made from the devices we have created) will allow consumer products to gather, process, display and transmit information – for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date," he continued. "We believe that 2-D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2-D nanomaterials have the capability to yield more cost effective and higher performance printed devices.

"However, while the last decade has underlined the potential of 2-D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important, because it shows that conducting, semiconducting and insulating 2-D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors, as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2-D nanosheets."

Led by Prof. Coleman, in collaboration with the groups of Prof. Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor), to form an all-printed, all-nanosheet, working transistor.

 

Credit: AMBER, Trinity College Dublin

 

Printable electronics have developed over the last 30 years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2-D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2-D nanomaterials is based on Prof. Coleman's scalable method of producing 2-D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick, but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2-D materials in a form that is easy to process into inks. Prof. Coleman's publication provides the potential to print circuitry at extremely low cost, which will facilitate a wide range of applications from animated posters to smart labels.

Prof. Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation during the next 10 years.

---


 

 

10th March 2017

IBM unveils roadmap for quantum computers

IBM has announced "IBM Q", an initiative to build commercially available universal quantum computing systems.

 

Credit: IBM Research

 

IBM has announced an industry-first initiative to build commercially available universal quantum computing systems. “IBM Q” systems and services will be delivered via the IBM Cloud platform. Current technologies that run on classical computers, such as Watson, can help to identify patterns and insights buried in vast amounts of existing data. By contrast, quantum computers will deliver solutions to important problems where patterns cannot be seen because the data doesn’t exist and the calculations needed to answer questions are too enormous to ever be processed by classical computers.

IBM is also launching a new Application Program Interface (API) for the “IBM Quantum Experience” enabling anyone with an Internet connection to use the quantum processor (via the Cloud) for running algorithms and experiments, working with individual quantum bits, and exploring tutorials and simulations of what might be possible with quantum computing. In the first half of 2017, IBM plans to release a full Software Development Kit (SDK) for users to build simple quantum applications and software programs.
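IBM's 2017 Quantum Experience API and SDK have since evolved into the open-source Qiskit framework. The sketch below uses a later Qiskit interface purely to illustrate the kind of small circuit a user could build and inspect; it is not the original 2017 API described in the article.

```python
# Minimal sketch of a two-qubit entangling circuit with Qiskit, the SDK that
# grew out of the IBM Quantum Experience programme. Illustrative only; this
# is a later interface, not the 2017 API described above.

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0 (Bell state)

probs = Statevector.from_instruction(qc).probabilities_dict()
print(probs)   # ~{'00': 0.5, '11': 0.5} - the hallmark of an entangled pair
```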

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

 

Credit: IBM Research

 

IBM intends to build IBM Q systems to expand the application domain of quantum computing. A key metric will be the power of a quantum computer expressed by the “Quantum Volume” – which includes the number of qubits, quality of operations, connectivity and parallelism. As a first step to increase Quantum Volume, IBM aims to build commercial IBM Q systems with around 50 qubits in the next few years to demonstrate capabilities beyond today’s classical systems, and plans to collaborate with key industry partners to develop applications that exploit the quantum speedup of the systems.

IBM Q systems will be designed to tackle problems that are currently too complex and exponential in nature for classical computing systems to handle. One of the first and most promising applications will be in the area of chemistry. Even for simple molecules like caffeine, the number of quantum states in the molecule can be astoundingly large; so complex that all the conventional computing memory and processing power scientists could ever build could not handle the problem.

IBM’s scientists have recently developed new techniques to efficiently explore the simulation of chemistry problems on quantum processors and experimental demonstrations of various molecules are in progress. In the future, the goal will be to scale to even more complex molecules and try to predict chemical properties with higher precision than possible with classical computers.

Future applications of quantum computing may include:

• Artificial Intelligence: making facets of artificial intelligence, such as machine learning, much more powerful when data sets are too large to handle conventionally – for example, searching images or video
• Cloud Security: making cloud computing more secure by using the laws of quantum physics to enhance the safety of private data
• Drug & Materials Discovery: untangling the complexity of molecular and chemical interactions, leading to the discovery of new medicines and materials
• Financial Services: finding new ways to model financial data and isolating key global risk factors to make better investments
• Supply Chain & Logistics: finding the optimal path across global systems of systems for ultra-efficient logistics and supply chains, such as optimising fleet operations for deliveries during the holiday season

 


 

“Classical computers are extraordinarily powerful and will continue to advance and underpin everything we do in business and society,” said Tom Rosamilia, senior vice president of IBM Systems. “But there are many problems that will never be penetrated by a classical computer. To create knowledge from much greater depths of complexity, we need a quantum computer. We envision IBM Q systems working in concert with our portfolio of classical high-performance systems to address problems that are currently unsolvable, but hold tremendous untapped value.”

IBM’s roadmap for scaling to practical quantum computers is based on a holistic approach to advancing all parts of the system. The company will leverage its deep expertise in superconducting qubits, complex high performance system integration, and scalable nanofabrication processes from the semiconductor industry to help advance the quantum mechanical capabilities. The developed software tools and environment will also leverage IBM’s world-class mathematicians, computer scientists, and software and system engineers.

"As Richard Feynman said in 1981, ‘…if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.’ This breakthrough technology has the potential to achieve transformational advancements in basic science, materials development, environmental and energy research, which are central to the missions of the Department of Energy (DOE),” said Steve Binkley, deputy director of science, US Department of Energy. “The DOE National Labs have always been at the forefront of new innovation, and we look forward to working with IBM to explore applications of their new quantum systems."

 

 

 

---


 

 

7th February 2017

New technology could triple sharpness of displays

Researchers have developed a new blue-phase liquid crystal that could triple the sharpness of TVs, computer screens, and other displays while also reducing the power needed to run the device.

 


 

An international team of researchers has developed a new blue-phase liquid crystal that could enable televisions, computer screens and other displays that pack more pixels into the same space while also reducing the power needed to run the device. The new liquid crystal is optimised for field-sequential colour liquid crystal displays (LCDs), a promising technology for next-generation displays.

"Today's Apple Retina displays have a resolution density of about 500 pixels per inch," said Shin-Tson Wu, who led the research team at the University of Central Florida's College of Optics and Photonics (CREOL). "With our new technology, a resolution density of 1500 pixels per inch could be achieved on the same sized screen. This is especially attractive for virtual reality headsets or augmented reality technology, which must achieve high resolution in a small screen to look sharp when placed close to our eyes."

Although the first blue-phase LCD prototype was demonstrated by Samsung in 2008, the technology still hasn't moved into production, because of problems with high operating voltage and slow capacitor charging time. To tackle these problems, Wu's research team worked with collaborators from liquid crystal manufacturer JNC Petrochemical Corporation in Japan and display manufacturer AU Optronics Corporation in Taiwan.

In the journal Optical Materials Express, the team explains how combining the new liquid crystal with a special performance-enhancing electrode structure can achieve light transmittance of 74 percent, with 15 volts per pixel – operational levels that could finally be practical for commercial applications.

"Field-sequential colour displays can be used to achieve the smaller pixels needed to increase resolution density," explains Yuge Huang, first author of the paper. "This is important, because the resolution density of today's technology is almost at its limit."

 


 

Today's LCD screens contain a thin layer of nematic liquid crystal through which the incoming white LED backlight is modulated. Thin-film transistors deliver the required voltage that controls light transmission in each pixel. The LCD subpixels contain red, green and blue filters that are used in combination to produce different colours to the human eye. The colour white is created by combining all three colours.

Blue-phase liquid crystal can be switched, or controlled, about 10 times faster than the nematic type. This sub-millisecond response time allows each LED colour (red, green and blue) to be sent through the liquid crystal at different times and eliminates the need for colour filters. The LED colours are switched so quickly that our eyes can integrate red, green and blue to form white.

"With colour filters, the red, green and blue light are all generated at the same time," said Wu. "However, with blue-phase liquid crystal, we can use one subpixel to make all three colours – but at different times. This converts space into time, a space-saving configuration of two-thirds, which triples the resolution density."

The blue-phase liquid crystal also triples the optical efficiency because the light doesn't have to pass through colour filters, which limit transmittance to about 30 percent. Another big advantage is that the displayed colour is more vivid because it comes directly from red, green and blue LEDs, which eliminates the colour crosstalk that occurs with conventional filters.

Wu's team worked with JNC to reduce the blue-phase liquid crystal's dielectric constant to a minimally acceptable range, to reduce the transistor charging time and get submillisecond optical response time. However, each pixel still needed slightly higher voltage than a single transistor could provide. To overcome this problem, the researchers implemented a protruded electrode structure that lets the electric field penetrate the liquid crystal more deeply. This lowered the voltage needed to drive each pixel while maintaining a high light transmittance.

"We achieved an operational voltage low enough to allow each pixel to be driven by a single transistor while also achieving a response time of less than a millisecond," said Haiwei Chen, a doctoral student in Wu's lab. "This delicate balance between operational voltage and response time is key for enabling field sequential colour displays."

Wu predicts that a working prototype could be available in the next year.

---


 

 

3rd February 2017

Quantum computer blueprint published

Researchers led by the University of Sussex have published the first practical blueprint for how to build a large-scale quantum computer.

 


 

An international team, led by a scientist from the University of Sussex, has published the first practical blueprint for how to build a quantum computer – the most powerful computer in the world. This huge leap forward towards creating a universal quantum computer is detailed in the influential journal Science Advances.

It has long been known that such a computer would revolutionise industry, science and commerce on a similar scale as the invention of ordinary computers. But this new work features the actual industrial blueprint to construct such a large-scale machine, more powerful in solving certain problems than any computer ever built before.

Once operational, the computer's capabilities mean it would have the potential to answer many questions in science; solve the most mind-boggling scientific and mathematical problems; unravel some of the deepest mysteries of space; create revolutionary new medicines; and solve problems that an ordinary computer would take billions of years to compute.

The work features a new invention permitting actual quantum bits to be transmitted between individual quantum computing modules, in order to obtain a fully modular, large-scale machine with nearly arbitrarily large computational processing power.

 


 

Previously, scientists had proposed using fibre optic connections to connect individual computer modules. The new invention introduces connections created by electric fields that allow charged atoms (ions) to be transported from one module to another. This new approach allows 100,000 times faster connection speeds between individual quantum computing modules compared to current state-of-the-art fibre link technology.

The new blueprint is the work of an international team of scientists from the University of Sussex (UK), Google (USA), Aarhus University (Denmark), RIKEN (Japan) and Siegen University (Germany).

Professor Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex, who has been leading this research, said: "For many years, people said that it was completely impossible to construct an actual quantum computer. With our work, we have not only shown that it can be done, but now we are delivering a nuts and bolts construction plan to build an actual large-scale machine."

Lead author Dr Bjoern Lekitsch, also from the University of Sussex, explains: "It was most important to us to highlight the substantial technical challenges as well as to provide practical engineering solutions."

As a next step, the team will construct a prototype quantum computer, based on this design, at the University.

 


 

This effort is part of the UK Government's £270m ($337m) plan to accelerate the introduction of quantum technologies into the marketplace. It makes use of a recent invention by the Sussex team that can replace billions of laser beams required for large-scale quantum computer operations with the simple application of voltages to a microchip.

"The availability of a universal quantum computer may have a fundamental impact on society as a whole," said Professor Hensinger. "Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application building on the UK's strengths in this ground-breaking technology. I am very excited to work with industry and government to make this happen."

The possibilities for what such a computer could solve, explain or develop are almost endless. Its size, however, will be anything but small. The machine is expected to fill a large building, consisting of sophisticated vacuum apparatus featuring integrated quantum computing silicon microchips that hold individual charged atoms (ions) using electric fields.

The blueprint to develop such computers has been made public to ensure scientists throughout the world can collaborate and further develop this awesome technology as well as to encourage industrial exploitation.

Note: All images courtesy of the University of Sussex

 

 

 

---


 

 

26th January 2017

Sci-fi holograms a step closer

Scientists at the Australian National University have invented a tiny device that creates the highest quality holographic images ever achieved, opening the door to 3D imaging technologies like those seen in Star Wars.

Lead researcher, Lei Wang, said the team created complex holographic images in infrared with the invention that could be developed with industry.

"As a child, I learned about the concept of holographic imaging from the Star Wars movies. It's really cool to be working on an invention that uses the principles of holography depicted in those movies," said Mr Wang, a PhD student at the ANU Research School of Physics and Engineering.

Holograms perform the most complex manipulations of light. They enable the storing and reproduction of all information carried by light in 3D. In contrast, standard photographs and computer monitors capture and display only a portion of 2D information.

 

Credit: Australian National University (ANU)

 

"While research in holography plays an important role in the development of futuristic displays and augmented reality devices, today we are working on many other applications such as ultra-thin and light-weight optical devices for cameras and satellites," said Wang.

Mr Wang explained that the device could replace bulky components to miniaturise cameras and save costs in astronomical missions by reducing the size and weight of optical systems on spacecraft. Co-lead researcher, Dr Sergey Kruk, said the device consisted of millions of tiny silicon "pillars", each up to 500 times thinner than a human hair.

"This new material is transparent, which means it loses minimal energy from the light, and it also does complex manipulations with light," said Dr Kruk from the ANU Research School of Physics and Engineering. "Our ability to structure materials at the nanoscale allows the device to achieve new optical properties that go beyond the properties of natural materials. The holograms we made demonstrate the strong potential of this technology to be used in a range of applications."

 

 

 

---


 

 

19th January 2017

China to build first exascale supercomputer prototype in 2017

The Chinese government has announced plans for the first prototype exascale supercomputer by the end of the year.

 


 

Last year, China unveiled the Sunway TaihuLight. With a peak performance of 125 petaflops, it became the world's fastest supercomputer, three times faster than the previous record holder, Tianhe-2, and five times faster than the Titan Cray XK7 at Oak Ridge National Laboratory in the US.

Hoping to extend its global lead even further, China has this week announced plans for the first exascale supercomputer. In other words, a machine capable of making 1,000,000,000,000,000,000 (a quintillion) calculations per second; an order of magnitude faster than Sunway TaihuLight. A prototype version is expected to be ready by the end of 2017, but will not be fully operational until 2020. This is, however, consistent with China's 13th and latest Five-Year Plan, which includes the goal of developing such a project during the period 2016-2020.
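The relationship between the quoted performance figures is simple arithmetic, sketched below; the Tianhe-1 figure is implied by the 200× comparison in the quote that follows, not stated directly.

```python
# Relating the performance figures quoted above and in the quote that follows.

exaflop = 1e18                         # one quintillion calculations per second
taihulight = 125e15                    # Sunway TaihuLight: 125 petaflops

print(f"Exascale vs TaihuLight: {exaflop / taihulight:.0f}x faster")          # 8x
print(f"Implied Tianhe-1 performance: {exaflop / 200 / 1e15:.0f} petaflops")  # ~5 PF
# The 8x jump over TaihuLight is what the article rounds to "an order of
# magnitude"; the 200x figure implies Tianhe-1 was a ~5 petaflop machine.
```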

“A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country’s first petaflop computer Tianhe-1, recognized as the world’s fastest in 2010,” said Zhang Ting, an engineer at Tianjin’s National Supercomputer Center, during an interview with state-run news agency, Xinhua.

Computing at the exascale will be a major milestone in software and hardware engineering. This generation of supercomputers – at some point between 2020 and 2030 – will be capable of simulating an entire human brain down to the level of individual neurons. In addition to revealing insights into neurological processes and mental disorders, this could also help in developing stronger AI.

---


 

 

6th January 2017

IBM predicts five innovations for the next five years

IBM has unveiled its annual "5 in 5" – a list of ground-breaking innovations that will change the way people work, live, and interact during the next five years.

 

 

 

In 1609, Galileo invented the telescope and saw our cosmos in an entirely new way. He proved the theory that the Earth and other planets in our Solar System revolve around the Sun, which until then was impossible to observe. IBM Research continues this work through the pursuit of new scientific instruments – whether physical devices or advanced software tools – designed to make what's invisible in our world visible, from the macroscopic level down to the nanoscale.

"The scientific community has a wonderful tradition of creating instruments to help us see the world in entirely new ways. For example, the microscope helped us see objects too small for the naked eye, and the thermometer helped us understand the temperature of the Earth and human body," said Dario Gil, vice president of science & solutions at IBM Research. "With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years."

Innovation in this area could dramatically improve farming, enhance energy efficiency, spot harmful pollution before it's too late, and prevent premature physical and mental decline. IBM's global team of scientists and researchers is steadily bringing these inventions from laboratories into the real world.

The IBM 5 in 5 is based on market and societal trends, as well as emerging technologies from research labs around the world that can make these transformations possible. Below are the five scientific instruments that will make the invisible visible in the next five years.

 


 

With AI, our words will open a window into our mental health

In five years, what we say and write will be used as indicators of our mental health and physical well-being. Patterns in our speech and writing analysed by new cognitive systems – including meaning, syntax and intonation – will provide tell-tale signs of early-stage developmental disorders, mental illness and degenerative neurological diseases to help doctors and patients better predict, monitor and track these conditions. What were once invisible signs will become clear signals of patients' likelihood of entering a certain mental state, or how well their treatment plan is working, complementing regular clinical visits with daily assessments from the comfort of their homes.

 

Credit: IBM

 

 

 

Hyperimaging and AI will give us superhero vision

In five years, new imaging devices using hyperimaging technology and AI will help us "see" beyond visible light, by combining multiple bands of the electromagnetic spectrum. This will reveal valuable insights or potential dangers that may otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and widely accessible in our daily lives, giving us the ability to perceive or see through objects and opaque environmental conditions anytime, anywhere.

A view of invisible, or only vaguely visible, objects around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, by using millimetre-wave imaging, a camera and other electromagnetic sensors, hyperimaging technology could help a vehicle see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead – as well as its distance and size. Cognitive computing technologies will reason about this data and recognise what might be a tipped-over garbage can versus a deer crossing the road, or a pothole that could result in a flat tyre.

 

Credit: Lenovo

 

 

 

Macroscopes will help us understand Earth's complexity in infinite detail

Instrumenting and collecting masses of data from every source in the physical world, big and small, and bringing it together will reveal comprehensive solutions for our food, water and energy needs. Today, the physical world only gives us a glimpse into our highly interconnected and complex ecosystem. We collect exabytes of data – but most of it is unorganised. In fact, an estimated 80 percent of a data scientist's time is spent scrubbing data instead of analysing and understanding what that data is trying to tell us.

Thanks to the Internet of Things (IoT), new sources of data are pouring in from millions of connected objects – from refrigerators, light bulbs and heart rate monitors, to remote sensors such as drones, cameras, weather stations, satellites and telescope arrays. There are already more than six billion connected devices generating tens of exabytes of data per month, with a growth rate of over 30% each year. After successfully digitising information, business transactions and social interactions, we are now in the process of digitising the physical world.
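Projecting those figures forward gives a sense of the scale behind the 2022 "macroscope" idea described next. Treating 30% as a steady annual rate, and "tens of exabytes" as roughly 20 EB per month, are simplifying assumptions for illustration.

```python
# Projecting the IoT figures quoted above out to 2022.
# Assumptions: a steady 30% annual growth rate (the article only says
# "over 30% each year") and ~20 EB/month for "tens of exabytes".

devices = 6e9                 # "more than six billion connected devices"
data_eb_per_month = 20        # assumed value for "tens of exabytes per month"
growth = 1.30

for year in range(2017, 2023):
    devices *= growth
    data_eb_per_month *= growth

print(f"Devices by 2022: ~{devices / 1e9:.0f} billion")          # ~29 billion
print(f"Data by 2022: ~{data_eb_per_month:.0f} EB per month")    # ~97 EB/month
```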

By 2022, we will use machine learning algorithms and software to organise the information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. IBM calls this idea a "macroscope" – but unlike microscopes to see the very small, or telescopes that can see far away, this will be a system to gather all of Earth's complex data together to analyse it for meaning.

By aggregating, organising and analysing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies.

 


 

 

 

Medical labs "on a chip" will serve as health detectives for tracing disease at the nanoscale

In five years, new medical labs on a chip will serve as nanotechnology health detectives – tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyse a disease that would normally be carried out in a full-scale biochemistry lab.

Lab-on-a-chip technology will eventually be packaged in a handheld device. This will allow people to quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids – such as saliva, tears, blood and sweat – sending this information securely into the cloud from the comfort of their home. There it will be combined with real-time health data from other IoT-enabled devices, like sleep monitors and smart watches, and analysed by AI systems for insights. Taken together, this data will give an in-depth view of our health, alerting us to the first signs of trouble – helping to stop disease before it progresses.

IBM scientists are developing nanotechnology that can separate and isolate bioparticles down to 20 nanometres in diameter, a scale that gives access to DNA, viruses, and exosomes. These particles could be analysed to potentially reveal the presence of disease even before we have symptoms.

 

Medical lab on a chip. Credit: IBM

 

 

 

Smart sensors will detect environmental pollution at the speed of light

In five years, new sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real-time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events.

IBM is researching silicon photonics – an emerging technology that transfers data by light, for computing literally at the speed of light. These chips could be embedded in a network of sensors on the ground or within infrastructure, or even fly on autonomous drones; generating insights that, combined with real-time wind data, satellite data, and other historical sources, will produce complex environmental models to detect the origin and quantity of pollutants as they occur.

 

Credit: IBM

 

---


 

 

5th December 2016

Construction of practical quantum computers radically simplified

Scientists at the University of Sussex have invented a ground-breaking new method that puts the construction of large-scale quantum computers within reach of current technology.

Quantum computers could solve certain problems – that would take the fastest supercomputer millions of years to calculate – in just a few milliseconds. They have the potential to create new materials and medicines, as well as solve long-standing scientific and financial problems.

Universal quantum computers can be built in principle, but the technological challenges are tremendous. The engineering required to build one is considered more difficult than manned space travel to Mars – until now.

Quantum computing experiments on a small scale using trapped ions (charged atoms) are carried out by aligning individual laser beams onto individual ions with each ion forming a quantum bit. However, a large-scale quantum computer would need billions of quantum bits, therefore requiring billions of precisely aligned lasers, one for each ion.

Instead, scientists at the University of Sussex have invented a simple method where voltages are applied to a quantum computer microchip (without having to align laser beams) – to the same effect. The team also succeeded in demonstrating the core building block of this new method with an impressively low error rate.

 

Credit: University of Sussex

 

"This development is a game changer for quantum computing making it accessible for industrial and government use," said Professor Winfried Hensinger, who heads the Ion Quantum Technology Group at the university and is director of the Sussex Centre for Quantum Technologies. "We will construct a large-scale quantum computer at Sussex making full use of this exciting new technology."

Quantum computers may revolutionise society in a similar way as the emergence of classical computers. "Developing this step-changing new technology has been a great adventure and it is absolutely amazing observing it actually work in the laboratory," said Hensinger's colleague, Dr Seb Weidt.

The Ion Quantum Technology Group forms part of the UK's National Quantum Technology Programme, a £270 million investment by the government to accelerate the introduction of quantum technologies into the marketplace.

A paper on this latest research, 'Trapped-ion quantum logic with global radiation fields', is published in the journal Physical Review Letters.

 

Professor Winfried Hensinger (left) and Dr Seb Weidt (right).

 

---


 

 

1st December 2016

Almost half of tech professionals expect their job to be automated within ten years

45% of technology professionals believe a significant part of their job will be automated by 2027 – rendering their current skills redundant. Changes in technology are so rapid that 94% say their career would be severely limited if they didn't teach themselves new technical skills.

That's according to the Harvey Nash Technology Survey 2017, representing the views of more than 3,200 technology professionals from 84 countries.

The chance of automation varies greatly with job role. Testers and IT Operations professionals are most likely to expect their job role to be significantly affected in the next decade (67% and 63% respectively). Chief Information Officers (CIOs), Vice Presidents of Information Technology (VP IT) and Programme Managers expect to be least affected (31% and 30% respectively).

David Savage, associate director, Harvey Nash UK, commented: "Through automation, it is possible that ten years from now the Technology team will be unrecognisable in today's terms. Even for those roles relatively unaffected directly by automation, there is a major indirect effect – anything up to half of their work colleagues may be machines by 2027."

 


 

In response to automation technology, professionals are prioritising learning over any other career development tactics. Self-learning is significantly more important to them than formal training or qualifications; only 12 per cent indicate "more training" as a key thing they want in their job and only 27% see gaining qualifications as a top priority for their career.

Despite the increase in automation, the survey reveals that technology professionals remain in high demand, with participants receiving at least seven headhunt calls in the last year. Software Engineers and Developers are most in demand, followed by Analytics / Big Data roles. Respondents expect the most important technologies in the next five years to be Artificial Intelligence, Augmented / Virtual Reality and Robotics, as well as Big Data, Cloud and the Internet of Things. Unsurprisingly, these are also the key areas cited in what are the "hot skills to learn".

"Technology careers are in a state of flux," says Simon Hindle, a director at Harvey Nash Switzerland. "On one side, technology is 'eating itself', with job roles increasingly being commoditised and automated. On the other side, new opportunities are being created, especially around Artificial Intelligence, Big Data and Automation. In this rapidly changing world, the winners will be the technology professionals who take responsibility for their own skills development, and continually ask: 'where am I adding value that no other person – or machine – can add?'"

 

future timeline automation technology 2027

 

Key highlights from the Harvey Nash Technology Survey 2017:

AI growth: The biggest technology growth area is expected to be Artificial Intelligence (AI). 89% of respondents expect it to be important to their company in five years' time, almost four times the current figure of 24%.

Big Data is big, but still unproven. 57% of organisations are implementing Big Data at least to some extent. For many, it is moving away from being an 'experiment' into something more core to their business; 21% say they are using it in a 'strategic way'. However, only three in ten organisations with a Big Data strategy are reporting success to date.

Immigration is key to the tech industry, and Brexit is a concern. The technology sector is overwhelmingly in favour of immigration; 73% believe it is critical to their country’s competitiveness. 33% of respondents to the survey were born outside the country in which they are currently working. Almost four in ten tech immigrants in the UK are from Europe, equating to one in ten of the entire tech working population in the UK. Moreover, UK workers make up over a fifth of the tech immigrant workforce of Ireland and Germany.

Where are all the women? This year's report reveals that 16% of respondents are women; not very different from the 13% who responded in 2013. The pace of change is glacial and – at this rate – it will take decades before parity is reached.

Tech people don't trust the cloud. Four in ten have little or no trust in how cloud companies are using their personal data, while five in ten have at least some worries about it. Trust in the cloud is affected by age (the older you are, the less you trust).

The end of the CIO role? Just 3% of those under 30 aspire to be a CIO; instead they would prefer to be a CTO (14% chose this), entrepreneur (19%) or CEO (11%). This suggests that the traditional role of the CIO is relatively unattractive to Gen Y.

Headhunters' radar: Software Engineers and Developers get headhunted the most, followed closely by Analytics / Big Data roles. At the same time, 75% believe recruiters are too focused on assessing technical skills, and overlook good people as a result.

 

cloud computing future timeline technology 2027

 

 

Supporting data from the survey (global averages):

 

Which technologies are important to your company now, and which do you expect to be important in five years' time?

job automation future of work

 

Agree or disagree? Within ten years, a significant part of my job that I currently perform will be automated.

job automation future of work

 

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

3rd November 2016

A virus-sized computing device

Researchers at the University of California, Santa Barbara, have designed a functional nanoscale computing element that could be packed into a space no bigger than 50 nanometres on any side.

 

red blood cell nanotechnology nanotech future timeline

 

In 1959, renowned physicist Richard Feynman, in his talk “There’s Plenty of Room at the Bottom”, spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule- and atom-sized world remained for years in the realm of science fiction. Then scientists and other creative thinkers began to realise Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometres on any side.

“Novel computing paradigms are needed to keep up with demand for faster, smaller and more energy-efficient devices,” said Gina Adam, a postdoctoral researcher at UCSB’s Department of Electrical and Computer Engineering and lead author of the paper “Optimised stateful material implication logic for three dimensional data manipulation” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

While efforts to shrink computing devices have been ongoing for decades – in fact, Feynman’s challenges as he presented them in 1959 have been met – scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. An 8-bit adder operating in 50 x 50 x 50 nanometre dimensions, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality.

“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” says Dmitri Strukov, UCSB professor of computer science.

 

nanoscale computer device nanotechnology future timeline

 

Key to this development is a system called material implication logic, combined with memristors – circuit elements whose resistance depends on the amount and direction of charge that has previously flowed through them. Unlike the conventional computing logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and locally. This greatly reduces the need for the components and space typically used to perform logic operations and to move data back and forth between processing and memory storage. The result of a computation is immediately stored in a memory element, which prevents data loss in the event of power outages – a critical function in autonomous systems such as robotics.
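
To see why this is attractive, here is a purely behavioural sketch of material implication (IMPLY) logic – not a model of the UCSB device physics. IMPLY overwrites a stored bit q with "NOT p OR q", and together with a reset operation it is functionally complete, so any logic function can be computed while each intermediate result stays in a memory element:

```python
# Behavioural sketch only: each "memristor" stores one bit as a high or low
# resistance, and the IMPLY operation computes its result in the storage
# element itself, so logic and memory coincide.

def imply(p: int, q: int) -> int:
    """Material implication: (NOT p) OR q. In a memristor circuit the result
    overwrites the q device; here we simply return it."""
    return 1 if (p == 0 or q == 1) else 0

def false_op() -> int:
    """RESET a working memristor to logical 0."""
    return 0

def nand(p: int, q: int) -> int:
    """NAND built from IMPLY + RESET, showing functional completeness."""
    s = false_op()      # working memristor initialised to 0
    s = imply(p, s)     # s = NOT p
    s = imply(q, s)     # s = (NOT q) OR (NOT p) = NAND(p, q)
    return s

for p in (0, 1):
    for q in (0, 1):
        print(p, q, "->", nand(p, q))   # prints the NAND truth table
```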

In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.

“Previous groups show that individual blocks can be scaled to very small dimensions,” said Strukov, who worked at technology company Hewlett-Packard’s labs when they ramped up development of memristors. By applying those results to his group’s developments, he said, the challenge could easily be met.

Memristors are being heavily researched in academia and in industry for their promising uses in future memory storage and neuromorphic computing. While implementations of material implication logic are rather exotic and not yet mainstream, uses for it could pop up any time, particularly in energy-scarce systems such as robotics and medical implants.

“Since this technology is still new, more research is needed to increase its reliability and lifetime and to demonstrate large-scale, 3-D circuits tightly packed in tens or hundreds of layers,” Adam said.

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

3rd November 2016

1,000-fold increase in 3-D scanning speed

Researchers at Penn State University report a 1,000-fold increase in the scanning speed for 3-D printing, using a space-charge-controlled KTN beam deflector with a large electro-optic effect.

 

3d printer scanner future timeline

 

A major technological advance in the field of high-speed beam-scanning devices has resulted in a speed boost of up to 1,000 times, according to researchers in Penn State's College of Engineering. Using a space-charge-controlled KTN beam deflector – a kind of crystal made of potassium tantalate and potassium niobate – with a large electro-optic effect, researchers have found that scanning at a much higher speed is possible.

"When the crystal materials are applied to an electric field, they generate uniform reflecting distributions that can deflect an incoming light beam," said Professor Shizhuo Yin, from the School of Electrical Engineering and Computer Science. "We conducted a systematic study on indications of speed and found out the phase transition of the electric field is one of the limiting factors."

To overcome this issue, Yin and his team of researchers eliminated the electric field-induced phase transition in a nanodisordered KTN crystal by making it work at a higher temperature. They not only went beyond the Curie temperature (at which certain materials lose their magnetic properties, replaced by induced magnetism), they went beyond the critical end point (in which a liquid and its vapour can co-exist).

 

3d printer scanner future timeline

Credit: Penn State

 

This increased the scanning speed from the microsecond range to the nanosecond range, and led to improved high-speed imaging, broadband optical communications and ultrafast laser display and printing. The researchers believe this could lead to a new generation of 3-D printers, with objects that once took an hour to print now taking a matter of seconds.
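
As a rough sense of scale – with the number of scan points assumed purely for illustration – the jump from microsecond to nanosecond dwell times is exactly what turns an hour-long job into a few seconds:

```python
# Back-of-the-envelope only: the point count and dwell times below are assumed
# purely to show how a 1,000x faster scanner changes the timescale.

points_per_object = 3.6e9      # hypothetical number of scan points in one object
microsecond_dwell = 1e-6       # seconds per point with a conventional scanner
nanosecond_dwell = 1e-9        # seconds per point with the KTN deflector

print(points_per_object * microsecond_dwell / 3600, "hours")   # ≈1 hour
print(points_per_object * nanosecond_dwell, "seconds")         # ≈3.6 seconds
```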

Yin said technology like this would be especially useful in the medical industry, where high-speed imaging will now be possible in real time. For example, eye specialists who use optical coherence tomography – a non-invasive test that uses light waves to take cross-sectional images of a patient's retina – could view a 3-D image of the retina during surgery, letting them see what needs to be corrected as the procedure is performed.

The group's findings are published in the Nature journal, Scientific Reports.

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

21st October 2016

AI milestone: a new system can match humans in conversational speech recognition

A new automated system that can achieve parity and even beat humans in conversational speech recognition has been announced by researchers at Microsoft.

 

AI conversational speech recognition future timeline

 

A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognises spoken words in a conversation as well as a real person does.

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9%, which is equal to that of professional transcriptionists and is the lowest ever recorded against the industry standard Switchboard speech recognition task.
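
For context, word error rate is the minimum number of word substitutions, deletions and insertions needed to turn the system's transcript into the reference transcript, divided by the number of words in the reference. A minimal implementation, with invented example sentences, might look like this:

```python
# Word error rate (WER): the minimum number of word substitutions, deletions
# and insertions needed to turn the hypothesis into the reference, divided by
# the number of reference words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # classic dynamic-programming edit distance, computed over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("switchboard calls are hard to transcribe",
          "switchboard calls are hard to describe"))   # 1 error in 6 words ≈ 0.167
```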

“We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 

speech recognition trend future timeline

 

In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

The team used Microsoft’s own Computational Network Toolkit – an open source, deep learning framework. This was able to process deep learning algorithms across multiple computers running specialised graphics processing units (GPUs), greatly improving speed and the quality of research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana.

“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

20th October 2016

Quantum computers: 10-fold boost in stability achieved

A team at Australia's University of New South Wales has created a new quantum bit that remains in a stable superposition for 10 times longer than previously achieved.

 

quantum computers stability breakthrough future timeline
Credit: Arne Laucht/UNSW

 

Australian engineers have created a new quantum bit which remains in a stable superposition for 10 times longer than previously achieved, dramatically expanding the time during which calculations could be performed in a silicon quantum computer.

The new quantum bit – known as a 'dressed qubit' – consists of the spin of a single atom in silicon merged with an electromagnetic field. It retains quantum information for much longer than an 'undressed' atom, opening up new avenues to build and operate the superpowerful quantum computers of the future.

"We have created a new quantum bit where the spin of a single electron is merged together with a strong electromagnetic field," comments Arne Laucht from the School of Electrical Engineering & Telecommunications at University of New South Wales (UNSW), lead author of the paper. "This quantum bit is more versatile and more long-lived than the electron alone, and will allow us to build more reliable quantum computers."

Building a quantum computer is a difficult and ambitious challenge, but it has the potential to deliver revolutionary tools for otherwise impossible calculations – such as the design of complex drugs and advanced materials, or the rapid search of massive, unsorted databases. Its speed and power lie in the fact that quantum systems can host multiple 'superpositions' of different initial states, which a quantum computer treats as inputs that are all processed at the same time.

"The greatest hurdle in using quantum objects for computing is to preserve their delicate superpositions long enough to allow us to perform useful calculations," said Andrea Morello, Program Manager in the Centre for Quantum Computation & Communication Technology at UNSW. "Our decade-long research program had already established the most long-lived quantum bit in the solid state, by encoding quantum information in the spin of a single phosphorus atom inside a silicon chip placed in a static magnetic field," he said.

What Laucht and colleagues did was push this further: "We have now implemented a new way to encode the information: we have subjected the atom to a very strong, continuously oscillating electromagnetic field at microwave frequencies, and thus we have 'redefined' the quantum bit as the orientation of the spin with respect to the microwave field."

 

quantum computers stability breakthrough future timeline
Tuning gates (red), microwave antenna (blue), and single electron transistor used for spin readout (yellow).
Credit: Guilherme Tosi & Arne Laucht/UNSW

 

The results are striking: since the electromagnetic field steadily oscillates at a very high frequency, any noise or disturbance at a different frequency results in a zero net effect. The UNSW researchers achieved an improvement by a factor of 10 in the time span during which a quantum superposition can be preserved, with a dephasing time of T2*=2.4 milliseconds.
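
As a simple way to visualise what a ten-fold longer dephasing time buys, the sketch below compares how much of a superposition survives after one millisecond. It assumes a plain exponential decay envelope (the true decay shape depends on the noise spectrum) and infers the 'undressed' value of roughly 0.24 ms from the quoted factor-of-10 improvement:

```python
import math

# Illustrative comparison of how much of a superposition survives after 1 ms,
# using a simple exponential envelope. The 'undressed' value of ~0.24 ms is
# inferred from the quoted factor-of-10 improvement, not stated in the paper.

T2_BARE = 0.24e-3      # seconds (inferred, assumed)
T2_DRESSED = 2.4e-3    # seconds (reported dephasing time)

def coherence(t, t2):
    """Fraction of the superposition remaining after idle time t."""
    return math.exp(-t / t2)

t = 1e-3  # one millisecond of idle time
print(f"undressed qubit: {coherence(t, T2_BARE):.3f}")      # ~0.016
print(f"dressed qubit:   {coherence(t, T2_DRESSED):.3f}")   # ~0.659
```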

"This new 'dressed qubit' can be controlled in a variety of ways that would be impractical with an 'undressed qubit'," adds Morello. "For example, it can be controlled by simply modulating the frequency of the microwave field, just like an FM radio. The 'undressed qubit' instead requires turning the amplitude of the control fields on and off, like an AM radio. In some sense, this is why the dressed qubit is more immune to noise: the quantum information is controlled by the frequency, which is rock-solid, whereas the amplitude can be more easily affected by external noise."

Since the device is built upon standard silicon technology, this result paves the way to the construction of powerful and reliable quantum processors based on the same fabrication process already used for today's computers. The UNSW team leads the world in developing silicon quantum computing, and Morello's team is part of a consortium that has struck an A$70 million deal between UNSW, researchers, business, and the Australian government to develop a prototype silicon quantum integrated circuit – a major step in building the world's first quantum computer in silicon.

A functional quantum computer would allow massive increases in speed and efficiency for certain computing tasks – even when compared with today's fastest silicon-based 'classical' computers. In a number of key areas – such as searching enormous databases, solving complicated sets of equations, and modelling atomic systems such as biological molecules or drugs – they would far surpass today's computers. They would also be extremely useful in the finance and healthcare industries, and for government, security and defence organisations.

Quantum computers could identify and develop new medicines by vastly accelerating the computer-aided design of pharmaceutical compounds (minimising lengthy trial and error testing), and develop new, lighter and stronger materials spanning consumer electronics to aircraft. They would also make possible new types of computing applications and solutions that are beyond our ability to foresee.

The UNSW study appears this week in the peer-reviewed journal, Nature Nanotechnology.

 

 

 

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

19th October 2016

Large-scale deployment of body-worn cameras for London police

The Metropolitan Police Service (MPS) is taking a global lead with what is believed to be the largest rollout of body-worn cameras by any police force in the world, aiming to enhance the service it gives to London.

 

body worn cameras london police
Credit: Met Police

 

This week sees the beginning of a large-scale deployment of Body Worn Video (BWV) which is being issued to more than 22,000 frontline police officers in the British capital. The Met Commissioner, Sir Bernard Hogan-Howe, was joined in Lewisham by the London Mayor, Sadiq Khan, to witness the rollout of the cameras, which follows a successful trial and wide-ranging public consultation and academic evaluation. Over the coming months, cameras will be issued to all 32 London boroughs and a number of frontline specialist roles, including overt firearms officers.

The devices have already shown they can bring speedier justice for victims. This has proved particularly successful in domestic abuse cases, which have seen an increase in earlier guilty pleas from offenders who know their actions have been recorded. The technology offers greater transparency for those in front of the camera as well as behind it: Londoners can feel reassured during their interactions with police, while officers are able to demonstrate professionalism in many challenging and contentious situations, such as the use of stop and search.

All footage recorded on BWV is subject to legal safeguards and guidance. Footage from the Axon Body Camera is automatically uploaded to secure servers once the device has been docked, where it can be flagged for use as evidence at court or in other proceedings. Video not retained as evidence or for another policing purpose is automatically deleted within 31 days. Members of the public who wish to view footage taken of them can request it in writing under freedom of information laws; the request must be made within 31 days, unless the footage has been marked as policing evidence and is therefore retained.
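
The retention rule described above is simple enough to express directly. The sketch below is a hypothetical illustration of that policy, not the Met's actual software, and the field names are invented:

```python
from datetime import date, timedelta

# Hypothetical sketch of the retention rule described above (field names are
# invented): footage flagged as evidence is kept; anything else is deleted
# once it is more than 31 days old.

RETENTION_DAYS = 31

def should_delete(recorded_on: date, flagged_as_evidence: bool, today: date) -> bool:
    if flagged_as_evidence:
        return False
    return today - recorded_on > timedelta(days=RETENTION_DAYS)

print(should_delete(date(2016, 10, 19), False, date(2016, 12, 1)))  # True – past 31 days
print(should_delete(date(2016, 10, 19), True, date(2016, 12, 1)))   # False – retained as evidence
```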

The cameras will be worn attached to the officer's uniform and will not be permanently recording. This ensures that officers' interactions with the public are not unnecessarily impeded. Members of the public will be informed as soon as practical that they are being recorded. Recording is highly visible in any case, with a flashing red circle in the centre of the camera and a frequent beeping noise while the device is activated.


 

 

Mayor of London, Sadiq Khan, said: "Body Worn Video is a huge step forward in bringing our capital's police force into the 21st century and encouraging trust and confidence in community policing. This technology is already helping drive down complaints against officers and making them more accountable, as well as helping to gather better evidence for swifter justice."

Metropolitan Police Commissioner, Sir Bernard Hogan-Howe: "Body Worn Video will support our officers in the many challenging situations they have to deal with, at the same time as building the public's confidence. Our experience of using cameras already shows that people are more likely to plead guilty when they know we have captured the incident on a camera. That then speeds up justice, puts offenders behind bars more quickly and most importantly protects potential victims. Video captures events in a way that can't be represented on paper in the same detail – a picture paints a thousand words, and it has been shown the mere presence of this type of video can often defuse potentially violent situations without the need for force to be used."

Last month, a study published by the University of Cambridge found that body-worn cameras led to a 93% drop in complaints made against police by the UK and US public, suggesting the cameras result in behavioural changes that ‘cool down’ potentially volatile encounters. A similar study in 2014 found that officers wearing cameras witnessed a 59% drop in their use-of-force, while complaints against them fell by 87% compared to the previous year.

The deployment of all 22,000 cameras in London will be managed in a phased approach and is expected to be complete by next summer.

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

12th October 2016

Scientists create the smallest ever transistor – just a single nanometre long

Researchers at the Department of Energy's Lawrence Berkeley National Laboratory have demonstrated a working 1 nanometre (nm) transistor.

 

1 nanometre transistor future timeline
Credit: Sujay Desai/UC Berkeley

 

For more than a decade, engineers have been eyeing the finish line in the race to shrink the size of components in integrated circuits. They knew that the laws of physics had set a 5-nanometre threshold on the size of transistor gates among conventional semiconductors, about one-third the size of high-end 14-nanometre-gate transistors currently on the market.

However, some laws are made to be broken, or at least challenged.

A research team led by faculty scientist Ali Javey at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has done just that by creating a transistor with a functioning 1-nanometre gate. For comparison, a strand of human hair is about 50,000 nanometres thick.

"We made the smallest transistor reported to date," said Javey, lead principal investigator of the Electronic Materials program in Berkeley Lab's Materials Science Division. "The gate length is considered a defining dimension of the transistor. We demonstrated a 1-nanometre-gate transistor, showing that with the choice of proper materials, there is a lot more room to shrink our electronics."

The key was to use carbon nanotubes and molybdenum disulfide (MoS2), an engine lubricant commonly sold in auto parts shops. MoS2 is part of a family of materials with immense potential for applications in LEDs, lasers, nanoscale transistors, solar cells, and more.

This breakthrough could help in keeping alive Intel co-founder Gordon Moore's prediction that the density of transistors on integrated circuits would double every two years, enabling the increased performance of our laptops, mobile phones, televisions, and other electronics.
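
Moore's prediction is, at heart, just a doubling rule, and the arithmetic is worth spelling out – the starting density below is a relative figure, used only to show the scaling:

```python
# Moore's prediction as a doubling rule. The starting density is a relative
# figure (1x), used only to show how quickly repeated doubling compounds.

def relative_density(years_elapsed: float, doubling_period: float = 2.0) -> float:
    """Transistor density relative to the starting point."""
    return 2 ** (years_elapsed / doubling_period)

print(relative_density(10))   # 32x after one decade
print(relative_density(20))   # 1024x after two decades
```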

 

moores law

 

"The semiconductor industry has long assumed that any gate below 5 nanometres wouldn't work – so anything below that was not even considered," said study lead author Sujay Desai, a graduate student in Javey's lab. "This research shows that sub-5-nanometre gates should not be discounted. Industry has been squeezing every last bit of capability out of silicon. By changing the material from silicon to MoS2, we can make a transistor with a gate that is just 1 nanometre in length, and operate it like a switch."

"This work demonstrated the shortest transistor ever," said Javey, who is also a UC Berkeley professor of electrical engineering and computer sciences. "However, it's a proof of concept. We have not yet packed these transistors onto a chip, and we haven't done this billions of times over. We also have not developed self-aligned fabrication schemes for reducing parasitic resistances in the device. But this work is important to show that we are no longer limited to a 5-nanometre gate for our transistors. Moore's Law can continue a while longer by proper engineering of the semiconductor material and device architecture."

His team's research is published this month in the peer-reviewed journal Science.

 

1 nanometre transistor future timeline
Credit: Qingxiao Wang/UT Dallas

 

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

21st September 2016

1 terabit per second achieved in optical fibre trial

Terabit-per-second data transmission using a novel modulation approach in optical fibre has been announced by researchers in Germany.

 

terabit internet connection speed 2016

 

Nokia Bell Labs, Deutsche Telekom T-Labs and the Technical University of Munich have achieved unprecedented transmission capacity and spectral efficiency in an optical communications field trial with a new modulation technique. This breakthrough could extend the capability of optical networks to meet surging data traffic demands in the future.

Their research has shown that the flexibility and performance of optical networks can be maximised when adjustable transmission rates are dynamically adapted to channel conditions and traffic demands. As part of the Safe and Secure European Routing (SASER) project, the experiment over a deployed optical fibre network achieved a net transmission rate of 1 terabit per second. This is close to the theoretical maximum information transfer rate of that channel, and thus approaches the Shannon Limit established in 1948 by Claude Shannon, the "father of information theory."

The trial of this novel modulation approach – known as Probabilistic Constellation Shaping (PCS) – uses quadrature amplitude modulation (QAM) formats to achieve higher transmission capacity over a given channel and significantly improve the spectral efficiency of optical communications. PCS modifies the probability with which constellation points (the alphabet of the transmission) are used. Traditionally, all constellation points are used with the same frequency. PCS instead uses high-amplitude constellation points less frequently than low-amplitude ones, sending signals that are, overall, more resilient to noise and other potential disruption. This allows the data transmission rate to be tailored to ideally fit the transmission channel, delivering up to 30% greater reach.
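
A toy numerical sketch of the idea is shown below. It assumes the Maxwell–Boltzmann-style weighting commonly used for constellation shaping in the research literature (the shaping parameter is arbitrary, not a value from this trial), and shows only the core effect: biasing towards low-amplitude points lowers the average transmit power of a 16-QAM constellation, headroom that can then be traded for noise resilience or reach:

```python
import math

# Toy illustration of probabilistic shaping on 16-QAM. The Maxwell–Boltzmann
# weighting and the shaping parameter below are assumptions for the sketch,
# not values from the trial.

levels = [-3, -1, 1, 3]
constellation = [(i, q) for i in levels for q in levels]        # 16-QAM points
energy = {p: p[0] ** 2 + p[1] ** 2 for p in constellation}      # |x|^2 per point

def distribution(nu):
    """p(x) proportional to exp(-nu * |x|^2); nu = 0 is the uniform case."""
    weights = {p: math.exp(-nu * e) for p, e in energy.items()}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

def avg_power(probs):
    return sum(probs[p] * energy[p] for p in probs)

print(avg_power(distribution(0.0)))    # 10.0 – every point equally likely
print(avg_power(distribution(0.08)))   # ~7.5 – low-amplitude points favoured
```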

This research is a key milestone in proving that PCS could be used in the future to improve optical communications. With 5G wireless technology forecast to emerge by 2020, today's optical transport systems must evolve to meet the exponentially growing demand for network data traffic, which is increasing at a cumulative annual rate of 100%. PCS is now part of this evolution, allowing increases in optical fibre flexibility and performance that will move data traffic faster and over greater distances without increasing the network complexity.

Marcus Weldon, President of Nokia Bell Labs and Nokia's Chief Technology Officer, commented: "Future optical networks not only need to support orders of magnitude higher capacity, but also the ability to dynamically adapt to channel conditions and traffic demand. Probabilistic Constellation Shaping offers great benefits to service providers and enterprises, by enabling optical networks to operate closer to the Shannon Limit to support massive datacentre interconnectivity and provide the flexibility and performance required for modern networking in the digital era."

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 

21st September 2016

World's first 1 terabyte SD card is announced

Hard drive manufacturer Western Digital has announced the first 1 terabyte capacity SD card at Photokina 2016.

 

worlds first 1 terabyte sd card announced 2016

 

Western Digital Corporation (WDC), which acquired SanDisk for US$19 billion in May, has unveiled a 1 terabyte (TB) SDXC card prototype at the world's leading trade fair for photo and video professionals. With ever-increasing demand for high resolution content, such as 4K and 8K, the company continues to push the boundaries of technology and to demonstrate the power of exponential growth.

"Showcasing the most advanced imaging technologies is truly exciting for us," said Dinesh Bahal, Vice President of Product Management. "16 years ago we introduced the first SanDisk 64MB SD card and today we are enabling capacities of 1TB. Over the years, our goal has remained the same: continue to innovate and set the pace for the imaging industry. The SanDisk 1TB SD card prototype represents another significant achievement as growth of high-resolution content and capacity-intensive applications such as virtual reality, video surveillance and 360 video, are progressing at astounding rates."

Since the introduction of a record-breaking 512GB memory card at Photokina 2014, Western Digital has proven it can nearly double the capacity in the same SD card form factor using proprietary technology. Higher capacity cards expand the possibilities for professional videographers and photographers, giving them even greater ability to create more of the highest quality content, without the interruption of changing cards.
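
Taking the figures quoted above at face value – 64 MB in 2000 to 1 TB in 2016, treating 1 TB as 1,000,000 MB – the implied doubling time works out at roughly 14 months, as a quick back-of-the-envelope check shows:

```python
import math

# Rough doubling-time arithmetic for the jump from 64 MB (2000) to 1 TB (2016),
# treating 1 TB as 1,000,000 MB.

growth_factor = 1_000_000 / 64          # ~15,625x more capacity
doublings = math.log2(growth_factor)    # ~13.9 doublings
years = 16

print(years / doublings)                # ~1.15 years per doubling (~14 months)
```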

"Just a few short years ago, the idea of a 1TB capacity point in an SD card seemed so futuristic," said Sam Nicholson, CEO of Stargate Studios and a member of the American Society of Cinematographers. "It's amazing that we're now at the point where it's becoming a reality. With growing demand for applications like VR, we can certainly use 1TB when we're out shooting continuous high-quality video. High-capacity cards allow us to capture more without interruption – streamlining our workflow, and eliminating the worry that we may miss a moment because we have to stop to swap out cards."

Western Digital will be demonstrating the SanDisk 1TB card prototype and showcasing its newest offerings at Photokina, Hall 02.1 Stand A014.

---

• Follow us on Twitter

• Follow us on Facebook

 

 

 

 
     
       
     
   