Blog » Computers & the Internet

 
     
 

14th August 2017

100 times faster Wi-Fi may soon be possible

Researchers at Brown University report the transmission of data through a terahertz multiplexer at 50 gigabits per second, which could lead to a new generation of ultra-fast Wi-Fi.

 

Credit: Mittleman lab / Brown University

 

Multiplexing, the ability to send multiple signals through a single channel, is a fundamental feature of any voice or data communication system. A team of researchers has now demonstrated a method for multiplexing data carried on terahertz waves – high-frequency radiation that could enable the next generation of ultra-high bandwidth wireless networks.

Writing in the journal Nature Communications, they describe the transmission of two real-time video signals through a terahertz multiplexer at an aggregate data rate of 50 gigabits per second, roughly 100 times the peak data rate of today's fastest cellular networks.

"We showed that we can transmit separate data streams on terahertz waves at very high speeds and with very low error rates," said Daniel Mittleman, professor in Brown's University's School of Engineering and the paper's corresponding author. "This is the first time anybody has characterised a terahertz multiplexing system using actual data, and our results show that our approach could be viable in future terahertz wireless networks."

Current voice and data networks use microwaves to carry signals wirelessly. But as with most forms of information technology, demand for data transmission is growing exponentially and is quickly becoming more than microwave networks can handle. Terahertz waves have higher frequencies than microwaves and therefore a much larger capacity to carry data. However, scientists have only just begun experimenting with terahertz frequencies and many of the basic components needed for such communication don't exist yet.

A system for multiplexing and demultiplexing (also known as mux/demux) is one of those basic components. It's a technology that allows one cable to carry multiple TV channels, or hundreds of users to access a Wi-Fi network.

 


 

The mux/demux approach Mittleman and his colleagues developed uses two metal plates placed parallel to each other to form a waveguide, as shown in the illustration below. One plate has a slit cut into it. When a terahertz wave travels through the waveguide, some of the radiation leaks out of the slit. The angle at which radiation beams escape is dependent upon the frequency of the wave.

"We can put several waves at several different frequencies – each of them carrying a data stream – into the waveguide, and they won't interfere with each other because they're different frequencies; that's multiplexing," Mittleman said. "Each of those frequencies leaks out of the slit at a different angle, separating the data streams; that's demultiplexing."

Due to the nature of terahertz waves, signals in terahertz communications networks will propagate as directional beams, not omnidirectional broadcasts as in existing wireless systems. This relationship between propagation angle and frequency is the key to enabling mux/demux in terahertz systems. A user at a particular location (and therefore at a particular angle from the multiplexing system) will communicate on a particular frequency.
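The frequency-to-angle behaviour described above can be illustrated in a few lines of code. The sketch below assumes the textbook relation sin θ = c/(2bf) for the lowest-order TE mode of a leaky parallel-plate waveguide with plate spacing b; the spacing and carrier frequencies are hypothetical values chosen for illustration, not parameters from the Brown experiment.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def emission_angle_deg(freq_hz: float, plate_spacing_m: float) -> float:
    """Angle (degrees, measured from the waveguide axis) at which a wave of the
    given frequency leaks out of the slit, assuming sin(theta) = c / (2 * b * f)
    for the lowest-order TE mode of a leaky parallel-plate waveguide."""
    s = C / (2.0 * plate_spacing_m * freq_hz)
    if s >= 1.0:
        raise ValueError("frequency is below the waveguide cutoff")
    return math.degrees(math.asin(s))

# Hypothetical example: 1 mm plate spacing and three terahertz-band carriers,
# each carrying an independent data stream (the multiplexed channels).
plate_spacing = 1e-3  # metres (illustrative value only)
for f_thz in (0.20, 0.26, 0.33):
    angle = emission_angle_deg(f_thz * 1e12, plate_spacing)
    print(f"{f_thz:.2f} THz channel exits the slit at ~{angle:.1f} degrees")
```

Each carrier frequency emerges at its own angle, which is exactly the property that lets one receiver position correspond to one data stream.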

 

Credit: Mittleman lab / Brown University

 

In 2015, Mittleman's team first published a paper describing their waveguide concept. For that initial work, they used a broadband terahertz light source to confirm that different frequencies did indeed emerge from the device at different angles. While that was an effective proof of concept, this latest work took the critical step of testing the device with real data.

The team encoded two high-definition television broadcasts onto separate terahertz waves and beamed them together through the multiplexer system. Their experiments showed that transmissions were error-free up to 10 gigabits per second, which is much faster than today's standard Wi-Fi speeds. Error rates increased somewhat when the speed was boosted to 50 gigabits per second (25 gigabits per channel), but were still well within the range that can be fixed using forward error correction, which is commonly used in today's communications networks.
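As a toy illustration of the principle behind forward error correction (adding redundancy so that occasional bit errors can be detected and corrected), the sketch below uses a simple three-fold repetition code with majority voting. Real networks use far more efficient codes; this example is purely conceptual.

```python
import random

def encode(bits):
    """Toy forward error correction: repeat every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

def noisy_channel(bits, p):
    """Flip each transmitted bit with probability p."""
    return [bit ^ (random.random() < p) for bit in bits]

random.seed(0)
message = [random.randint(0, 1) for _ in range(100_000)]
received = noisy_channel(encode(message), p=0.01)   # 1% raw bit-error rate
decoded = decode(received)
errors = sum(m != d for m, d in zip(message, decoded))
print(f"Residual errors after correction: {errors} in {len(message)} bits")
```

With a 1% raw error rate, the corrected stream retains only a few dozen errors instead of roughly a thousand, at the cost of tripling the transmitted data.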

The researchers plan to continue developing this and other terahertz components. Mittleman recently received a license from the FCC to perform outdoor tests at terahertz frequencies on the Brown University campus.

"We think that we have the highest-frequency license currently issued by the FCC, and we hope it's a sign that the agency is starting to think seriously about terahertz communication," he said. "Companies are going to be reluctant to develop terahertz technologies until there's a serious effort by regulators to allocate frequency bands for specific uses, so this is a step in the right direction."

---


28th June 2017

"Mind reading" technology can decode complex thoughts

New research builds on the pioneering use of machine learning algorithms with brain imaging technology to "mind read." For the first time, thoughts containing several concepts can be decoded.

 


 

Carnegie Mellon University scientists can now use brain activation patterns to identify complex thoughts, such as, "The witness shouted during the trial."

This latest research, led by CMU's Marcel Just, builds on the pioneering use of machine learning algorithms with brain imaging technology to "mind read." The findings indicate that the mind's building blocks for constructing complex thoughts are formed by the brain's various sub-systems and are not word-based. Published in Human Brain Mapping and funded by the Intelligence Advanced Research Projects Activity (IARPA), this study offers new evidence that the neural dimensions of concept representation are universal across people and languages.

"One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of 'bananas,' but 'I like to eat bananas in evening with my friends,'" said Just, a Professor of Psychology in the Dietrich College of Humanities and Social Sciences. "We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of."

Previous work by Just and his team showed that thoughts of familiar objects, like bananas or hammers, evoke activation patterns that involve the neural systems we use to deal with those objects. For example, how you interact with a banana involves how you hold it, how you bite it and what it looks like.

The new study demonstrates that the brain's coding of 240 complex events (sentences like the shouting-during-the-trial scenario) uses an alphabet of 42 meaning components, or neurally plausible semantic features, such as person, setting, size, social interaction and physical action. Each type of information is processed in a different brain system, which is how the brain also processes the information for objects. By measuring the activation in each brain system, the program can tell what types of thoughts are being contemplated.

 


 

For seven adult participants, the researchers used a computational model to assess how the brain activation patterns for 239 sentences corresponded to the neurally plausible semantic features that characterised each sentence. The program was then able to decode the features of the 240th, left-out sentence. They repeated this process, leaving out each of the 240 sentences in turn, in a procedure known as leave-one-out cross-validation.

The model was able to predict the features of the left-out sentence with 87% accuracy, despite never having been exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.
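A minimal sketch of this kind of leave-one-out decoding pipeline is shown below, using ridge regression on synthetic data. The array sizes, noise level and choice of regression model are illustrative assumptions rather than the authors' actual method or data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

n_sentences, n_voxels, n_features = 240, 500, 42    # 42 semantic features, as in the study
mapping = rng.normal(size=(n_features, n_voxels))   # synthetic feature-to-voxel mapping
features = rng.normal(size=(n_sentences, n_features))                               # per-sentence semantic features
activations = features @ mapping + 0.5 * rng.normal(size=(n_sentences, n_voxels))   # noisy "fMRI" patterns

scores = []
for train_idx, test_idx in LeaveOneOut().split(activations):
    model = Ridge(alpha=1.0)
    # Learn to predict the 42 semantic features from an activation pattern,
    # using every sentence except the one held out.
    model.fit(activations[train_idx], features[train_idx])
    predicted = model.predict(activations[test_idx])[0]
    actual = features[test_idx][0]
    scores.append(np.corrcoef(predicted, actual)[0, 1])

print(f"Mean correlation between predicted and actual features: {np.mean(scores):.2f}")
```

The same machinery can be run in reverse (features as input, activation as output) to predict the brain pattern for an unseen sentence.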

"Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence," Just said. "This advance makes it possible for the first time to decode thoughts containing several concepts. That's what most human thoughts are composed of."

He added, "A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain." CMU's Jing Wang and Vladimir L. Cherkassky also participated in the study.

Discovering how the brain decodes complex thoughts is one of the many brain research breakthroughs to happen at Carnegie Mellon. CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. Building on its strengths in biology, computer science, psychology, statistics and engineering, CMU launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviours.

---


 

 

21st June 2017

Entangled photons sent between space and Earth

Chinese scientists report the transmission of entangled photons between the orbiting satellite Micius and ground stations on Earth. More satellites could follow in the near future, with plans for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.

 

 

In a landmark study, Chinese scientists report the successful transmission of entangled photons between an orbiting satellite and Earth. Furthermore, whereas the previous record for entanglement distance was 100 km (62 miles), transmission over more than 1,200 km (746 miles) has now been achieved.

The distribution of quantum entanglement, especially across vast distances, holds major implications for quantum teleportation and encryption networks. Yet, efforts to entangle quantum particles – essentially "linking" them together over long distances – have been limited to 100 km or less, mainly because the entanglement is lost as they are transmitted along optical fibres, or through open space on land.

One way to overcome this issue is to break the line of transmission into smaller segments and repeatedly swap, purify and store quantum information along the optical fibre. Another approach to achieving global quantum networks is by making use of lasers and satellite technologies. Using a Chinese satellite called Micius, launched last year and equipped with specialised quantum tools, Juan Yin et al. demonstrated the latter feat. The Micius satellite was used to communicate with three ground stations across China, each up to 1,200 km apart.

The separation between the orbiting satellite and these ground stations varied from 500 to 2,000 km. A laser beam on the satellite was subjected to a beam splitter, which gave the beam two distinct polarised states. One of the split beams was used for transmission of entangled photons, while the other was used for photon receipt. In this way, entangled photons were received at the separate ground stations.

"It's a huge, major achievement," Thomas Jennewein, physicist at the University of Waterloo in Canada, told Science. "They started with this bold idea and managed to do it."

"The Chinese experiment is quite a remarkable technological achievement," said Artur Ekert, a professor of quantum physics at the University of Oxford, in an interview with Live Science. "When I proposed the entangled-based quantum key distribution back in 1991 when I was a student in Oxford, I did not expect it to be elevated to such heights."

One of the many challenges faced by the team was keeping the beams of photons focused precisely on the ground stations as the satellite hurtled through space at nearly 8 kilometres per second.

Quantum encryption, if successfully developed, could revolutionise communications. Information sent via this method would, in theory, be absolutely secure and practically impossible for hackers to intercept. If two people shared an encrypted quantum message, a third person would be unable to access it without changing the information in an unpredictable way. Further satellite tests are planned by China in the near future, with potential for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
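As a conceptual illustration of how shared entanglement can yield a secret key (the entanglement-based key distribution Ekert refers to), the toy simulation below keeps only the rounds in which both parties happen to measure in the same basis, where the outcomes of a maximally entangled pair are perfectly correlated. This is a sketch of the idea, not the protocol actually run with Micius.

```python
import random

random.seed(1)

def entangled_pair_outcomes(basis_a: str, basis_b: str):
    """Toy model of measuring a maximally entangled photon pair.

    When both parties choose the same measurement basis, the outcomes are
    perfectly correlated; otherwise they are treated as independent."""
    if basis_a == basis_b:
        bit = random.randint(0, 1)
        return bit, bit
    return random.randint(0, 1), random.randint(0, 1)

key_alice, key_bob = [], []
for _ in range(1000):                     # 1,000 entangled pairs distributed
    basis_a = random.choice("HD")         # each side picks a basis at random
    basis_b = random.choice("HD")
    out_a, out_b = entangled_pair_outcomes(basis_a, basis_b)
    if basis_a == basis_b:                # publicly compare bases, keep matches
        key_alice.append(out_a)
        key_bob.append(out_b)

print(f"Sifted key: {len(key_alice)} bits, identical at both ends: {key_alice == key_bob}")
```

In the real protocol, a subset of these correlated outcomes is also sacrificed to test for eavesdropping, since any interception disturbs the entanglement in a measurable way.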

---


 

 

20th June 2017

Graphene transistors could mean computers that are 1,000 times faster

Next-gen, carbon-based transistors would far outperform today's silicon versions, according to a new research paper from the University of Central Florida (UCF).

 

All-carbon spin logic gate. Credit: Nature Communications (2017). DOI: 10.1038/ncomms15635

 

Traditional silicon-based transistors revolutionised electronics with their ability to switch current on and off. By controlling the flow of current, the creation of smaller computers and other devices was possible. Over the decades, rapid gains in miniaturisation led to computers shrinking from room-sized monoliths, to wardrobe-sized machines, to desktops, laptops and eventually handheld smartphones – a trend underpinned by the steady doubling of transistor densities described by Moore's Law. In recent years, however, concerns have arisen that the rate of progress may have slowed, or could even be approaching a fundamental limit.

A solution may be on the horizon. This month, researchers have theorised a next-generation transistor based not on silicon but on a ribbon of graphene, a two-dimensional carbon material with the thickness of a single atom. Their findings – reported in Nature Communications – could have big implications for electronics, computing speeds and big data in the future. Graphene-based transistors may someday lead to computers that are 1,000 times faster and use a hundredth of today's power.

"If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street. To get there, we can't rely on silicon transistors anymore," said Ryan M. Gelfand, director of the NanoBioPhotonics Laboratory at UCF.

 

University of Central Florida Assistant Professor Ryan M. Gelfand

 

His team found that by applying a magnetic field to a graphene ribbon, they could change the resistance of current flowing through it. For this device, the magnetic field was controlled by increasing or decreasing the current through adjacent carbon nanotubes. The strength of the magnetic field thus governed the flow of current through this new kind of transistor, much like a valve controlling the flow of water through a pipe.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But clock speeds that rely on silicon transistors have been relatively stagnant for over a decade now, and are mostly still stuck in the 3 to 4 gigahertz range.
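To make the step from on/off switches to logic gates concrete, the sketch below composes Boolean gates and a half adder from an idealised NAND built out of two series switches. It is a generic illustration of transistor logic, not a model of the graphene spin-logic devices themselves.

```python
def nand(a: int, b: int) -> int:
    """Two switches in series pull the output low only when both are on:
    the basic NAND behaviour of a transistor pair."""
    return 0 if (a and b) else 1

# Every other Boolean gate can be composed from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """The seed of the arithmetic units a microprocessor is built from."""
    return xor_(a, b), and_(a, b)   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Whether the switch is a silicon transistor or a magnetically controlled graphene ribbon, the same compositions apply; what changes is how fast each switch can toggle and how the signal travels between gates.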

A cascading series of graphene transistor-based logic circuits could produce a massive jump, explains Gelfand, with clock speeds approaching the terahertz range – 1,000 times faster – because communication between each of the graphene nanoribbons would occur via electromagnetic waves, instead of the physical movement of electrons. They would also be smaller and far more efficient, allowing device-makers to shrink technology and squeeze in more functionality.

"The concept brings together an assortment of existing nanoscale technologies and combines them in a new way," said Dr. Joseph Friedman, assistant professor of electrical and computer engineering at UT Dallas, who collaborated with Gelfland and his team. While the concept is still in the early stages, Friedman said work towards a prototype all-carbon, cascaded spintronic computing system will continue in the NanoSpinCompute research laboratory.

---


 

 

19th June 2017

New VR headset will feature "human eye resolution"

Varjo, a tech startup based in Helsinki, Finland, has today unveiled a new VR/AR technology it has been developing in secret. It features nearly 70 times the pixel count of current-generation headsets, enough to match the resolution of the human eye.

 


 

Varjo ("Shadow" in Finnish) Technologies today announced it has emerged from stealth and is now demonstrating the world's first human eye-resolution headmounted display for upcoming Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) products. Designed for professional users and with graphics an order of magnitude beyond any currently shipping or announced head-mounted display, this major advancement will enable unprecedented levels of immersion and realism.

This breakthrough is accomplished by Varjo's patented technology that replicates how the human eye naturally works, creating a super-high-resolution image in the direction of the user's gaze. This is further combined with video-see-through (VST) technology for unparalleled AR/MR capabilities.

 


 

Codenamed "20|20" after perfect vision, Varjo's prototype is based on unique technology created by a team of optical scientists, creatives and developers who formerly occupied top positions at Microsoft, Nokia, Intel, Nvidia and Rovio. It will be shipping in Varjo-branded products specifically for professional users and applications starting in late Q4, 2017.

"Varjo's patented display innovation pushes VR technology 10 years ahead of the current state-of-the-art, where people can experience unprecedented resolution of VR and AR content limited only by the perception of the human eye itself," said Urho Konttori, CEO and founder. "This technology – along with Varjo VST – jump-starts the immersive computing age overnight: VR is no longer a curiosity, but now can be a professional tool for all industries."

 

 

---


 

 

4th June 2017

Intel reveals Core i9 – the next generation of high-end processors

Chipmaker Intel has announced a new generation of processors, including the Core i9 series, its first teraflop desktop CPUs.

 


 

Intel has this week introduced a new family of microprocessors – the Core X-series – which the company describes as the most scalable, accessible and powerful desktop platform ever developed. This includes a new Core i9 brand and Core i9 Extreme Edition, the first consumer desktop CPU with 18 cores and 36 threads. The company is also launching the Intel X299 chipset, which adds even more I/O and overclocking capabilities.

Given their extreme power and speed, this family of processors is being pitched at gamers, content creators, and overclocking enthusiasts. Intel expects to increase its presence in high-end desktop markets and believes that customers will pay premiums in exchange for higher performance. Prices for the i9 line-up will range from $999 to $1999.

Prior to this announcement, Intel's high-end desktop processors (known as Broadwell-E) came with six, eight or 10 core options. The Core X-series will include five Core i9 chips, with a minimum of 10 cores and the top-end i9-7980XE featuring a massive 18 cores. A major update has also been announced for Intel's Turbo Boost Max Technology 3.0, which will identify the two top cores and direct critical workloads to those, for a big jump in single- and multithreaded performance.

The Core i9-7980XE will be the first Intel consumer processor to exceed a teraflop of computing power, meaning it can perform a trillion floating-point operations every second. To put this in perspective, that is equal to ASCI Red, which reigned as the world's most powerful supercomputer from 1997 until the year 2000. All Core i9 chips will have 3.3GHz base clock speeds, with up to 4.5GHz using Turbo Boost 3.0, and up to 44 PCIe lanes.
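As a rough sanity check on the teraflop figure, peak floating-point throughput is commonly estimated as cores × clock × FLOPs per core per cycle. The clock speed and per-cycle figures below are illustrative assumptions, not Intel's official calculation.

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Peak throughput in GFLOPS = cores * clock (GHz) * FLOPs per core per cycle."""
    return cores * clock_ghz * flops_per_cycle

# Illustrative assumptions only: 18 cores, a ~2 GHz sustained all-core clock under
# heavy vector load, and either 16 or 32 double-precision FLOPs per cycle per core
# (one or two 512-bit fused multiply-add units).
for flops_per_cycle in (16, 32):
    total = peak_gflops(cores=18, clock_ghz=2.0, flops_per_cycle=flops_per_cycle)
    print(f"{flops_per_cycle} FLOPs/cycle -> {total / 1000:.2f} TFLOPS peak")
```

Under the more generous assumption, the chip lands just above one teraflop of double-precision throughput, which is consistent with the claim above.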

"The possibilities with this type of performance are endless," says Gregory Bryant, a senior vice president, in a blog post. "Content creators can have fast image rendering, video encoding, audio production and real-time preview – all running in parallel seamlessly so they spend less time waiting and more time creating. Gamers can play their favourite game, while they also stream, record and encode their gameplay, and share on social media – all while surrounded by multiple screens for a 12K experience with up to four discrete graphics cards."

In addition to Core i9, there are also three new i7 chips and an i5, including the quad-core i5-7640X and i7 models in 4, 6 and 8-core variants. Prices will range from $242 for the i5, to $599 for the i7-7820X.

 


 

---


 

 

17th May 2017

World's largest single-memory computer is unveiled

Hewlett Packard Enterprise (HPE) has revealed "The Machine" – a new computing architecture with 160 terabytes of memory.

 

Credit: HPE

 

Hewlett Packard Enterprise (HPE) has introduced the world's largest single-memory computer. Known simply as "The Machine", it is the largest R&D program in the history of the company, and is aimed at delivering a new paradigm called Memory-Driven Computing – an architecture custom-built for the big data era.

"The secrets to the next great scientific breakthrough, industry-changing innovation or life-altering technology hide in plain sight behind the mountains of data we create every day," explained Meg Whitman, CEO of HPE. "To realise this promise, we can't rely on the technologies of the past. We need a computer built for the big data era."

The prototype unveiled this week features a staggering 160 terabytes (TB) of memory, enough to simultaneously work with the data held in every book in the Library of Congress five times over – or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size within a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.

Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory – 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.
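To put these scales side by side, the short sketch below converts between the units involved (using decimal prefixes); the figure of roughly 1 MB of text per book is an illustrative assumption.

```python
TB = 10**12   # terabyte, in bytes
EB = 10**18   # exabyte
YB = 10**24   # yottabyte

prototype = 160 * TB
print(f"Prototype memory: {prototype:.2e} bytes ({prototype / EB:.6f} EB)")

# Illustrative assumption: ~1 MB of text per book.
print(f"Books held at 1 MB per book: {prototype / 10**6:,.0f}")

architectural_limit = 4096 * YB
print(f"Architectural limit is {architectural_limit / prototype:.1e} times the prototype")
```

The 1 MB-per-book assumption reproduces the article's figure of roughly 160 million books; the jump from the 160 TB prototype to 4,096 yottabytes is about thirteen orders of magnitude.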

With such a vast amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google's autonomous vehicles and every data set from space exploration, all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.

 


 

"We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society," said Mark Potter, CTO at HPE and director, Hewlett Packard Labs. "The architecture we have unveiled can be applied to every computing category from intelligent edge devices to supercomputers."

Memory-Driven Computing puts memory, not the processor, at the centre of the computing architecture. By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, Memory-Driven Computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds, to deliver real-time intelligence.

The current prototype Machine has its memory spread across 40 physical nodes, each interconnected using a high-performance fabric protocol, with an optimised Linux-based operating system (OS) running on Cavium's ARM-based ThunderX2 processor. Photonics/optical communication links, including the new X1 photonics module, are online and operational. Software programming tools are designed to take full advantage of the abundant persistent memory.

"We think that this is a game-changer," said Kirk Bresniker, Chief Architect at HPE. "This will be the overarching arc for the next 10, 20, 30 years."

 

 

---


 

 

2nd May 2017

A neurotech future will require new human rights laws

New human rights laws are needed to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk, according to a paper from the Institute for Biomedical Ethics in Switzerland.

 


 

New human rights laws to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk have been proposed in the open access journal Life Sciences, Society and Policy. The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are:

1. The right to cognitive liberty
2. The right to mental privacy
3. The right to mental integrity, and
4. The right to psychological continuity.

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: "The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology."

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

 

 

 

Professor Roberto Andorno, co-author of the research, explained: "Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court; for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for 'neuromarketing', to understand consumer behaviour and elicit desired responses from customers. There are also tools such as 'brain decoders' which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom, which we sought to address with the development of four new human rights laws."

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to 'eavesdrop' on someone's mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualisation of human rights laws and even the creation of new ones.

Marcello Ienca added: "Science fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom."

---


 

 

8th April 2017

Major breakthrough in smart printed electronics

For the first time, researchers have fabricated printed transistors consisting entirely of two-dimensional nanomaterials.

 

Credit: AMBER, Trinity College Dublin

 

Scientists from Advanced Materials and BioEngineering Research (AMBER) at Trinity College, Dublin, have fabricated printed transistors consisting entirely of 2-D nanomaterials for the first time. These materials combine new electronic properties with the potential for low-cost production.

This breakthrough could enable a range of new, futuristic applications – such as food packaging that displays a digital countdown to warn of spoiling, labels that alert you when your wine is at its optimum temperature, or even a window pane that shows the day's forecast. The AMBER team's findings were published yesterday in the leading journal Science.

This discovery opens the path for industry, such as ICT and pharmaceutical firms, to cheaply print a host of electronic devices, from solar cells to LEDs, with applications from interactive smart food and drug labels, to next-generation banknote security and e-passports.

Prof. Jonathan Coleman, an investigator in AMBER and Trinity's School of Physics, commented: "In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging."

 

 

A scene from Steven Spielberg's 2002 sci-fi thriller, Minority Report.

 

"Printed electronic circuitry (made from the devices we have created) will allow consumer products to gather, process, display and transmit information – for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date," he continued. "We believe that 2-D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2-D nanomaterials have the capability to yield more cost effective and higher performance printed devices.

"However, while the last decade has underlined the potential of 2-D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important, because it shows that conducting, semiconducting and insulating 2-D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors, as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2-D nanosheets."

Led by Prof. Coleman, in collaboration with the groups of Prof. Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials – tungsten diselenide as the channel and boron nitride as the separator (two important parts of a transistor) – to form an all-printed, all-nanosheet, working transistor.

 

Credit: AMBER, Trinity College Dublin

 

Printable electronics have developed over the last 30 years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2-D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2-D nanomaterials is based on Prof. Coleman's scalable method of producing 2-D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick, but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2-D materials in a form that is easy to process into inks. Prof. Coleman's publication provides the potential to print circuitry at extremely low cost, which will facilitate a wide range of applications from animated posters to smart labels.

Prof. Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation during the next 10 years.

---


 

 

10th March 2017

IBM unveils roadmap for quantum computers

IBM has announced "IBM Q", an initiative to build commercially available universal quantum computing systems.

 

Credit: IBM Research

 

IBM has announced an industry-first initiative to build commercially available universal quantum computing systems. “IBM Q” systems and services will be delivered via the IBM Cloud platform. Current technologies that run on classical computers, such as Watson, can help to identify patterns and insights buried in vast amounts of existing data. By contrast, quantum computers will deliver solutions to important problems where patterns cannot be seen because the data doesn’t exist and the calculations needed to answer questions are too enormous to ever be processed by classical computers.

IBM is also launching a new Application Program Interface (API) for the “IBM Quantum Experience” enabling anyone with an Internet connection to use the quantum processor (via the Cloud) for running algorithms and experiments, working with individual quantum bits, and exploring tutorials and simulations of what might be possible with quantum computing. In the first half of 2017, IBM plans to release a full Software Development Kit (SDK) for users to build simple quantum applications and software programs.
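For a flavour of the sort of small experiment such cloud access makes possible, the sketch below simulates the canonical two-qubit starting point, preparing a Bell state, using plain NumPy linear algebra. It does not use IBM's API and is purely illustrative.

```python
import numpy as np

# Single-qubit gates and the two-qubit CNOT, as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)    # start in |00>

# Hadamard on the first qubit, then CNOT (first qubit controls the second),
# producing the entangled Bell state (|00> + |11>) / sqrt(2).
state = CNOT @ (np.kron(H, I) @ state)

probabilities = np.abs(state) ** 2
for basis, p in zip(("00", "01", "10", "11"), probabilities):
    print(f"P(|{basis}>) = {p:.2f}")
```

Running the equivalent two-gate circuit on real hardware through a cloud API, and comparing the measured statistics against this ideal 50/50 split between |00> and |11>, is the kind of exercise the Quantum Experience is designed to make routine.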

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

 

Credit: IBM Research

 

IBM intends to build IBM Q systems to expand the application domain of quantum computing. A key metric will be the power of a quantum computer expressed by the “Quantum Volume” – which includes the number of qubits, quality of operations, connectivity and parallelism. As a first step to increase Quantum Volume, IBM aims to build commercial IBM Q systems with around 50 qubits in the next few years to demonstrate capabilities beyond today’s classical systems, and plans to collaborate with key industry partners to develop applications that exploit the quantum speedup of the systems.

IBM Q systems will be designed to tackle problems that are currently too complex and exponential in nature for classical computing systems to handle. One of the first and most promising applications will be in the area of chemistry. Even for simple molecules like caffeine, the number of quantum states in the molecule can be astoundingly large; so complex that all the conventional computing memory and processing power scientists could ever build could not handle the problem.

IBM’s scientists have recently developed new techniques to efficiently explore the simulation of chemistry problems on quantum processors and experimental demonstrations of various molecules are in progress. In the future, the goal will be to scale to even more complex molecules and try to predict chemical properties with higher precision than possible with classical computers.

Future applications of quantum computing may include:

• Artificial Intelligence: Making facets of artificial intelligence such as machine learning much more powerful when data sets are very large, such as in searching images or video
• Cloud Security: Making cloud computing more secure by using the laws of quantum physics to enhance private data safety
• Drug & Materials Discovery: Untangling the complexity of molecular and chemical interactions, leading to the discovery of new medicines and materials
• Financial Services: Finding new ways to model financial data and isolating key global risk factors to make better investments
• Supply Chain & Logistics: Finding the optimal path across global systems of systems for ultra-efficient logistics and supply chains, such as optimising fleet operations for deliveries during the holiday season

 


 

“Classical computers are extraordinarily powerful and will continue to advance and underpin everything we do in business and society,” said Tom Rosamilia, senior vice president of IBM Systems. “But there are many problems that will never be penetrated by a classical computer. To create knowledge from much greater depths of complexity, we need a quantum computer. We envision IBM Q systems working in concert with our portfolio of classical high-performance systems to address problems that are currently unsolvable, but hold tremendous untapped value.”

IBM’s roadmap for scaling to practical quantum computers is based on a holistic approach to advancing all parts of the system. The company will leverage its deep expertise in superconducting qubits, complex high performance system integration, and scalable nanofabrication processes from the semiconductor industry to help advance the quantum mechanical capabilities. The developed software tools and environment will also leverage IBM’s world-class mathematicians, computer scientists, and software and system engineers.

"As Richard Feynman said in 1981, ‘…if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.’ This breakthrough technology has the potential to achieve transformational advancements in basic science, materials development, environmental and energy research, which are central to the missions of the Department of Energy (DOE),” said Steve Binkley, deputy director of science, US Department of Energy. “The DOE National Labs have always been at the forefront of new innovation, and we look forward to working with IBM to explore applications of their new quantum systems."

 

 

 

---


 

 

7th February 2017

New technology could triple sharpness of displays

Researchers have developed a new blue-phase liquid crystal that could triple the sharpness of TVs, computer screens, and other displays while also reducing the power needed to run the device.

 


 

An international team of researchers has developed a new blue-phase liquid crystal that could enable televisions, computer screens and other displays that pack more pixels into the same space while also reducing the power needed to run the device. The new liquid crystal is optimised for field-sequential colour liquid crystal displays (LCDs), a promising technology for next-generation displays.

"Today's Apple Retina displays have a resolution density of about 500 pixels per inch," said Shin-Tson Wu, who led the research team at the University of Central Florida's College of Optics and Photonics (CREOL). "With our new technology, a resolution density of 1500 pixels per inch could be achieved on the same sized screen. This is especially attractive for virtual reality headsets or augmented reality technology, which must achieve high resolution in a small screen to look sharp when placed close to our eyes."

Although the first blue-phase LCD prototype was demonstrated by Samsung in 2008, the technology still hasn't moved into production, because of problems with high operation voltage and slow capacitor charging time. To tackle these problems, Wu's research team worked with collaborators from liquid crystal manufacturer JNC Petrochemical Corporation in Japan and display manufacturer AU Optronics Corporation in Taiwan.

In the journal Optical Materials Express, the team explains how combining the new liquid crystal with a special performance-enhancing electrode structure can achieve light transmittance of 74 percent, with 15 volts per pixel – operational levels that could finally be practical for commercial applications.

"Field-sequential colour displays can be used to achieve the smaller pixels needed to increase resolution density," explains Yuge Huang, first author of the paper. "This is important, because the resolution density of today's technology is almost at its limit."

 


 

Today's LCD screens contain a thin layer of nematic liquid crystal through which the incoming white LED backlight is modulated. Thin-film transistors deliver the required voltage that controls light transmission in each pixel. The LCD subpixels contain red, green and blue filters that are used in combination to produce different colours to the human eye. The colour white is created by combining all three colours.

Blue-phase liquid crystal can be switched, or controlled, about 10 times faster than the nematic type. This sub-millisecond response time allows each LED colour (red, green and blue) to be sent through the liquid crystal at different times and eliminates the need for colour filters. The LED colours are switched so quickly that our eyes can integrate red, green and blue to form white.
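To see why sub-millisecond switching matters, consider the timing budget of a field-sequential display: each frame must be divided into three colour subframes, and the liquid crystal has to finish settling within every one of them. The refresh rates below are illustrative.

```python
def subframe_budget_ms(frame_rate_hz: float, colours: int = 3) -> float:
    """Time available for each colour subframe in a field-sequential display."""
    return 1000.0 / (frame_rate_hz * colours)

for rate_hz in (60, 120, 240):
    print(f"{rate_hz} Hz refresh -> {subframe_budget_ms(rate_hz):.2f} ms per colour subframe")

# A blue-phase response below ~1 ms leaves most of each subframe for actually
# showing the image, even at 240 Hz (about 1.39 ms per subframe), whereas a
# conventional nematic response of several milliseconds would not fit.
```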

"With colour filters, the red, green and blue light are all generated at the same time," said Wu. "However, with blue-phase liquid crystal, we can use one subpixel to make all three colours – but at different times. This converts space into time, a space-saving configuration of two-thirds, which triples the resolution density."

The blue-phase liquid crystal also triples the optical efficiency because the light doesn't have to pass through colour filters, which limit transmittance to about 30 percent. Another big advantage is that the displayed colour is more vivid because it comes directly from red, green and blue LEDs, which eliminates the colour crosstalk that occurs with conventional filters.

Wu's team worked with JNC to reduce the blue-phase liquid crystal's dielectric constant to a minimally acceptable range, to reduce the transistor charging time and get submillisecond optical response time. However, each pixel still needed slightly higher voltage than a single transistor could provide. To overcome this problem, the researchers implemented a protruded electrode structure that lets the electric field penetrate the liquid crystal more deeply. This lowered the voltage needed to drive each pixel while maintaining a high light transmittance.

"We achieved an operational voltage low enough to allow each pixel to be driven by a single transistor while also achieving a response time of less than a millisecond," said Haiwei Chen, a doctoral student in Wu's lab. "This delicate balance between operational voltage and response time is key for enabling field sequential colour displays."

Wu predicts that a working prototype could be available in the next year.

---


 

 
     
       
     
   