The FDA has approved "Abilify MyCite" – the first drug in the U.S. with a digital ingestion tracking system. This can record when the medication was taken, via a sensor embedded in the pill.
Credit: Proteus Digital Health
The U.S. Food and Drug Administration (FDA) has this week approved the first drug in the U.S. with a digital ingestion tracking system. Abilify MyCite features an ingestible sensor embedded in the pill that records when medication is taken. The product is approved for the treatment of schizophrenia, the acute treatment of manic and mixed episodes associated with bipolar I disorder, and as an add-on treatment for depression in adults.
The system works by sending a message from the pill's sensor to a wearable patch. The patch transmits the information to a mobile application so that patients can track the ingestion of medication on their smartphones. The patch also records activity levels, sleep patterns, steps taken and heart rate. Patients can permit their doctor and up to four other people, including family members, to access the data through a web-based portal.
The sensor itself is made of silicon, copper and magnesium. It generates an electrical signal when it comes into contact with stomach acid; the sensor then passes through the body naturally.
"Being able to track ingestion of medications prescribed for mental illness may be useful for some patients," said Mitchell Mathis, M.D., a director at the FDA's Center for Drug Evaluation and Research. "The FDA supports the development and use of new technology in prescription drugs and is committed to working with companies to understand how technology might benefit patients and prescribers."
Credit: Proteus Digital Health
Abilify MyCite was developed in a collaboration between Japanese pharmaceutical company Otsuka (which makes the oral aripiprazole tablets) and California-based Proteus (which created the sensor). The pill could help to reduce the problem of non-adherence to prescriptions, which is estimated to cost $100bn in the U.S. each year. It could be particularly useful for elderly people with failing memories, helping to ensure they take their drugs properly.
"The approval of Abilify MyCite – the first digital medicine system – means that for the first time in my years of experience as a psychiatrist, there is an innovative way to provide individuals with serious mental illness, and selected members of their families and care teams, with information on objective medication taking patterns to help inform the patient's illness management and personalised treatment plan. This allows the opportunity for an open dialogue with the patient," said John Kane, MD, Vice President for Behavioural Health Services at Northwell Health, New York. "Until now, pharmacologic therapy for serious mental illness has been missing a systematic approach to objectively track and signal that a patient has taken their drug."
"The time is right for the category of Digital Medicines to be available to patients with serious mental illness," said Andrew Thompson, CEO of Proteus. "Consumers already manage important tasks like banking, shopping, and communicating with friends and family by using their smartphones, as they go about their daily lives. With this FDA approval, we can help enable individuals with serious mental illness to engage with their care team about their treatment plan in a new way."
Proteus raised around $400 million from investors to bring its sensor to commercial use. Otsuka has not yet revealed a price for Abilify MyCite, which will be rolled out during 2018, initially to a limited number of health plans. This approval is likely to result in many more "digital pills" for other conditions besides mental health. The FDA is planning to hire more staff with a "deep understanding" of software development in relation to medical devices and engage with entrepreneurs on new guidelines.
Qualcomm has announced the first 5G mobile connection, with a data transfer speed of more than 1 Gbps.
Qualcomm Technologies Inc. has successfully achieved a 5G data connection on a 5G modem chipset for mobile devices. The Qualcomm Snapdragon X50 5G modem chipset delivered gigabit speeds and a data connection in the 28GHz mmWave radio frequency band, demonstrating the next generation of cellular technology for businesses and consumers. Additionally, Qualcomm previewed its first 5G smartphone reference design for the testing and optimisation of 5G within the power and form-factor constraints of a handheld phone.
"Achieving the world's first announced 5G data connection is truly a testament to Qualcomm's leadership and extensive expertise in mobile connectivity," said Cristiano Amon, executive vice president of Qualcomm Technologies, Inc. "This major milestone and our 5G smartphone reference design showcase how Qualcomm Technologies is driving 5G NR in mobile devices to enhance mobile broadband experiences for consumers around the world."
The 5G demonstration took place in Qualcomm Technologies' laboratories in San Diego. 5G NR mmWave is a new frontier for mobile, now made possible through the 5G NR standard, and is expected to usher in the next generation of user experiences and to significantly increase network capacity. It will support the emerging "Internet of Things" (IoT), providing widespread automation and connectivity of devices, systems and services.
Qualcomm has been instrumental in accelerating the commercialisation of 5G NR, through many key contributions – including foundational research and inventions, standards-setting in 3GPP, designing sub-6 GHz and mmWave 5G NR prototype systems, interoperability and over-the-air trials with major global operators and infrastructure vendors, and developing integrated circuit products for mobile devices. The Snapdragon X50 5G NR modem family is expected to support commercial launches of 5G smartphones and networks in the first half of 2019.
Researchers from Oxford, Münster and Exeter Universities have created photonic computer chips – that use light rather than electricity – to imitate the way a brain's synapses operate.
A photonic synapse in a neuron network. Credit: Harish Bhaskaran
Scientists have made a crucial step towards unlocking the “holy grail” of computing – photonic microchips that mimic the way the human brain works to store and process information. The work, by researchers from Oxford, Münster and Exeter Universities, combined phase-change materials – found in common household items such as re-writable optical discs – with specially designed circuits, to deliver a biological-like synaptic response.
Crucially, their photonic synapses can operate at speeds 1,000 times faster than those of the human brain. The team believe that the research could pave the way for a new age of computing, where machines work and think in a similar way to the human brain, while at the same time exploiting the speed and power efficiency of photonic systems.
“The development of computers that work more like the human brain has been a holy grail of scientists for decades,” said Professor Harish Bhaskaran from Oxford University, who led the team. “Via a network of neurons and synapses, the brain can process and store vast amounts of information simultaneously, using only a few tens of Watts of power. Conventional computers can’t come close to this sort of performance.”
Schematic of a photonic synapse mimicking the biological synapse connecting neurons. Credit: Harish Bhaskaran
Professor C David Wright, co-author from the University of Exeter, also explained: “Electronic computers are relatively slow, and the faster we make them the more power they consume. Conventional computers are also pretty ‘dumb’, with none of the in-built learning and parallel processing capabilities of the brain. We tackle both of these issues here – not only by developing new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.”
Professor Wolfram Pernice, a co-author of the paper from the University of Münster added: “Since synapses outnumber neurons in the brain by around 10,000 to 1, any brain-like computer needs to be able to replicate some form of synaptic mimic. That is what we have done here.”
A paper – On-chip photonic synapse – was published yesterday in Science Advances.
SanDisk has just unveiled a 400 gigabyte (GB) microSD card, which it claims is the world's highest capacity.
SanDisk, a subsidiary of Western Digital Corporation, has announced the launch of its 400GB Ultra microSDXC UHS-I card, which it claims to be the world's highest-capacity microSD card. Just two years after introducing its record-breaking 200GB microSD card, SanDisk has doubled the capacity within the same tiny form factor.
"Mobile devices have become the epicentre of our lives, and consumers are now accustomed to using their smartphones for anything from entertainment to business. We are collecting and sharing massive amounts of data on smartphones, drones, tablets, PCs, laptops and more. We anticipate that storage needs will only continue to grow as people continue to expect more sophisticated features on their devices and desire higher quality content," said Jeff Janukowicz, vice president of research at IDC. "We estimate that mobile device users worldwide will install over 150 billion applications alone this year, which require a ton of memory on all of our favourite devices."
SanDisk has achieved this capacity breakthrough by leveraging its proprietary memory technology and design and production processes that allow for more bits per die.
"We continue to push technology boundaries and transform the way consumers use their mobile devices," said Sven Rathjen, vice president of product marketing at Western Digital. "By focusing on achieving new technology milestones, we enable consumers to keep up with their mobile-centric lifestyles with storage solutions they trust."
The new card holds up to 40 hours of Full HD video and offers transfer speeds of up to 100MB/s, meaning it can move up to 1,200 photos per minute. Additionally, it meets the A1 App Performance Class specification, which means that the card can load apps faster.
Storage capacities of SD and microSD cards are an excellent example of the exponential growth seen in many forms of information technology in recent years and decades. On current trends, it appears likely that the first 1TB microSD card will arrive by around 2020-21.
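That trend is easy to sanity-check. Below is a rough sketch assuming capacity keeps doubling every two years from the 200GB card of 2015 – an extrapolation of the trend described above, not an announced roadmap:

```python
# Back-of-envelope projection of microSD capacity, assuming a fixed
# two-year doubling period anchored at 200 GB in 2015 (so 400 GB in 2017).

def projected_capacity_gb(year, base_year=2015, base_gb=200, doubling_years=2):
    """Capacity projected from a fixed doubling period."""
    return base_gb * 2 ** ((year - base_year) / doubling_years)

for year in range(2015, 2022):
    print(year, round(projected_capacity_gb(year)), "GB")

# Time to fill the new 400 GB card at its quoted 100 MB/s peak:
fill_minutes = 400_000 / 100 / 60   # 400,000 MB at 100 MB/s
print(f"~{fill_minutes:.0f} minutes to fill")   # roughly 67 minutes
```

On that assumption the 1TB mark falls between 2019 and 2021, consistent with the estimate above. The fill-time figure also shows why bus speeds need to keep pace as capacities grow.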
Researchers at the University of Manchester have shown that magnetic hysteresis is possible in individual molecules at -213°C. This proves that storing data with single-molecule magnets is more feasible than previously thought, and could theoretically give 100 times higher density than current technologies.
From smartphones to supercomputers, the growing need for smaller and more energy efficient devices has made higher density data storage one of the most important technological quests. Now, scientists at the University of Manchester in England have proved that storing data with a class of molecules known as single-molecule magnets is more feasible than previously thought.
The research, led by Dr David Mills and Dr Nicholas Chilton, is published in Nature. It shows that magnetic hysteresis, a memory effect that is a prerequisite of any data storage, is possible in individual molecules at -213°C. This is tantalisingly close to the temperature of liquid nitrogen (-196°C).
The result means that data storage with single molecules could become a reality because the data servers could be cooled using relatively cheap liquid nitrogen at -196°C instead of the far more expensive liquid helium (-269°C). This research provides proof-of-concept that such technologies could be achievable in the near future.
The potential for molecular data storage is huge. To put this into a consumer context, molecular technologies could store 25,000 GB of information in a space the size of a 50p coin, compared to Apple's latest iPhone 7 with a maximum storage of 256 GB.
Single-molecule magnets display a magnetic memory effect that is a requirement of any data storage, and molecules containing lanthanide atoms have exhibited this phenomenon at the highest temperatures to date. Lanthanides are rare earth metals used in all forms of everyday electronic devices such as smartphones, tablets and laptops. The team achieved their results using the lanthanide element dysprosium.
"This is very exciting, as magnetic hysteresis in single molecules implies the ability for binary data storage. Using single molecules for data storage could theoretically give 100 times higher data density than current technologies," says Chilton. "Here we are approaching the temperature of liquid nitrogen, which would mean data storage in single molecules becomes much more viable from an economic point of view."
The practical applications of molecular-level data storage could lead to much smaller hard drives that require less energy, meaning data centres across the globe could become a lot more energy efficient.
For example, Google currently has 15 data centres around the world. They process an average of 40,000 searches per second, resulting in 3.5 billion searches per day and 1.2 trillion searches per year. To deal with all that data, in July last year, it was reported that Google had approximately 2.5 million servers across its data centres, a number that was likely to rise.
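The daily and yearly totals follow from simple arithmetic on the quoted 3.5 billion searches per day:

```python
# Cross-checking the search-volume figures quoted above.
searches_per_day = 3.5e9
seconds_per_day = 24 * 60 * 60          # 86,400

per_second = searches_per_day / seconds_per_day
per_year = searches_per_day * 365

print(f"{per_second:,.0f} searches per second")   # 40,509
print(f"{per_year:.2e} searches per year")        # 1.28e+12, i.e. ~1.28 trillion
```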
Some reports say the energy consumed at such centres could account for as much as 2% of the world's total greenhouse gas emissions. This means any improvement in data storage and energy efficiency could also have significant benefits for the environment as well as vastly increasing the amount of information that can be stored.
Dr Mills adds: "This advance eclipses the previous record, which stood at -259°C and took almost 20 years of research effort to reach. We are now focused on the preparation of new molecules inspired by the design in this paper. Our aim is to achieve even higher operating temperatures in the future, ideally functioning above liquid nitrogen temperatures."
Researchers at Brown University report the transmission of data through a terahertz multiplexer at 50 gigabits per second, which could lead to a new generation of ultra-fast Wi-Fi.
Credit: Mittleman lab / Brown University
Multiplexing, the ability to send multiple signals through a single channel, is a fundamental feature of any voice or data communication system. A team of researchers has now demonstrated a method for multiplexing data carried on terahertz waves – high-frequency radiation that could enable the next generation of ultra-high bandwidth wireless networks.
Writing in the journal Nature Communications, they describe the transmission of two real-time video signals through a terahertz multiplexer at an aggregate data rate of 50 gigabits per second, around 100 times faster than today's fastest cellular networks.
"We showed that we can transmit separate data streams on terahertz waves at very high speeds and with very low error rates," said Daniel Mittleman, professor in Brown University's School of Engineering and the paper's corresponding author. "This is the first time anybody has characterised a terahertz multiplexing system using actual data, and our results show that our approach could be viable in future terahertz wireless networks."
Current voice and data networks use microwaves to carry signals wirelessly. But like most forms of information technology, the demand for data transmission is growing exponentially, and quickly becoming more than microwave networks can handle. Terahertz waves have higher frequencies than microwaves and therefore a much larger capacity to carry data. However, scientists have only just begun experimenting with terahertz frequencies and many of the basic components needed for such communication don't exist yet.
A system for multiplexing and demultiplexing (also known as mux/demux) is one of those basic components. It's a technology that allows one cable to carry multiple TV channels, or hundreds of users to access a Wi-Fi network.
The mux/demux approach Mittleman and his colleagues developed uses two metal plates placed parallel to each other to form a waveguide, as shown in the illustration below. One plate has a slit cut into it. When a terahertz wave travels through the waveguide, some of the radiation leaks out of the slit. The angle at which radiation beams escape is dependent upon the frequency of the wave.
"We can put several waves at several different frequencies – each of them carrying a data stream – into the waveguide, and they won't interfere with each other because they're different frequencies; that's multiplexing," Mittleman said. "Each of those frequencies leaks out of the slit at a different angle, separating the data streams; that's demultiplexing."
Due to the nature of terahertz waves, signals in terahertz communications networks will propagate as directional beams, not omnidirectional broadcasts like in existing wireless systems. This relationship between propagation angle and frequency is key to enabling mux/demux in terahertz systems. A user at a particular location (and therefore at a particular angle from the multiplexing system) will communicate on a particular frequency.
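The angle-frequency mapping can be illustrated with a textbook leaky-wave model. For the TE1 mode of a parallel-plate waveguide with plate separation b, the cutoff frequency is fc = c/2b and radiation leaks from the slit at an angle theta from the waveguide axis given by cos(theta) = sqrt(1 - (fc/f)^2). The plate separation and frequencies below are illustrative values, not the Brown device's actual parameters:

```python
import math

C = 3.0e8  # speed of light, m/s

def leak_angle_deg(f_hz, plate_sep_m):
    """Leakage angle (degrees, from the waveguide axis) for the TE1 mode
    of a parallel-plate leaky waveguide.  Standard leaky-wave relation:
    cos(theta) = sqrt(1 - (fc/f)^2), with cutoff fc = c / (2b)."""
    fc = C / (2 * plate_sep_m)
    if f_hz <= fc:
        raise ValueError("frequency below cutoff - no propagation")
    return math.degrees(math.acos(math.sqrt(1 - (fc / f_hz) ** 2)))

# Illustrative 1 mm plate separation gives fc = 150 GHz; each frequency
# above cutoff emerges at its own distinct angle (e.g. 300 GHz -> 30.0 deg).
for f_ghz in (200, 250, 300, 400):
    print(f"{f_ghz} GHz -> {leak_angle_deg(f_ghz * 1e9, 1e-3):.1f} deg")
```

Because the mapping is one-to-one, separating data streams is just a matter of pointing receivers at different angles, which is the demultiplexing step Mittleman describes.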
Credit: Mittleman lab / Brown University
In 2015, Mittleman's team first published a paper describing their waveguide concept. For that initial work, they used a broadband terahertz light source to confirm that different frequencies did indeed emerge from the device at different angles. While that was an effective proof of concept, this latest work took the critical step of testing the device with real data.
The team encoded two high-definition television broadcasts and beamed them together through the multiplexer system. Their experiments showed that transmissions were error-free up to 10 gigabits per second, which is much faster than today's standard Wi-Fi speeds. Error rates increased somewhat when the speed was boosted to 50 gigabits per second (25 gigabits per channel), but were still well within the range that can be fixed using forward error correction, which is commonly used in today's communications networks.
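Forward error correction works by adding redundancy at the transmitter so the receiver can repair bit errors without retransmission. A toy sketch using a 3x repetition code – real links use far stronger codes such as Reed-Solomon or LDPC, but the principle is the same:

```python
import random

def encode(bits):
    """3x repetition code: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three received bits."""
    return [int(sum(received[i:i+3]) >= 2) for i in range(0, len(received), 3)]

random.seed(1)
message = [random.randint(0, 1) for _ in range(1000)]
channel = encode(message)

# Flip roughly 2% of transmitted bits to model a noisy link.
noisy = [b ^ (random.random() < 0.02) for b in channel]

decoded = decode(noisy)
errors = sum(m != d for m, d in zip(message, decoded))
print("residual bit errors:", errors)   # a handful at most, out of 1,000
```

A bit is only decoded wrongly when two of its three copies are corrupted, so a 2% raw error rate collapses to roughly 0.1% after correction, at the cost of tripling the transmitted data.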
The researchers plan to continue developing this and other terahertz components. Mittleman recently received a license from the FCC to perform outdoor tests at terahertz frequencies on the Brown University campus.
"We think that we have the highest-frequency license currently issued by the FCC, and we hope it's a sign that the agency is starting to think seriously about terahertz communication," he said. "Companies are going to be reluctant to develop terahertz technologies until there's a serious effort by regulators to allocate frequency bands for specific uses, so this is a step in the right direction."
New research builds on the pioneering use of machine learning algorithms with brain imaging technology to "mind read." For the first time, thoughts containing several concepts can be decoded.
Carnegie Mellon University scientists can now use brain activation patterns to identify complex thoughts, such as, "The witness shouted during the trial."
This latest research, led by CMU's Marcel Just, builds on the pioneering use of machine learning algorithms with brain imaging technology to "mind read." The findings indicate that the mind's building blocks for constructing complex thoughts are formed by the brain's various sub-systems and are not word-based. Published in Human Brain Mapping and funded by the Intelligence Advanced Research Projects Activity (IARPA), this study offers new evidence that the neural dimensions of concept representation are universal across people and languages.
"One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of 'bananas,' but 'I like to eat bananas in evening with my friends,'" said Just, a Professor of Psychology in the Dietrich College of Humanities and Social Sciences. "We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of."
Previous work by Just and his team showed that thoughts of familiar objects, like bananas or hammers, evoke activation patterns that involve the neural systems we use to deal with those objects. For example, how you interact with a banana involves how you hold it, how you bite it and what it looks like.
The new study demonstrates that the brain's coding of 240 complex events – sentences like the shouting-during-the-trial scenario – uses an alphabet of 42 meaning components, or neurally plausible semantic features, consisting of features like person, setting, size, social interaction and physical action. Each type of information is processed in a different brain system, which is also how the brain processes information for objects. By measuring the activation in each brain system, the program can tell what types of thoughts are being contemplated.
For seven adult participants, the researchers used a computational model to assess how the brain activation patterns for 239 sentences corresponded to the neurally plausible semantic features that characterised each sentence. The program was then able to decode the features of the 240th left-out sentence. They went through leaving out each of the 240 sentences in turn, in what is called cross-validation.
The model was able to predict the features of the left-out sentence with 87% accuracy, despite never having been exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.
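The leave-one-out procedure can be sketched in miniature with a linear model on synthetic data. Everything below (the synthetic voxel data, the ridge regression and the rank-1 identification scoring) is an illustrative stand-in, not the paper's actual pipeline or its 87% metric:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sentences, n_voxels, n_features = 240, 60, 42

# Synthetic stand-ins: each sentence has a semantic-feature vector, and its
# "activation pattern" is a noisy linear image of those features.
F = rng.normal(size=(n_sentences, n_features))                    # features
W_true = rng.normal(size=(n_features, n_voxels))
X = F @ W_true + 0.3 * rng.normal(size=(n_sentences, n_voxels))   # activations

def ridge_fit(A, B, lam=1.0):
    """Closed-form ridge regression mapping rows of A to rows of B."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

correct = 0
for i in range(n_sentences):
    train = np.delete(np.arange(n_sentences), i)     # leave sentence i out
    W = ridge_fit(X[train], F[train])                # activation -> features
    predicted = X[i] @ W
    # Score: is the left-out sentence's true feature vector the closest
    # match (by cosine similarity) among all 240 candidates?
    sims = F @ predicted / (np.linalg.norm(F, axis=1) * np.linalg.norm(predicted))
    correct += (np.argmax(sims) == i)

print(f"rank-1 identification accuracy: {correct / n_sentences:.0%}")
```

The key point the sketch shares with the study is the cross-validation discipline: the decoder never sees the held-out sentence's activation during training, so high accuracy reflects genuine generalisation rather than memorisation.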
"Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence," Just said. "This advance makes it possible for the first time to decode thoughts containing several concepts. That's what most human thoughts are composed of."
He added, "A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain." CMU's Jing Wang and Vladimir L. Cherkassky also participated in the study.
Discovering how the brain decodes complex thoughts is one of the many brain research breakthroughs to happen at Carnegie Mellon. CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. Building on its strengths in biology, computer science, psychology, statistics and engineering, CMU launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviours.
Chinese scientists report the transmission of entangled photons between the orbiting satellite Micius and ground stations on Earth. More satellites could follow in the near future, with plans for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
In a landmark study, Chinese scientists report the successful transmission of entangled photons between an orbiting satellite and Earth. Furthermore, whereas the previous record for entanglement distance was 100 km (62 miles), here, transmission over more than 1,200 km (746 miles) was achieved.
The distribution of quantum entanglement, especially across vast distances, holds major implications for quantum teleportation and encryption networks. Yet, efforts to entangle quantum particles – essentially "linking" them together over long distances – have been limited to 100 km or less, mainly because the entanglement is lost as they are transmitted along optical fibres, or through open space on land.
One way to overcome this issue is to break the line of transmission into smaller segments and repeatedly swap, purify and store quantum information along the optical fibre. Another approach to achieving global quantum networks is by making use of lasers and satellite technologies. Using a Chinese satellite called Micius, launched last year and equipped with specialised quantum tools, Juan Yin et al. demonstrated the latter feat. The Micius satellite was used to communicate with three ground stations across China, each up to 1,200 km apart.
The separation between the orbiting satellite and these ground stations varied from 500 to 2,000 km. A laser beam on the satellite was passed through a beam splitter, which gave the beam two distinct polarised states. One of the split beams was used for transmission of entangled photons, while the other was used for photon receipt. In this way, entangled photons were received at the separate ground stations.
"It's a huge, major achievement," Thomas Jennewein, physicist at the University of Waterloo in Canada, told Science. "They started with this bold idea and managed to do it."
"The Chinese experiment is quite a remarkable technological achievement," said Artur Ekert, a professor of quantum physics at the University of Oxford, in an interview with Live Science. "When I proposed the entangled-based quantum key distribution back in 1991 when I was a student in Oxford, I did not expect it to be elevated to such heights."
One of the many challenges faced by the team was keeping the beams of photons focused precisely on the ground stations as the satellite hurtled through space at nearly 8 kilometres per second.
Quantum encryption, if successfully developed, could revolutionise communications. Information sent via this method would, in theory, be absolutely secure and practically impossible for hackers to intercept. If two people shared an encrypted quantum message, a third person would be unable to access it without changing the information in an unpredictable way. Further satellite tests are planned by China in the near future, with potential for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
Traditional silicon-based transistors revolutionised electronics with their ability to switch current on and off. By controlling the flow of current, the creation of smaller computers and other devices was possible. Over the decades, rapid gains in miniaturisation led to computers shrinking from room-sized monoliths, to wardrobe-sized machines, to desktops, laptops and eventually handheld smartphones – progress driven by Moore's Law, the doubling of transistor density roughly every two years. In recent years, however, concerns have arisen that this rate of progress may have slowed, or could even be approaching a fundamental limit.
A solution may be on the horizon. This month, researchers have theorised a next-generation transistor based not on silicon but on a ribbon of graphene, a two-dimensional carbon material with the thickness of a single atom. Their findings – reported in Nature Communications – could have big implications for electronics, computing speeds and big data in the future. Graphene-based transistors may someday lead to computers that are 1,000 times faster and use a hundredth of today's power.
"If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street. To get there, we can't rely on silicon transistors anymore," said Ryan M. Gelfand, director of the NanoBioPhotonics Laboratory at UCF.
University of Central Florida Assistant Professor Ryan M. Gelfand
His team found that by applying a magnetic field to a graphene ribbon, they could change the resistance of current flowing through it. For this device, the magnetic field was controlled by increasing or decreasing the current through adjacent carbon nanotubes. The strength of the magnetic field matched the flow of current through this new kind of transistor, much like a valve controlling the flow of water through a pipe.
Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But clock speeds that rely on silicon transistors have been relatively stagnant for over a decade now, and are mostly still stuck in the 3 to 4 gigahertz range.
A cascading series of graphene transistor-based logic circuits could produce a massive jump, explains Gelfand, with clock speeds approaching the terahertz range – 1,000 times faster – because communication between each of the graphene nanoribbons would occur via electromagnetic waves, instead of the physical movement of electrons. They would also be smaller and far more efficient, allowing device-makers to shrink technology and squeeze in more functionality.
"The concept brings together an assortment of existing nanoscale technologies and combines them in a new way," said Dr. Joseph Friedman, assistant professor of electrical and computer engineering at UT Dallas, who collaborated with Gelfand and his team. While the concept is still in the early stages, Friedman said work towards a prototype all-carbon, cascaded spintronic computing system will continue in the NanoSpinCompute research laboratory.
Varjo, a tech startup based in Helsinki, Finland, has today unveiled a new VR/AR technology it has been developing in secret. This features nearly 70 times the pixel count of current generation headsets and is sufficient to match human eye resolution.
Varjo ("Shadow" in Finnish) Technologies today announced it has emerged from stealth and is now demonstrating the world's first human-eye-resolution head-mounted display for upcoming Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) products. Designed for professional users, with graphics an order of magnitude beyond any currently shipping or announced head-mounted display, this major advancement will enable unprecedented levels of immersion and realism.
This breakthrough is accomplished by Varjo's patented technology, which replicates how the human eye naturally works by creating a super-high-resolution image wherever the user's gaze is directed. This is further combined with video-see-through (VST) technology for unparalleled AR/MR capabilities.
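"Human eye resolution" can be put in rough numbers: the fovea resolves about 60 pixels per degree, but only over a few degrees of the visual field. A uniform display matching that density across a typical VR field of view would need tens of megapixels per eye, which is why rendering full resolution only at the gaze point is attractive. The figures below are generic approximations, not Varjo's actual specifications:

```python
# Rough pixel-budget arithmetic for an "eye-resolution" headset.
PIXELS_PER_DEGREE = 60          # approximate foveal acuity
FOV_DEG = 100                   # typical VR headset field of view

# Uniform eye-resolution density across the whole field of view:
uniform = (PIXELS_PER_DEGREE * FOV_DEG) ** 2
print(f"uniform display: {uniform / 1e6:.0f} MP per eye")        # 36 MP

# Gaze-directed alternative: full density over a ~20 degree inset,
# low density (~15 pixels/degree) across the periphery.
inset = (PIXELS_PER_DEGREE * 20) ** 2
periphery = (15 * FOV_DEG) ** 2
print(f"foveated budget: {(inset + periphery) / 1e6:.1f} MP per eye")  # 3.7 MP
```

The order-of-magnitude gap between the two budgets is what makes a gaze-tracked, high-resolution inset a practical route to apparent eye resolution with today's panels.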
Codenamed "20|20" after perfect vision, Varjo's prototype is based on unique technology created by a team of optical scientists, creatives and developers who formerly occupied top positions at Microsoft, Nokia, Intel, Nvidia and Rovio. It will be shipping in Varjo-branded products specifically for professional users and applications starting in late Q4, 2017.
"Varjo's patented display innovation pushes VR technology 10 years ahead of the current state-of-the-art, where people can experience unprecedented resolution of VR and AR content limited only by the perception of the human eye itself," said Urho Konttori, CEO and founder. "This technology – along with Varjo VST – jump-starts the immersive computing age overnight: VR is no longer a curiosity, but now can be a professional tool for all industries."
Chipmaker Intel has announced a new generation of processors, including the Core i9 series, its first teraflop desktop CPUs.
Intel has this week introduced a new family of microprocessors – the Core X-series – which the company describes as the most scalable, accessible and powerful desktop platform ever developed. This includes a new Core i9 brand and Core i9 Extreme Edition, the first consumer desktop CPU with 18 cores and 36 threads. The company is also launching the Intel X299 chipset, which adds even more I/O and overclocking capabilities.
Given their extreme power and speed, this family of processors is being pitched at gamers, content creators, and overclocking enthusiasts. Intel expects to increase its presence in high-end desktop markets and believes that customers will pay premiums in exchange for higher performance. Prices for the i9 line-up will range from $999 to $1999.
Prior to this announcement, Intel's high-end desktop processors (known as Broadwell-E) came with six, eight or 10 core options. The Core X-series will include five Core i9 chips, with a minimum of 10 cores and the top-end i9-7980XE featuring a massive 18 cores. A major update has also been announced for Intel's Turbo Boost Max Technology 3.0, which will identify the two best-performing cores and direct critical workloads to them, for a significant boost in single- and multithreaded performance.
The Core i9-7980XE will be the first Intel consumer processor to exceed a teraflop of computing power, meaning it can perform a trillion computational operations every second. To put this in perspective, that is roughly equal to ASCI Red, which reigned as the world's most powerful supercomputer from 1997 until 2000. All Core i9 chips will have 3.3GHz base clock speeds, with up to 4.5GHz using Turbo Boost 3.0, and up to 44 PCIe lanes.
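The teraflop figure can be sanity-checked with simple arithmetic: theoretical peak throughput is cores × clock × FLOPs per cycle. The per-cycle figure used below (32 single-precision FLOPs via AVX-512) is an assumption about the microarchitecture, not an Intel-published number.

```python
# Rough sanity check of the teraflop claim: peak = cores x clock x
# FLOPs/cycle. Assumption: AVX-512 provides 32 single-precision FLOPs
# per core per cycle (2 FMA units x 16 lanes); base clock 3.3 GHz.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

tflops = peak_gflops(18, 3.3, 32) / 1000
print(round(tflops, 2))  # ~1.9 TFLOPS, comfortably past the teraflop mark
```

Real workloads rarely sustain the theoretical peak, but even half of this figure would clear the one-teraflop bar.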
"The possibilities with this type of performance are endless," says Gregory Bryant, a senior vice president, in a blog post. "Content creators can have fast image rendering, video encoding, audio production and real-time preview – all running in parallel seamlessly so they spend less time waiting and more time creating. Gamers can play their favourite game, while they also stream, record and encode their gameplay, and share on social media – all while surrounded by multiple screens for a 12K experience with up to four discrete graphics cards."
In addition to Core i9, there are also three new i7 chips and an i5, including the quad-core i5-7640X and i7 models in 4, 6 and 8-core variants. Prices will range from $242 for the i5, to $599 for the i7-7820X.
Hewlett Packard Enterprise (HPE) has revealed "The Machine" – a new computing architecture with 160 terabytes of memory.
Hewlett Packard Enterprise (HPE) has introduced the world's largest single-memory computer. Known simply as "The Machine", it is the largest R&D program in the history of the company, and is aimed at delivering a new paradigm called Memory-Driven Computing – an architecture custom-built for the big data era.
"The secrets to the next great scientific breakthrough, industry-changing innovation or life-altering technology hide in plain sight behind the mountains of data we create every day," explained Meg Whitman, CEO of HPE. "To realise this promise, we can't rely on the technologies of the past. We need a computer built for the big data era."
The prototype unveiled this week features a staggering 160 terabytes (TB) of memory, enough to simultaneously work with the data held in every book in the Library of Congress five times over – or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size within a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.
Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory – 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.
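These scale claims are easy to check with decimal SI prefixes. The ~16 zettabyte estimate for today's digital universe used below is an assumption based on contemporary industry figures, not a number from HPE's announcement.

```python
# Checking the scale of the claims, using decimal SI prefixes.
TB = 10**12   # terabyte
ZB = 10**21   # zettabyte
YB = 10**24   # yottabyte

prototype = 160 * TB
target = 4096 * YB
digital_universe = 16 * ZB   # assumed ~2017 industry estimate

print(target // digital_universe)  # 256000 -> the ~250,000x in the text
print(target // prototype)         # how far the prototype must scale
```

The ratio of 4,096 yottabytes to a 16-zettabyte digital universe is 256,000, consistent with the "250,000 times" figure quoted above.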
With such a vast amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google's autonomous vehicles and every data set from space exploration, all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.
"We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society," said Mark Potter, CTO at HPE and director, Hewlett Packard Labs. "The architecture we have unveiled can be applied to every computing category from intelligent edge devices to supercomputers."
Memory-Driven Computing puts memory, not the processor, at the centre of the computing architecture. By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, Memory-Driven Computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds, to deliver real-time intelligence.
The current prototype Machine has its memory spread across 40 physical nodes, each interconnected using a high-performance fabric protocol, with an optimised Linux-based operating system (OS) running on Cavium's ThunderX2 processors. Photonics/optical communication links, including the new X1 photonics module, are online and operational. Software programming tools are designed to take full advantage of the abundant persistent memory.
"We think that this is a game-changer," said Kirk Bresniker, Chief Architect at HPE. "This will be the overarching arc for the next 10, 20, 30 years."
New human rights laws are needed to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk, according to a paper from the Institute for Biomedical Ethics in Switzerland.
New human rights laws to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk have been proposed in the open access journal Life Sciences, Society and Policy. The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are:
1. The right to cognitive liberty
2. The right to mental privacy
3. The right to mental integrity, and
4. The right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: "The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology."
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: "Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court; for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for 'neuromarketing', to understand consumer behaviour and elicit desired responses from customers. There are also tools such as 'brain decoders' which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom, which we sought to address with the development of four new human rights laws."
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third party to 'eavesdrop' on someone's mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualisation of human rights laws and even the creation of new ones.
Marcello Ienca added: "Science fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom."
For the first time, researchers have fabricated printed transistors consisting entirely of two-dimensional nanomaterials.
Credit: AMBER, Trinity College Dublin
Scientists from Advanced Materials and BioEngineering Research (AMBER) at Trinity College Dublin have fabricated printed transistors consisting entirely of 2-D nanomaterials for the first time. These materials combine new electronic properties with the potential for low-cost production.
This breakthrough could enable a range of new, futuristic applications – such as food packaging that displays a digital countdown to warn of spoiling, labels that alert you when your wine is at its optimum temperature, or even a window pane that shows the day's forecast. The AMBER team's findings were published yesterday in the leading journal Science.
This discovery opens the path for industry, such as ICT and pharmaceutical firms, to cheaply print a host of electronic devices, from solar cells to LEDs, with applications from interactive smart food and drug labels, to next-generation banknote security and e-passports.
Prof. Jonathan Coleman, an investigator in AMBER and Trinity's School of Physics, commented: "In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging."
A scene from Steven Spielberg's 2002 sci-fi thriller, Minority Report.
"Printed electronic circuitry (made from the devices we have created) will allow consumer products to gather, process, display and transmit information – for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date," he continued. "We believe that 2-D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2-D nanomaterials have the capability to yield more cost-effective and higher-performance printed devices.
"However, while the last decade has underlined the potential of 2-D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important, because it shows that conducting, semiconducting and insulating 2-D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors, as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2-D nanosheets."
Led by Prof. Coleman, in collaboration with the groups of Prof. Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor), to form an all-printed, all-nanosheet, working transistor.
Credit: AMBER, Trinity College Dublin
Printable electronics have developed over the last 30 years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2-D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.
The ability to print 2-D nanomaterials is based on Prof. Coleman's scalable method of producing 2-D nanomaterials – including graphene, boron nitride and tungsten diselenide nanosheets – in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials can be conducting, insulating or semiconducting, and so include all the building blocks of electronics. Liquid processing is especially advantageous because it yields large quantities of high-quality 2-D materials in a form that is easy to process into inks. The method offers the potential to print circuitry at extremely low cost, facilitating a wide range of applications from animated posters to smart labels.
Prof. Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation over the next 10 years.
IBM has announced "IBM Q", an initiative to build commercially available universal quantum computing systems.
Credit: IBM Research
IBM has announced an industry-first initiative to build commercially available universal quantum computing systems. “IBM Q” systems and services will be delivered via the IBM Cloud platform. Current technologies that run on classical computers, such as Watson, can help to identify patterns and insights buried in vast amounts of existing data. By contrast, quantum computers will deliver solutions to important problems where patterns cannot be seen because the data doesn’t exist and the calculations needed to answer questions are too enormous to ever be processed by classical computers.
IBM is also launching a new Application Program Interface (API) for the “IBM Quantum Experience” enabling anyone with an Internet connection to use the quantum processor (via the Cloud) for running algorithms and experiments, working with individual quantum bits, and exploring tutorials and simulations of what might be possible with quantum computing. In the first half of 2017, IBM plans to release a full Software Development Kit (SDK) for users to build simple quantum applications and software programs.
“IBM has invested over decades in growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”
Credit: IBM Research
IBM intends to build IBM Q systems to expand the application domain of quantum computing. A key metric will be the power of a quantum computer expressed by the “Quantum Volume” – which includes the number of qubits, quality of operations, connectivity and parallelism. As a first step to increase Quantum Volume, IBM aims to build commercial IBM Q systems with around 50 qubits in the next few years to demonstrate capabilities beyond today’s classical systems, and plans to collaborate with key industry partners to develop applications that exploit the quantum speedup of the systems.
IBM Q systems will be designed to tackle problems that are currently too complex and exponential in nature for classical computing systems to handle. One of the first and most promising applications will be in the area of chemistry. Even for simple molecules like caffeine, the number of quantum states in the molecule can be astoundingly large; so complex that all the conventional computing memory and processing power scientists could ever build could not handle the problem.
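The exponential blow-up described above can be made concrete: simulating n qubits classically requires storing 2^n complex amplitudes. A minimal sketch, assuming 16 bytes per amplitude (a complex double):

```python
# Why classical simulation of quantum states blows up: an n-qubit
# state vector holds 2**n complex amplitudes. Assumption: 16 bytes
# per amplitude (a double-precision complex number).

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 100):
    print(n, state_vector_bytes(n))
# 30 qubits already need ~17 GB, 50 qubits ~18 petabytes, and 100
# qubits would exceed any memory that could ever be built.
```

Doubling the qubit count squares the storage requirement, which is why even IBM's planned ~50-qubit systems sit near the edge of what classical machines can simulate.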
IBM’s scientists have recently developed new techniques to efficiently explore the simulation of chemistry problems on quantum processors, and experimental demonstrations of various molecules are in progress. In the future, the goal will be to scale to even more complex molecules and try to predict chemical properties with higher precision than is possible with classical computers.
Future applications of quantum computing may include:
• Artificial Intelligence: Making facets of artificial intelligence, such as machine learning, much more powerful when data sets are too large for classical systems, as in image or video search
• Cloud Security: Making cloud computing more secure by using the laws of quantum physics to enhance private data safety
• Drug & Materials Discovery: Untangling the complexity of molecular and chemical interactions leading to the discovery of new medicines and materials
• Financial Services: Finding new ways to model financial data and isolating key global risk factors to make better investments
• Supply Chain & Logistics: Finding the optimal path across global systems of systems for ultra-efficient logistics and supply chains, such as optimising fleet operations for deliveries during the holiday season
“Classical computers are extraordinarily powerful and will continue to advance and underpin everything we do in business and society,” said Tom Rosamilia, senior vice president of IBM Systems. “But there are many problems that will never be penetrated by a classical computer. To create knowledge from much greater depths of complexity, we need a quantum computer. We envision IBM Q systems working in concert with our portfolio of classical high-performance systems to address problems that are currently unsolvable, but hold tremendous untapped value.”
IBM’s roadmap for scaling to practical quantum computers is based on a holistic approach to advancing all parts of the system. The company will leverage its deep expertise in superconducting qubits, complex high performance system integration, and scalable nanofabrication processes from the semiconductor industry to help advance the quantum mechanical capabilities. The developed software tools and environment will also leverage IBM’s world-class mathematicians, computer scientists, and software and system engineers.
"As Richard Feynman said in 1981, ‘…if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.’ This breakthrough technology has the potential to achieve transformational advancements in basic science, materials development, environmental and energy research, which are central to the missions of the Department of Energy (DOE),” said Steve Binkley, deputy director of science, US Department of Energy. “The DOE National Labs have always been at the forefront of new innovation, and we look forward to working with IBM to explore applications of their new quantum systems."
Researchers have developed a new blue-phase liquid crystal that could triple the sharpness of TVs, computer screens, and other displays while also reducing the power needed to run the device.
An international team of researchers has developed a new blue-phase liquid crystal that could enable televisions, computer screens and other displays that pack more pixels into the same space while also reducing the power needed to run the device. The new liquid crystal is optimised for field-sequential colour liquid crystal displays (LCDs), a promising technology for next-generation displays.
"Today's Apple Retina displays have a resolution density of about 500 pixels per inch," said Shin-Tson Wu, who led the research team at the University of Central Florida's College of Optics and Photonics (CREOL). "With our new technology, a resolution density of 1500 pixels per inch could be achieved on the same sized screen. This is especially attractive for virtual reality headsets or augmented reality technology, which must achieve high resolution in a small screen to look sharp when placed close to our eyes."
Although the first blue-phase LCD prototype was demonstrated by Samsung in 2008, the technology still hasn't moved into production, because of problems with high operation voltage and slow capacitor charging time. To tackle these problems, Wu's research team worked with collaborators from liquid crystal manufacturer JNC Petrochemical Corporation in Japan and display manufacturer AU Optronics Corporation in Taiwan.
In the journal Optical Materials Express, the team explains how combining the new liquid crystal with a special performance-enhancing electrode structure can achieve light transmittance of 74 percent, with 15 volts per pixel – operational levels that could finally be practical for commercial applications.
"Field-sequential colour displays can be used to achieve the smaller pixels needed to increase resolution density," explains Yuge Huang, first author of the paper. "This is important, because the resolution density of today's technology is almost at its limit."
Today's LCD screens contain a thin layer of nematic liquid crystal through which the incoming white LED backlight is modulated. Thin-film transistors deliver the required voltage that controls light transmission in each pixel. The LCD subpixels contain red, green and blue filters that are used in combination to produce different colours to the human eye. The colour white is created by combining all three colours.
Blue-phase liquid crystal can be switched, or controlled, about 10 times faster than the nematic type. This sub-millisecond response time allows each LED colour (red, green and blue) to be sent through the liquid crystal at different times and eliminates the need for colour filters. The LED colours are switched so quickly that our eyes can integrate red, green and blue to form white.
"With colour filters, the red, green and blue light are all generated at the same time," said Wu. "However, with blue-phase liquid crystal, we can use one subpixel to make all three colours – but at different times. This converts space into time, saving two-thirds of the area, which triples the resolution density."
The blue-phase liquid crystal also triples the optical efficiency because the light doesn't have to pass through colour filters, which limit transmittance to about 30 percent. Another big advantage is that the displayed colour is more vivid because it comes directly from red, green and blue LEDs, which eliminates the colour crosstalk that occurs with conventional filters.
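Both "triple" claims follow from simple arithmetic. The 30 percent filter transmittance comes from the article; the assumption below is that removing the filter leaves the rest of the optical stack unchanged.

```python
# A filtered pixel uses three subpixels (R, G, B); field-sequential
# colour uses one subpixel that shows R, G and B at different times.
SUBPIXELS_FILTERED = 3
SUBPIXELS_FSC = 1

density_gain = SUBPIXELS_FILTERED // SUBPIXELS_FSC
print(500 * density_gain)   # 1500 -> the quoted jump from 500 to 1500 ppi

# Colour filters pass ~30% of the light; removing them (assuming the
# rest of the stack is unchanged) yields roughly a threefold gain:
print(round(1 / 0.30, 1))   # 3.3
```

The same factor of three thus accounts for both the resolution-density and the optical-efficiency improvements quoted above.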
Wu's team worked with JNC to reduce the blue-phase liquid crystal's dielectric constant to a minimally acceptable range, to reduce the transistor charging time and get submillisecond optical response time. However, each pixel still needed slightly higher voltage than a single transistor could provide. To overcome this problem, the researchers implemented a protruded electrode structure that lets the electric field penetrate the liquid crystal more deeply. This lowered the voltage needed to drive each pixel while maintaining a high light transmittance.
"We achieved an operational voltage low enough to allow each pixel to be driven by a single transistor while also achieving a response time of less than a millisecond," said Haiwei Chen, a doctoral student in Wu's lab. "This delicate balance between operational voltage and response time is key for enabling field sequential colour displays."
Wu predicts that a working prototype could be available in the next year.