Qualcomm has announced the first 5G mobile connection, with a data transfer speed of more than 1 Gbps.
Qualcomm Technologies Inc. has successfully achieved a 5G data connection on a 5G modem chipset for mobile devices. The Qualcomm Snapdragon X50 5G modem chipset delivered gigabit speeds and a data connection in the 28GHz mmWave radio frequency band, demonstrating the next generation of cellular technology for business and consumers. Additionally, Qualcomm previewed its first 5G smartphone reference design for the testing and optimisation of 5G within the power and form-factor constraints of a handheld phone.
"Achieving the world's first announced 5G data connection is truly a testament to Qualcomm's leadership and extensive expertise in mobile connectivity," said Cristiano Amon, executive vice president of Qualcomm Technologies, Inc. "This major milestone and our 5G smartphone reference design showcase how Qualcomm Technologies is driving 5G NR in mobile devices to enhance mobile broadband experiences for consumers around the world."
The 5G demonstration took place in Qualcomm Technologies' laboratories in San Diego. 5G NR mmWave is a new frontier for mobile, now made possible through the 5G NR standard, and is expected to usher in the next generation of user experiences and to significantly increase network capacity. It will support the emerging "Internet of Things" (IoT), providing widespread automation and connectivity of devices, systems and services.
Qualcomm has been instrumental in accelerating the commercialisation of 5G NR, through many key contributions – including foundational research and inventions, standards-setting in 3GPP, designing sub-6 GHz and mmWave 5G NR prototype systems, interoperability and over-the-air trials with major global operators and infrastructure vendors, and developing integrated circuit products for mobile devices. The Snapdragon X50 5G NR modem family is expected to support commercial launches of 5G smartphones and networks in the first half of 2019.
SanDisk has just unveiled a 400 gigabyte (GB) microSD card, which it claims is the world's highest capacity.
SanDisk, a subsidiary of Western Digital Corporation, has announced the launch of its 400GB Ultra microSDXC UHS-I card, which it claims to be the world's highest-capacity microSD card. Just two years after introducing its record-breaking 200GB microSD card, SanDisk has doubled the capacity within the same tiny form factor.
"Mobile devices have become the epicentre of our lives, and consumers are now accustomed to using their smartphones for anything from entertainment to business. We are collecting and sharing massive amounts of data on smartphones, drones, tablets, PCs, laptops and more. We anticipate that storage needs will only continue to grow as people continue to expect more sophisticated features on their devices and desire higher quality content," said Jeff Janukowicz, vice president of research. "We estimate that mobile device users worldwide will install over 150 billion applications alone this year, which require a ton of memory on all of our favourite devices."
SanDisk has achieved this capacity breakthrough by leveraging its proprietary memory technology and design and production processes that allow for more bits per die.
"We continue to push technology boundaries and transform the way consumers use their mobile devices," said Sven Rathjen, vice president of product marketing at Western Digital. "By focusing on achieving new technology milestones, we enable consumers to keep up with their mobile-centric lifestyles with storage solutions they trust."
The new card holds up to 40 hours of Full HD video and offers transfer speeds of up to 100MB/s, meaning it can move up to 1,200 photos per minute. Additionally, it meets the A1 App Performance Class specification, which means that the card can load apps faster.
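The quoted figures are easy to sanity-check. A quick back-of-envelope calculation, assuming an average photo size of about 5 MB (an assumption for illustration, not a SanDisk specification):

```python
# Back-of-envelope check of the quoted figures. PHOTO_MB is an assumed
# average photo size, not a figure from SanDisk.
CARD_GB = 400
SPEED_MB_S = 100          # quoted sequential transfer speed
PHOTO_MB = 5              # assumed average photo size

photos_per_minute = SPEED_MB_S * 60 / PHOTO_MB
print(photos_per_minute)  # 1200.0 -- matches the quoted figure

# Time to fill the entire card at the quoted speed:
fill_minutes = CARD_GB * 1000 / SPEED_MB_S / 60
print(round(fill_minutes))  # ~67 minutes
```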
Storage capacities of SD and microSD cards are an excellent example of the exponential growth seen in many forms of information technology in recent years and decades. On current trends, it appears likely that the first 1TB microSD card will arrive by around 2020-21.
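The 2020-21 estimate follows from a simple extrapolation: 200GB in 2015 and 400GB in 2017 suggest a doubling time of roughly two years (an assumed trend, not a guarantee). A minimal sketch of that extrapolation:

```python
# Toy extrapolation assuming capacity doubles every two years,
# based on the 200 GB (2015) -> 400 GB (2017) progression above.
capacity_gb, year = 400, 2017
while capacity_gb < 1000:      # 1 TB = 1000 GB (decimal, as marketed)
    capacity_gb *= 2
    year += 2
print(year, capacity_gb)       # 2021 1600 -- the 1 TB mark is crossed by ~2021
```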
Researchers at Brown University report the transmission of data through a terahertz multiplexer at 50 gigabits per second, which could lead to a new generation of ultra-fast Wi-Fi.
Credit: Mittleman lab / Brown University
Multiplexing, the ability to send multiple signals through a single channel, is a fundamental feature of any voice or data communication system. A team of researchers has now demonstrated a method for multiplexing data carried on terahertz waves – high-frequency radiation that could enable the next generation of ultra-high bandwidth wireless networks.
Writing in the journal Nature Communications, they describe the transmission of two real-time video signals through a terahertz multiplexer at an aggregate data rate of 50 gigabits per second, around 100 times faster than today's fastest cellular networks.
"We showed that we can transmit separate data streams on terahertz waves at very high speeds and with very low error rates," said Daniel Mittleman, professor in Brown's University's School of Engineering and the paper's corresponding author. "This is the first time anybody has characterised a terahertz multiplexing system using actual data, and our results show that our approach could be viable in future terahertz wireless networks."
Current voice and data networks use microwaves to carry signals wirelessly. But like most forms of information technology, the demand for data transmission is growing exponentially, and quickly becoming more than microwave networks can handle. Terahertz waves have higher frequencies than microwaves and therefore a much larger capacity to carry data. However, scientists have only just begun experimenting with terahertz frequencies and many of the basic components needed for such communication don't exist yet.
A system for multiplexing and demultiplexing (also known as mux/demux) is one of those basic components. It's a technology that allows one cable to carry multiple TV channels, or hundreds of users to access a Wi-Fi network.
The mux/demux approach Mittleman and his colleagues developed uses two metal plates placed parallel to each other to form a waveguide, as shown in the illustration below. One plate has a slit cut into it. When a terahertz wave travels through the waveguide, some of the radiation leaks out of the slit. The angle at which radiation beams escape is dependent upon the frequency of the wave.
"We can put several waves at several different frequencies – each of them carrying a data stream – into the waveguide, and they won't interfere with each other because they're different frequencies; that's multiplexing," Mittleman said. "Each of those frequencies leaks out of the slit at a different angle, separating the data streams; that's demultiplexing."
Due to the nature of terahertz waves, signals in terahertz communications networks will propagate as directional beams, not omnidirectional broadcasts like in existing wireless systems. This relationship between propagation angle and frequency is key to enabling mux/demux in terahertz systems. A user at a particular location (and therefore at a particular angle from the multiplexing system) will communicate on a particular frequency.
Credit: Mittleman lab / Brown University
In 2015, Mittleman's team first published a paper describing their waveguide concept. For that initial work, they used a broadband terahertz light source to confirm that different frequencies did indeed emerge from the device at different angles. While that was an effective proof of concept, this latest work took the critical step of testing the device with real data.
For the test, the team encoded two high-definition television broadcasts onto separate terahertz frequencies and beamed them together through the multiplexer system. Their experiments showed that transmissions were error-free up to 10 gigabits per second, much faster than today's standard Wi-Fi speeds. Error rates increased somewhat when the speed was boosted to 50 gigabits per second (25 gigabits per channel), but were still well within the range that can be corrected using forward error correction, which is commonly used in today's communications networks.
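Forward error correction works by adding redundancy to the transmitted bits so the receiver can repair a limited number of errors without retransmission. A minimal sketch using a 3× repetition code with majority voting (real networks use far more efficient codes such as LDPC or Reed-Solomon; this is purely illustrative):

```python
def fec_encode(bits):
    """3x repetition code: transmit every data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each group of three received bits."""
    return [int(sum(coded[i:i+3]) >= 2) for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1, 0]
sent = fec_encode(data)
sent[4] ^= 1                      # flip one bit in transit (channel error)
assert fec_decode(sent) == data   # the single error is corrected
```

The cost is bandwidth: here two-thirds of the channel carries redundancy, which is why practical systems use codes with much lower overhead.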
The researchers plan to continue developing this and other terahertz components. Mittleman recently received a license from the FCC to perform outdoor tests at terahertz frequencies on the Brown University campus.
"We think that we have the highest-frequency license currently issued by the FCC, and we hope it's a sign that the agency is starting to think seriously about terahertz communication," he said. "Companies are going to be reluctant to develop terahertz technologies until there's a serious effort by regulators to allocate frequency bands for specific uses, so this is a step in the right direction."
A major breakthrough in artificial intelligence has been announced, with a computer beating the world's best players at competitive eSports for the first time.
OpenAI is a non-profit research company established in 2015 whose founders include Elon Musk. They report that, for the first time, a computer program has beaten the world's best human players at competitive eSports – in this case, the game Defence of the Ancients 2 (aka Dota 2).
Earlier this year, Google's AlphaGo defeated the world's number one player at Go, a board game similar to chess but with far more complexity. Dota 2, however, is orders of magnitude more complex than even Go. This multiplayer battle arena pits two teams of five against each other, with each team occupying and defending its own separate base on a map. Each of the ten players independently controls a powerful character, known as a "hero", each with unique abilities and styles of play; there are 113 to choose from. During a match, players and their teams collect experience points and items for their heroes in order to fight the opposing team's heroes and other defences. A team wins by being the first to destroy a large structure in the enemy base called the "Ancient".
This week, at The International, the annual tournament hosted in Seattle, OpenAI has been demonstrating its bot, which mastered the game from scratch through self-play, without using imitation learning or tree search. According to Greg Brockman, the company's Chief Technology Officer, self-play is a more effective way for AI to learn complex tasks than fighting much weaker enemies, or overwhelmingly strong ones. By playing against a copy of itself, the bot always has a worthy opponent and therefore extracts more useful information from each match. After many thousands of practice runs, it gradually learned which moves worked best and which to avoid. It developed a number of behaviours that are demonstrated in the video below.
Whereas chess uses an 8 x 8 board, and the game Go has a 19 x 19 board (with each cell being either blank or occupied), Dota 2 is vastly more complex with a gaming "board" of 15,000 x 15,000 units, a lot of hidden information, and many more variables with every action. The OpenAI bot was able to handle this level of detail, however.
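The raw scale difference is easy to quantify, with the caveat that cell counts alone understate the gap (hidden information and the continuous action space matter at least as much):

```python
# Rough comparison of the "board" sizes mentioned above. Cell counts only --
# actual game complexity also depends on hidden information and action space.
chess = 8 * 8
go = 19 * 19
dota = 15_000 * 15_000

print(chess, go, dota)    # 64 361 225000000
print(dota // go)         # each Go cell corresponds to ~623,000 Dota map units
```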
In a series of 1v1 matches, it went up against the top human champions in the world, beating them all. This included SumaiL (world's best 1v1 player) and Arteezy (top overall player in the world), as well as 27-year-old Danil "Dendi" Ishutin. At times, the movements of the bot were eerily human-like, with Dendi saying it "feels a little like [a] human, but a little like something else."
Elon Musk today praised OpenAI on Twitter. He has previously warned that AI represents a "fundamental risk to the existence of civilisation" and hopes the organisation will foster the creation of safer, more benevolent forms of machine intelligence.
OpenAI is now developing its bot further, and hopes it can take part in a proper five-on-five match next year, as opposed to just 1v1. The company will also explore using its neural network for other games.
Researchers in the U.S. have created "smart windows" that rapidly change opacity, depending on how sunny it is. This new technology could cut utility costs.
Engineers at Stanford University have created dynamically changing windows that can switch from transparent to opaque in only a minute – and back again in just 20 seconds – a major improvement over dimming windows currently being installed to reduce cooling costs in some buildings.
The newly designed "smart" windows consist of conductive glass plates outlined with metal ions that spread out over the surface, blocking light in response to an electrical current. The results are described in the 9th August edition of the journal Joule.
"We're excited because dynamic window technology has the potential to optimise the lighting in rooms or vehicles, and save about 20 percent in heating and cooling costs," said Michael McGehee, a professor of materials science and engineering at Stanford and senior author of the study. "It could even change the way people wear sunglasses."
Credit: Barile et al./Joule 2017
The researchers have filed a patent for their new technology and have entered into discussions with glass manufacturers and other potential partners. However, more research is needed to make the surface area of the windows large enough for commercial applications. The prototypes used in the study are only about 4 square inches (25 square centimetres) in size. The team also wants to reduce manufacturing costs to be competitive with dynamic windows already on the market.
"This is an important area that is barely being investigated at universities," McGehee said. "There's a lot of opportunity to keep us motivated."
Commercially available smart windows are made of materials, such as tungsten oxide, that change colour when charged with electricity. But these tend to be expensive, have a blue tint, can take more than 20 minutes to dim, and become less opaque over their lifetime. The Stanford prototype blocks light through the movement of a copper solution over a sheet of indium tin oxide, modified with platinum nanoparticles.
Credit: Barile et al./Joule 2017
When transparent, the window is clear and lets roughly 80 percent of incoming natural light pass through. When dark, the transmission of light drops below five percent. To test its durability, the researchers switched the windows on and off more than 5,000 times and saw no degradation in the transmission of light.
"We've had a lot of moments where we thought, how is it even possible we've made something that works so well so quickly?" McGehee said. "We didn't tweak what was out there. We came up with a completely different solution."
Perhaps in some future decade, with further advances in nanotechnology, smart windows could be developed that respond to sunlight in real time and instantly change colour – while being affordable enough to feature as standard in every building and vehicle.
Varjo, a tech startup based in Helsinki, Finland, has today unveiled a new VR/AR technology it has been developing in secret. This features nearly 70 times the pixel count of current generation headsets and is sufficient to match human eye resolution.
Varjo ("Shadow" in Finnish) Technologies today announced it has emerged from stealth and is now demonstrating the world's first human eye-resolution headmounted display for upcoming Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) products. Designed for professional users and with graphics an order of magnitude beyond any currently shipping or announced head-mounted display, this major advancement will enable unprecedented levels of immersion and realism.
This breakthrough is accomplished by Varjo's patented technology that replicates how the human eye naturally works, projecting a super-high-resolution image wherever the user's gaze is directed. This is further combined with video-see-through (VST) technology for unparalleled AR/MR capabilities.
Codenamed "20|20" after perfect vision, Varjo's prototype is based on unique technology created by a team of optical scientists, creatives and developers who formerly occupied top positions at Microsoft, Nokia, Intel, Nvidia and Rovio. It will be shipping in Varjo-branded products specifically for professional users and applications starting in late Q4, 2017.
"Varjo's patented display innovation pushes VR technology 10 years ahead of the current state-of-the-art, where people can experience unprecedented resolution of VR and AR content limited only by the perception of the human eye itself," said Urho Konttori, CEO and founder. "This technology – along with Varjo VST – jump-starts the immersive computing age overnight: VR is no longer a curiosity, but now can be a professional tool for all industries."
Researchers from the Netherlands and Germany have identified seven risk genes for insomnia.
An international team of researchers has found, for the first time, seven risk genes for insomnia. This discovery is an important step forward in understanding the biological mechanisms of sleep. In addition, it proves that insomnia is not, as is often claimed, a purely psychological condition.
Insomnia is among the most common health complaints – affecting between 10% and 30% of adults worldwide at any given point in time and up to half in a given year. Even after treatment, poor sleep can remain a persistent vulnerability for many people. Professor Van Someren, a sleep specialist from the Vrije Universiteit Amsterdam (VU), believes his team's findings could lead to an understanding of insomnia at the level of communication within and between neurons, providing new ways of treating the condition. He also hopes this breakthrough will improve the recognition of insomnia.
"Compared to the severity, prevalence and risks of insomnia, only few studies targeted its causes," he says. "Insomnia is all too often dismissed as being 'all in your head'. Our research brings a new perspective: insomnia is also in the genes."
From a sample of 113,000 individuals, the researchers found seven genes for insomnia. These play a role in the regulation of transcription, the process where DNA is read in order to make an RNA copy of it, and exocytosis, the release of molecules by cells in order to communicate with their environment. One of the identified genes, MEIS1, has previously been related to two other sleep disorders: Periodic Limb Movements of Sleep (PLMS) and Restless Legs Syndrome (RLS). By collaborating with Konrad Oexle and colleagues from the Institute of Neurogenomics in Munich, Germany, they concluded that variants in the gene seem to contribute to all three disorders. Strikingly, PLMS and RLS are characterised by restless movement and sensation, respectively, whereas insomnia is characterised mainly by a restless stream of consciousness.
The researchers also found a strong genetic overlap with other traits – such as anxiety disorders, depression, neuroticism, and low subjective wellbeing: "This is an interesting finding, because these characteristics tend to go hand in hand with insomnia. We now know that this is partly due to the shared genetic basis," says neuroscientist Anke Hammerschlag (VU), PhD student and first author of the study.
The team also studied whether the same genetic variants were important for men and women. "Part of the genetic variants turned out to be different," says Professor Danielle Posthuma, a statistical geneticist at VU Amsterdam. "This suggests that, for some part, different biological mechanisms may lead to insomnia in men and women. We also found a difference between men and women in terms of prevalence: in the sample we studied, including mainly people older than 50, 33% of the women reported to suffer from insomnia. For men, this was 24%."
Chipmaker Intel has announced a new generation of processors, including the Core i9 series, its first teraflop desktop CPUs.
Intel has this week introduced a new family of microprocessors – the Core X-series – which the company describes as the most scalable, accessible and powerful desktop platform ever developed. This includes a new Core i9 brand and Core i9 Extreme Edition, the first consumer desktop CPU with 18 cores and 36 threads. The company is also launching the Intel X299, which adds even more I/O and overclocking capabilities.
Given their extreme power and speed, this family of processors is being pitched at gamers, content creators, and overclocking enthusiasts. Intel expects to increase its presence in high-end desktop markets and believes that customers will pay premiums in exchange for higher performance. Prices for the i9 line-up will range from $999 to $1999.
Prior to this announcement, Intel's high-end desktop processors (known as Broadwell-E) came with six, eight or 10 core options. The Core X-series will include five Core i9 chips, with a minimum of 10 cores and the top-end i9-7980XE featuring a massive 18 cores. A major update has also been announced for Intel's Turbo Boost Max Technology 3.0, which will identify the two top cores and direct critical workloads to those, for a significant boost in single- and multithreaded performance.
The Core i9-7980XE will be the first Intel consumer processor to exceed a teraflop of computing power, meaning it can perform a trillion computational operations every second. To put this in perspective, that is comparable to ASCI Red, which reigned as the world's most powerful supercomputer from 1997 until the year 2000. All Core i9 chips will have 3.3GHz base clock speeds, with up to 4.5GHz using Turbo Boost 3.0, and up to 44 PCIe lanes.
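The teraflop figure is plausible from a back-of-envelope peak estimate, assuming each core has two AVX-512 FMA units retiring 32 double-precision FLOPs per cycle (an assumption about the microarchitecture, not a figure from Intel's announcement):

```python
# Back-of-envelope peak-FLOPS estimate for an 18-core chip. flops_per_cycle
# assumes 2 AVX-512 FMA units x 8 double-precision lanes x 2 ops per FMA;
# this is a microarchitectural assumption, not an Intel-quoted figure.
cores = 18
base_clock_hz = 3.3e9          # base clock quoted above
flops_per_cycle = 32           # assumed double-precision FLOPs per core per cycle

peak_tflops = cores * base_clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.2f} TFLOPS")   # comfortably past one teraflop
```

Sustained throughput in real workloads would be lower than this theoretical peak, but the headroom makes the teraflop claim credible.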
"The possibilities with this type of performance are endless," says Gregory Bryant, a senior vice president, in a blog post. "Content creators can have fast image rendering, video encoding, audio production and real-time preview – all running in parallel seamlessly so they spend less time waiting and more time creating. Gamers can play their favourite game, while they also stream, record and encode their gameplay, and share on social media – all while surrounded by multiple screens for a 12K experience with up to four discrete graphics cards."
In addition to Core i9, there are also three new i7 chips and an i5, including the quad-core i5-7640X and i7 models in 4, 6 and 8-core variants. Prices will range from $242 for the i5 to $599 for the i7-7820X.
New human rights laws are needed to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk, according to a paper from the Institute for Biomedical Ethics in Switzerland.
New human rights laws to prepare for advances in neurotechnology that may put the 'freedom of the mind' at risk have been proposed in the open access journal Life Sciences, Society and Policy. The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are:
1. The right to cognitive liberty
2. The right to mental privacy
3. The right to mental integrity, and
4. The right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: "The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology."
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: "Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court; for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for 'neuromarketing', to understand consumer behaviour and elicit desired responses from customers. There are also tools such as 'brain decoders' which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom, which we sought to address with the development of four new human rights laws."
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to 'eavesdrop' on someone's mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. The authors state that, similar to the historical trajectory of the genetic revolution, the ongoing neurorevolution will force a reconceptualisation of human rights laws and even the creation of new ones.
Marcello Ienca added: "Science fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom."