South Korean electronics giant Samsung has successfully developed the world's first adaptive array transceiver operating in the millimetre-wave Ka band for cellular communications.
This new technology will be central to 5G mobile communication systems and will provide data transfer speeds up to several hundred times faster than current 4G networks.
Future networks will require a broad range of frequencies. It was previously assumed that millimetre-wave bands would be impractical over long distances, due to extremely high atmospheric attenuation and other propagation problems. However, Samsung's new adaptive array transceiver, which includes 64 antenna elements, has been shown to overcome this propagation loss: data was transmitted over a distance of 2 km at a frequency of 28 GHz, far higher than the conventional cellular bands, which typically range from several hundred MHz to a few GHz.
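The scale of the challenge can be seen from the free-space path loss equation, which grows with the square of frequency. The sketch below compares 28 GHz with a conventional 2 GHz band over the 2 km distance mentioned above; it is only an illustration, since real links also suffer atmospheric and rain attenuation not modelled here:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 2 km at 28 GHz vs a conventional 2 GHz cellular band
loss_28ghz = fspl_db(2000, 28e9)   # ~127.4 dB
loss_2ghz = fspl_db(2000, 2e9)     # ~104.5 dB
extra = loss_28ghz - loss_2ghz     # ~22.9 dB more loss at 28 GHz
```

That extra ~23 dB of loss is roughly what a many-element adaptive array can recover through beamforming gain, which is one reason Samsung's 64-element approach makes such a link feasible.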
In addition to speeds of 1 gigabit per second (allowing users to download an entire film in a second, or enjoy real-time streaming of ultra HD), 5G will provide a number of other features designed to improve reliability, compatibility, service and user experience. Samsung now plans to accelerate its research and development of 5G mobile and estimates it will be ready for commercialisation in 2020. Other regions too – including Europe and China – plan to bring these services to market by that year.
Already released to developers, Google Glass will have its consumer launch by early 2014. This video is from Playground – a digital creative agency based in Toronto. In the company's own words:
"For us at Playground, Google Glass is exciting. We are constantly trying to dissect the human-technology relationship and Glass represents information technology at its most intimate. The Explorer edition Glass and its Mirror API is an amazing techno-social experiment, but it is an experiment with limitations. We wanted to visualize what Glass may do as the platform matures past today's limits."
Researchers in Spain have managed to give graphene magnetic properties. This breakthrough, published in the journal Nature Physics, opens the door to the development of graphene-based spintronic devices; that is, devices based on the spin or rotation of the electron, which could transform the electronics industry.
TCNQ molecules on graphene layer, where they acquire a magnetic order. Credit: IMDEA-Nanoscience
Scientists were already aware that graphene, a remarkable material consisting of a single layer of carbon atoms arranged in a hexagonal lattice, has extraordinary conductive, mechanical and optical properties. Now it is possible to give it one more: magnetism, a breakthrough for electronics.
This is revealed in a study just published in Nature Physics by the Madrid Institute for Advanced Studies in Nanoscience (IMDEA-Nanociencia) and the Autónoma (UAM) and Complutense (UCM) universities of Madrid. The researchers managed to create a hybrid surface from this material that behaves as a magnet.
Prof. Rodolfo Miranda, Director of IMDEA-Nanociencia: "In spite of the huge efforts to date of scientists all over the world, it has not been possible to add the magnetic properties required to develop graphene-based spintronics. However, these results pave the way to this possibility."
Spintronics is based on the charge of the electron, as in traditional electronics, but also on its "spin", which determines its magnetic moment. A material is magnetic when most of its electrons have the same spin. Because spin can take one of two values, each readable as a binary signal (1 or 0), its use adds two more states to traditional electronics. Both the speed of data processing and the amount of data that can be stored on electronic devices can therefore be increased, with applications in fields such as telecommunications, computing, energy and biomedicine.
A TCNQ molecule on the graphene mesh, which in turn has been grown on a ruthenium crystal. Credit: IMDEA-Nanoscience
In order to develop a graphene-based spintronic device, the challenge was to 'magnetise' the material, and researchers from Madrid found how through the quantum and nanoscience world. The technique involved growing an ultra-precise graphene film over a ruthenium single crystal, inside an ultra-high vacuum chamber where organic molecules of tetracyano-p-quinodimethane (TCNQ) are evaporated on the graphene surface. TCNQ is a molecule that acts as a semiconductor at very low temperatures in certain compounds.
On observing results through a scanning tunnelling microscope (STM), scientists were surprised: organic molecules had organised themselves and were regularly distributed all over the surface, interacting electronically with the graphene-ruthenium substrate.
"We have proved in experiments how the structure of the TCNQ molecules over graphene acquires long-range magnetic order, with electrons positioned in different bands according to their spin," clarifies Prof. Amadeo Vázquez de Parga.
Meanwhile, his colleague Prof. Fernando Martin has conducted modelling studies that have shown that, although graphene does not interact directly with the TCNQ, it does permit a highly efficient charge transfer between the substrate and the TCNQ molecules and allows the molecules to develop long-range magnetic order.
The result is a new graphene-based magnetised layer, which paves the way towards the creation of devices based on what was already considered as the material of the future, but which now may also have magnetic properties.
Japanese electronics firm Hitachi has unveiled "ROPITS" – Robot for Personal Intelligent Transportation System.
ROPITS is designed to aid the short-distance transportation of the elderly, or those with walking difficulties. The vehicle is equipped with a "specified arbitrary point autonomous pick-up and drop-off function", which can navigate to locations specified on a tablet or mobile device. Thanks to its small size and slow speed (3.7 mph, or 5.9 km/h), it can move safely across pavements, squares and open areas without being restricted to roads. On-board sensors provide a 360° view of the surrounding environment, allowing it to sense and react to pedestrians. Actuators and shock absorbers keep the body level at all times, so uneven surfaces can be handled without losing balance.
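At its simplest, this self-levelling behaviour is a feedback control problem: measure the body's tilt and command the actuators to cancel it. The minimal proportional-control sketch below is purely illustrative; the gain and dynamics are invented, as Hitachi has not published its control scheme:

```python
def level_step(tilt_deg: float, gain: float = 0.5) -> float:
    """One control cycle: actuator correction proportional to measured tilt."""
    return -gain * tilt_deg

# Simulate the body settling after a 10-degree disturbance (e.g. a kerb)
tilt = 10.0
for _ in range(10):
    tilt += level_step(tilt)   # apply the correction each cycle
# tilt decays geometrically toward 0 (level): 10 * 0.5**10 ≈ 0.0098 degrees
```

A real vehicle would use a full PID controller with sensor filtering, but the principle of continuously driving the measured tilt toward zero is the same.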
This and other such vehicles may be needed to support future societies with a higher proportion of old people than today. Japan faces a particular problem in this regard, having the largest proportion of elderly citizens in the world. Citizens aged 65+ made up 20% of its population in 2006, a figure forecast to reach 40% by 2060.
Hitachi claims that ROPITS could also be used as an autonomous delivery vehicle for a variety of services. The company intends to continue testing its vehicle and will present further details at the Robotics and Mechatronics Conference 2013, ROBOMEC 2013, to be held in the Tsukuba Special District from 22nd-25th May.
The system, called Zoe, bears a striking resemblance to Holly, the ship's computer in British sci-fi comedy, Red Dwarf. It is based on a template that, in the near future, could allow people to upload their own faces and voices. Users would be able to customise and personalise their own digital assistants for a range of applications – in mobile "face messages", gaming, audio-visual books, as a means of delivering online lectures or presentations, and in various user interfaces.
Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge: “This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being.”
As well as being more expressive than any previous system, Zoe is also remarkably data-light. The program used to run her is just tens of megabytes in size, which means that it can be easily incorporated into even the smallest computer devices, including tablets and smartphones.
It works by using a set of fundamental, “primary colour” emotions. Zoe’s voice, for example, has six basic settings – Happy, Sad, Tender, Angry, Afraid and Neutral. The user can adjust these settings to different levels, as well as altering the pitch, speed and depth of the voice itself. By combining these levels, it becomes possible to pre-set or create almost infinite emotional combinations.
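The "primary colour" analogy suggests a simple linear-blend model. The sketch below is only an illustration: the six setting names come from the article, but the blending scheme and numbers are assumptions, not Zoe's actual implementation:

```python
# Hypothetical blend of Zoe's six basic voice settings.
EMOTIONS = ["Happy", "Sad", "Tender", "Angry", "Afraid", "Neutral"]

def blend(levels: dict) -> dict:
    """Normalise per-emotion levels (e.g. 0-10 sliders) into blend
    weights that sum to 1; all-zero input falls back to Neutral."""
    total = sum(levels.get(e, 0) for e in EMOTIONS)
    if total == 0:
        return {"Neutral": 1.0}
    return {e: levels.get(e, 0) / total for e in EMOTIONS if levels.get(e, 0)}

# e.g. mostly tender with a hint of sadness
weights = blend({"Tender": 8, "Sad": 2})
# weights == {"Sad": 0.2, "Tender": 0.8}
```

Because the weights vary continuously, even a handful of basic settings can be mixed into a practically unlimited range of expressive states, which is what makes the "almost infinite emotional combinations" claim plausible.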
Samsung this week unveiled its latest flagship smartphone – the Galaxy S4. Even lighter and thinner than its predecessor, it features a 13-megapixel back camera, and a 5-inch display with 441 ppi (1920×1080) resolution. It will be available in late April, on 327 networks and in 155 countries.
Other new features include:
• Eye-tracking: pause video and scroll through pages using eye movements alone
• Dual Camera: take simultaneous photos and videos, using both rear and front cameras, and blend them together
• Air View: hover with your fingers to preview the content of an email, S-Planner, image gallery or video without having to open it
• Air Gesture: change a music track, scroll up and down a web page, or accept a call with a wave of your hand
• Story Album: curates content, such as SNS posts, memos, location and weather information, as well as photos and videos, to create a photo album which is personalised around your timeline of special occasions and events
• Group Play: means you can enjoy music, photos and games with those around you, without requiring a Wi-Fi AP or cellular signal
• S-Health software: empowers your life by keeping you up-to-date with health and wellbeing information through a range of accessories
• Samsung WatchON: the Galaxy S4 will transform into an IR remote to control your home entertainment system including TV, set-top box, DVD player and even air conditioner.
Telecoms expert Ernest Doku from uSwitch.com: "The debut of nifty eye motion-sensitive controls allowing users to pause video and scroll through pages with eye movements alone is smart. For commuters crammed in trains – or just those who love a bit of futuristic tech that makes their lives easier – this novel feature will really help the Galaxy S4 to stand out."
Imagine that the chips in your smart phone or computer could repair and defend themselves on the fly, recovering in microseconds from less-than-ideal battery power or total transistor failure. It might sound like the stuff of science fiction, but a team of engineers, for the first time ever, has developed just such a system.
The team, from the High-Speed Integrated Circuits laboratory at the California Institute of Technology (Caltech), has demonstrated this self-healing capability in tiny power amplifiers. The amplifiers are so small, in fact, that 76 of the chips – including everything they need to self-heal – could fit on a single penny. In perhaps the most dramatic of their experiments, the team destroyed various parts of their chips by zapping them multiple times with a high-power laser, and then observed as the chips automatically developed a work-around in less than a second.
Ali Hajimiri, Professor of Electrical Engineering: "It was incredible the first time the system kicked in and healed itself. It felt like we were witnessing the next step in the evolution of integrated circuits. We had literally just blasted half the amplifier and vaporised many of its components, such as transistors, and it was able to recover to nearly its ideal performance."
Until now, even a single fault has often rendered an integrated-circuit chip completely useless. The Caltech engineers wanted to give integrated-circuit chips a healing ability akin to that of our own immune system – something capable of detecting and quickly responding to any number of possible assaults in order to keep the larger system working optimally. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from those sensors feeds into a custom-made application-specific integrated-circuit (ASIC) unit on the same chip, a central processor that acts as the "brain" of the system. The brain analyses the amplifier's overall performance and determines if it needs to adjust any of the system's actuators – the changeable parts of the chip.
Interestingly, the chip's brain does not operate based on algorithms that know how to respond to every possible scenario. Instead, it draws conclusions based on the aggregate response of the sensors. "You tell the chip the results you want and let it figure out how to produce those results," says Steven Bowers, a graduate student in Hajimiri's lab and lead author of the new paper. "The challenge is that there are more than 100,000 transistors on each chip. We don't know all of the different things that might go wrong, and we don't need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention."
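Conceptually, this goal-driven search can be sketched as a simple optimisation loop: read the sensors, score the chip's current performance against the desired result, and keep any actuator adjustment that improves the score. The toy hill-climber below illustrates the idea only; the scoring function and actuator model are invented, not Caltech's design:

```python
import random

def self_heal(score, actuators, steps=500):
    """Greedy search: nudge one actuator at a time, keeping only
    the changes that improve the measured performance score."""
    best = score(actuators)
    for _ in range(steps):
        i = random.randrange(len(actuators))
        delta = random.choice([-1, 1])
        actuators[i] += delta
        new = score(actuators)
        if new > best:
            best = new             # improvement: keep the change
        else:
            actuators[i] -= delta  # no improvement: revert
    return actuators, best

# Toy 'chip': performance peaks when each actuator sits at a target value,
# e.g. after damage has shifted the optimum away from the factory settings.
target = [3, -2, 5, 0]
score = lambda a: -sum((x - t) ** 2 for x, t in zip(a, target))
settings, perf = self_heal(score, [0, 0, 0, 0])
```

The key property matches the article's description: the loop never needs to know *what* went wrong, only whether each adjustment moves the measured performance closer to the requested result.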
Looking at 20 different chips, the team found that the amplifiers with the self-healing capability consumed about half as much power as those without, and their overall performance was much more predictable and reproducible. "We have shown that self-healing addresses four very different classes of problems," says Kaushik Dasgupta, another graduate student also working on the project. The classes of problems include:
• Static variation, a product of manufacturing differences across components
• Long-term aging problems, which arise gradually as repeated use changes the internal properties of the system
• Short-term variations induced by environmental conditions, such as changes in load, temperature and supply voltage
• Accidental or deliberate catastrophic destruction of parts of the circuits
The Caltech team chose to demonstrate this self-healing capability first in a power amplifier for millimeter-wave frequencies. Such high-frequency integrated chips are at the cutting edge of research and are useful for next-generation communications, imaging, sensing, and radar applications. By showing that the self-healing capability works well in such an advanced system, the researchers hope to show that the self-healing approach can be extended to virtually any other electronic system.
"Bringing this type of electronic 'immune system' to integrated-circuit chips opens up a world of possibilities," says Hajimiri. "It is truly a shift in the way we view circuits and their ability to operate independently. They can now both diagnose and fix their own problems without any human intervention, moving one step closer to indestructible circuits."
Researchers from five European universities have developed a cloud-computing platform for robots. The platform allows robots connected to the Internet to directly access the powerful computational, storage, and communications infrastructure of modern data centers – the giant server farms behind the likes of Google, Facebook, and Amazon – for robotics tasks and robot learning.
With the development of the RoboEarth Cloud Engine, the team continues their work towards creating an Internet for robots. The new platform extends earlier work on allowing robots to share knowledge with other robots via a WWW-style database, greatly speeding up robot learning and adaptation in complex tasks.
More intelligent robots
The developed Platform as a Service (PaaS) allows robots to perform complex functions like mapping, navigation or processing of human voice commands within the cloud, at a fraction of the time required by robots' on-board computers. By making enterprise-scale computing infrastructure available to any robot with a wireless connection, the researchers believe that the new computing platform will help pave the way towards lighter, cheaper, more intelligent robots.
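Whether offloading a task actually saves time comes down to a simple latency trade-off: the cloud wins only if its compute time, plus the time to ship the sensor data over the wireless link, beats on-board execution. The back-of-the-envelope sketch below uses purely illustrative figures, not RoboEarth benchmarks:

```python
def offload_wins(t_onboard_s, t_cloud_s, payload_bytes, uplink_bps, rtt_s=0.05):
    """True if sending the task to the cloud beats running it locally."""
    transfer_s = payload_bytes * 8 / uplink_bps  # time to upload the data
    return t_cloud_s + transfer_s + rtt_s < t_onboard_s

# e.g. a mapping step: 2.0 s on-board, 0.1 s on a server,
# 500 kB of sensor data over a 10 Mbit/s wireless link
print(offload_wins(2.0, 0.1, 500_000, 10_000_000))  # True: 0.1 + 0.4 + 0.05 < 2.0
```

This is why the researchers highlight rising wireless data rates: as the transfer term shrinks, ever more of a robot's computation crosses the break-even point and can move into the cloud.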
Mohanarajah Gajamohan, from the Swiss Federal Institute of Technology (ETH Zurich) and Technical Lead of the project: "The RoboEarth Cloud Engine is particularly useful for mobile robots, such as drones or autonomous cars, which require lots of computation for navigation. It also offers significant benefits for robot co-workers, such as factory robots working alongside humans, which require large knowledge databases, and for the deployment of robot teams."
Dr. Heico Sandee, RoboEarth's Program Manager at Eindhoven University of Technology in the Netherlands: "On-board computation reduces mobility and increases cost. With the rapid increase in wireless data rates caused by the booming demand of mobile communications devices, more and more of a robot's computational tasks can be moved into the cloud."
Impact on jobs
While high-tech companies that rely heavily on data centers have been criticised for creating fewer jobs than traditional companies (e.g. Google and Facebook employ fewer than half as many workers per dollar of revenue as General Electric or Hewlett-Packard), the researchers don't believe that this new robotics platform should be cause for alarm. According to a recent study by the International Federation of Robotics and Metra Martech, entitled "Positive Impact of Industrial Robots on Employment," robots don't kill jobs but rather tend to lead to an overall growth in jobs. Whether this trend remains true in the coming decades, however, remains to be seen.
In a world first, researchers have successfully replaced 75 per cent of an injured patient's skull with a precision 3D-printed polymer version. In the near future, any type of damaged bone could routinely be replaced with custom-manufactured implants.
Connecticut-based Oxford Performance Materials (OPM) was founded in 2000 with a simple purpose: to exploit a highly advanced polymer known as polyether ether ketone (PEEK). PEEK is an ultra-high performance thermoplastic with a range of applications. Its advantages include high strength and toughness, chemical resistance and low toxicity.
After years of development, OPM has now gained FDA approval for its OsteoFab Patient Specific Cranial Device (OPSCD). This is the company's brand name for 3D-printed medical implants created from the PEEK polymer. These implants are "grown" layer by layer – directly from a digital CAD file, CT scan or MRI file – without the aid of tooling and with few practical limits on what can be produced. As such, OsteoFab is ideal for unique, one-off implants that are specifically shaped to an individual patient's anatomy.
One very desirable use of patient-specific implants is the filling of bony voids in the skull. This was demonstrated by OPM's researchers on 4th March, when they scanned the head of an unnamed patient and replaced 75% of his skull with polymer components.
Scott DeFelice, President and CEO of OPM: "It is our firm belief that the combination of PEEK and Additive Manufacturing is a highly transformative and disruptive technology platform that will substantially impact all sectors of the orthopaedic industry. We have sought our first approval within cranial implants because the need was most compelling; however, this is just the beginning. We will now move systematically throughout the body, in an effort to deliver improved outcomes at lower overall cost to the patient and healthcare provider."
Up to 500 U.S. patients could use skull bone replacements every month, according to DeFelice. Possible patients include those with cancerous bone, as well as car accident victims, and military members suffering from head trauma.
FDA clearance for OPM marks the first approval of an additively manufactured polymer implant in the USA. The company now intends to seek well-qualified partners to bring this revolutionary process to market.
Last year, a similar procedure was undertaken in the Netherlands, when surgeons used 3D printing to replace the entire lower jawbone of an 83-year-old woman.
Peter Diamandis is the founder and chairman of X PRIZE Foundation, co-founder and chairman of Singularity University and the co-author of Abundance: The Future Is Better Than You Think. He is also co-founder of the asteroid mining company, Planetary Resources. In this video, he discusses the future of humans evolving into meta-intelligence group-minds and invites participants to the second international Global Future 2045 congress (June 2013).
The rise of connected devices will drive mobile data revenues past voice revenues globally by 2018, according to a new report from the GSM Association (GSMA). This data explosion will provide better access to healthcare and education, help lift people out of poverty, fight hunger and reduce carbon emissions.
Mobile data is being driven by a surge in demand for connected devices and machine-to-machine (M2M) communications, as we accelerate towards a truly networked world. This is transforming the socioeconomic future of people in both developed and developing countries. The new GSMA report, produced in collaboration with PwC, reveals how innovative mobile connected products and services will revolutionise people's lives over the next five years:
In developed countries:
• Mobile health could save $400 billion in healthcare costs in OECD countries
• Connected cars could save one in nine lives through emergency calling services, providing quicker and more accurate location and response times
• Mobile education can reduce student drop-outs by eight per cent
• Smart metering can cut carbon emissions by 27 million tonnes – the equivalent of planting 1.2 billion trees
In developing countries:
• Mobile health could save one million lives in sub-Saharan Africa
• Automotive data will improve food transport and storage, helping feed more than 40 million people annually – equivalent to the entire population of Kenya
• Mobile education can enable 180 million students to further their education
• Smart cities with intelligent transport systems could reduce commute times by 35 per cent, giving commuters back a whole week each year
Michael O'Hara, Chief Marketing Officer, GSMA: "Mobile data is not just a commodity, but is becoming the lifeblood of our daily lives, society and economy, with more and more connected people and things. This is an immense responsibility and the mobile industry needs to continue collaborating with governments and key industry sectors to deliver products and services that help people around the world improve their businesses and societies."
The increase in mobile operator data revenues is a global trend, across both developed and emerging markets. In 2012, Japan became the first country where data revenues exceeded voice revenues, due largely to the availability of advanced mobile broadband networks and a higher adoption of the latest smartphones, tablets and connected devices. This year, Argentina's data revenues will exceed voice revenues – attaining this milestone ahead of the US and UK, which will reach this point in 2014. Kenya will experience this shift in 2016, with global revenues following in 2018 as mobile broadband continues to thrive.
In a feat that sounds like science fiction, researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve behavioural puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart, via the Internet.
Credit: Duke University Medical Center
The results of these projects suggest the future potential for linking multiple brains, to form what the research team calls an "organic computer." This could allow sharing of motor and sensory information among groups of animals – and perhaps, eventually, humans. The study was published today in the journal Scientific Reports.
Miguel Nicolelis, PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine: "Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought. In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we then asked was, 'if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.
One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this "encoder" rat pressed the right lever, a sample of its brain activity that coded its behavioural decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.
The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.
The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of around 70 percent, only slightly below the 78 percent ceiling that the researchers had theorised was achievable, based on success rates when sending signals directly to the decoder rat's brain.
Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a "behavioural collaboration" between the pair of rats.
"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behaviour to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."
In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.
The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
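A 65 percent hit rate being "significantly above chance" can be sanity-checked with a one-sided binomial test: with two ports, random poking succeeds half the time, so the question is how likely 65 percent would be by luck alone. The sketch below uses an assumed trial count of n = 100 for illustration, since the exact number of trials is not given in the article:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): a one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of scoring >= 65 correct out of 100 trials by chance alone
p_value = binom_tail(100, 65)   # ~0.0018, far below the usual 0.05 threshold
```

Under that assumption, a chance explanation for the decoder's performance can be rejected comfortably; the real study's statistics would depend on its actual trial counts.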
To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.
"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable network of animal brains distributed in many different locations."
Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."
"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder's brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains," said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or "brain-net." Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex tasks.
"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.
"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."
Such complex experiments will be enabled by the laboratory's ability to record signals from 2,000 brain cells at once. The researchers hope to record the electrical activity produced simultaneously by 10,000 to 30,000 cortical neurons within the next five years. These massive brain recordings will enable more precise control of motor neuroprostheses — such as those being developed by the Walk Again Project — to restore motor control to paralysed people, Nicolelis said.
The Walk Again Project recently received a $20 million grant from FINEP, a Brazilian research funding agency, to allow the development of the first brain-controlled, whole-body exoskeleton for restoring mobility in severely paralysed patients. Demonstration of this technology is scheduled for the opening game of the 2014 Soccer World Cup in Brazil.
Researchers at Northwestern University have developed the first stretchable lithium-ion battery – a device that could power a new generation of flexible electronics.
The power and voltage of the stretchable battery are similar to a conventional lithium-ion battery of the same size, but the flexible battery can stretch up to 300 percent of its original size and still function. It can work for eight to nine hours before it needs recharging, which can be done wirelessly.
The potential applications are diverse and may include wearable computers, or even implantable electronics, that could monitor everything from brain waves to heart activity – succeeding where flat, rigid batteries would fail.
Professor Yonggang Huang, who led the portion of the research focused on theory, design and modeling: "We start with a lot of battery components side by side in a very small space, and connect them with tightly packed, long wavy lines. These wires provide the flexibility. When we stretch the battery, the wavy interconnecting lines unfurl, much like yarn unspooling. And we can stretch the device a great deal and still have a working battery."
Last year, Google announced "Project Glass" – a research and development program which aims to prototype and build an augmented reality (AR) head-mounted display. The project's intended purpose was to allow hands-free displaying of information currently found on smartphones, while providing interaction with the Internet via natural language voice commands, in a manner similar to the iPhone application Siri.
Developers were given early access to the device for $1,500, with a consumer version expected in 2014. New details have now emerged on the company's website, including this video which shows the glasses in action. The search giant is offering trials of the product to "bold, creative individuals" and wants people to suggest ways in which they would make use of the headset.
New research from Indiana University has found that machine learning – the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems – can drastically improve both the cost and quality of health care in the United States.
Using an artificial intelligence framework, combining Markov Decision Processes and Dynamic Decision Networks, IU School of Informatics and Computing researchers Casey Bennett and Kris Hauser show how simulation modeling that understands and predicts the outcomes of treatment could reduce healthcare costs by over 50 percent while also improving patient outcomes by nearly 50 percent.
The work by Hauser, assistant professor of computer science, and PhD student Bennett improves upon their earlier work that showed how machine learning could determine the best treatment at a single point in time for an individual patient.
By using a new framework that employs sequential decision-making, the previous single-decision research can be expanded into models that simulate numerous alternative treatment paths out into the future; maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and continually plan/re-plan as new information becomes available. In other words, it can "think like a doctor."
"The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects," Bennett said.
Moreover, the approach is non-disease-specific – it could work for any diagnosis or disorder – simply by plugging in the relevant information.
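The core machinery of such a framework, a Markov Decision Process solved by value iteration over treatment sequences, can be sketched in a few lines. Everything below (the states, the two hypothetical treatments, the transition probabilities and the costs) is invented purely for illustration and is not taken from the Bennett and Hauser models:

```python
# Minimal value-iteration sketch of a toy treatment-planning MDP.
# All states, treatments, probabilities and costs are invented for
# illustration; they are not from the Bennett & Hauser study.

STATES = ["sick", "improving", "healthy"]
ACTIONS = ["treat_A", "treat_B"]

# P[(state, action)] -> list of (next_state, probability)
P = {
    ("sick", "treat_A"): [("improving", 0.6), ("sick", 0.4)],
    ("sick", "treat_B"): [("improving", 0.3), ("sick", 0.7)],
    ("improving", "treat_A"): [("healthy", 0.5), ("improving", 0.5)],
    ("improving", "treat_B"): [("healthy", 0.7), ("improving", 0.3)],
    ("healthy", "treat_A"): [("healthy", 1.0)],
    ("healthy", "treat_B"): [("healthy", 1.0)],
}

COST = {"treat_A": 500, "treat_B": 200}  # immediate cost per treatment

def value_iteration(gamma=0.95, iters=500):
    """Expected discounted cost-to-go V[s] when acting optimally from s."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            if s == "healthy":   # terminal state: no further treatment cost
                continue
            V[s] = min(
                COST[a] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in ACTIONS
            )
    return V

def best_action(s, V, gamma=0.95):
    """Greedy one-step lookahead: the treatment minimising expected cost."""
    return min(ACTIONS, key=lambda a:
               COST[a] + gamma * sum(p * V[s2] for s2, p in P[(s, a)]))

V = value_iteration()
plan = best_action("sick", V)
```

By deliberating over whole sequences of transitions rather than a single visit, the solver can prefer a treatment whose pay-off only appears several steps downstream, which is the "think like a doctor" behaviour described above.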
The new work addresses three vexing issues related to health care in the U.S.:
Rising costs, expected to reach 30 percent of GDP by 2050;
Quality of care, where patients receive correct diagnosis and treatment less than half the time on a first visit;
Lag time of 13 to 17 years between research and practice in clinical care.
"We're using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths," Bennett said. "The framework here easily out-performs the current treatment-as-usual, case-rate/fee-for-service models of health care."
Bennett is also a data architect and research fellow with Centerstone Research Institute, the research arm of Centerstone, the nation's largest not-for-profit provider of community-based behavioral health care. The two researchers had access to clinical data, demographics and other information on over 6,700 patients who had major clinical depression diagnoses, of whom about 65 to 70 percent had co-occurring chronic physical disorders like diabetes, hypertension and cardiovascular disease.
Using 500 randomly selected patients from that group for simulations, the two compared actual doctor performance and patient outcomes against sequential decision-making models, all using real patient data. They found great disparity in the cost per unit of outcome change when the artificial intelligence model's cost of $189 was compared to the treatment-as-usual cost of $497.
"This was at the same time that the AI approach obtained a 30 to 35 percent increase in patient outcomes," Bennett said. "And we determined that tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost."
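The quoted figures fit together: dropping from $497 to $189 per unit of outcome change is a reduction of roughly 62 percent, which is where the "over 50 percent" cost claim comes from. As a quick check:

```python
# Quick check of the cost figures quoted above: the drop from the
# treatment-as-usual cost ($497 per unit of outcome change) to the
# AI model's cost ($189) underlies the "over 50 percent" claim.
usual_cost, ai_cost = 497, 189
reduction = (usual_cost - ai_cost) / usual_cost
print(f"cost reduction: {reduction:.0%}")   # about 62%
```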
While most medical decisions are based on case-by-case, experience-based approaches, there is a growing body of evidence that complex treatment decisions might best be handled through modeling rather than intuition alone.
"Modeling lets us see more possibilities out to a further point, which is something that is hard for a doctor to do," Hauser said. "They just don't have all of that information available to them."
Using the growing availability of electronic health records, health information exchanges, large public biomedical databases and machine learning algorithms, the researchers believe the approach could serve as the basis for personalised treatment through integration of diverse, large-scale data passed along to clinicians at the time of decision-making for each patient. Centerstone alone, Bennett noted, has access to health information on over 1 million patients each year.
"Even with the development of new AI techniques that can approximate or even surpass human decision-making performance, we believe that the most effective long-term path could be combining artificial intelligence with human clinicians," Bennett said. "Let humans do what they do well, and let machines do what they do well. In the end, we may maximise the potential of both."
"Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach" was published recently in Artificial Intelligence in Medicine. The research was funded by the Ayers Foundation, the Joe C. Davis Foundation and Indiana University.
Photonics West – the world's leading photonics, laser, and biomedical optics conference – took place this week in San Francisco. During the event, a German company called Nanoscribe GmbH presented the world's fastest 3D printer of micro- and nanostructures.
Nanoscribe's latest printer allows the smallest three-dimensional objects – often smaller than the width of a human hair – to be manufactured with minimum time consumption and maximum resolution, using a novel laser lithography method. Its printed polymer waveguides, intended to replace conventional electronics with higher-performance optical circuits, can reach data transfer rates of more than 5 terabits (Tb) per second. Printing speed is increased by a factor of about 100, with jobs that previously took several hours now being possible in a matter of minutes.
This huge increase in speed is possible thanks to a galvo mirror system, a technology that is also applied in laser shows and the scanning units of CD and DVD drives. Reflecting a laser beam off the rotating galvo mirrors facilitates rapid and accurate laser focus positioning. This video is shown in real-time:
This ultra-precise fabrication allows feature sizes ranging down to just 100 nanometres (nm). At present, the total area of the scanning field is limited to a few hundred micrometres (μm) due to the optical properties of the focusing objective. Just as floor tiles must be joined precisely, the respective scanning fields must be connected seamlessly and accurately. However, by using a patented autofocus technique and high-precision positioning stages, areas can be extended almost arbitrarily by a so-called stitching process.
Martin Hermatschweiler, the managing director of Nanoscribe GmbH: "We are revolutionising 3D printing on the micrometre scale. Precision and speed are achieved by the industrially established galvo technology. Our product benefits from more than a decade of experience in photonics, a key technology of the 21st century."
Taiwanese company Polytron Technologies is working on a touchscreen smartphone that is almost entirely transparent. Due to its early stage of development, there is no software running yet, but the hardware is mostly in place. In this video from Mobile Geeks, the presenter claims that battery technology is "not even close" to becoming transparent yet. This is not true, however, as demonstrated by a breakthrough at Stanford University in 2011.
RP-VITA, created by iRobot and InTouch Health, enables doctors to provide patient care from anywhere in the world via a telemedicine solution.
US technology firm iRobot Corp. has announced that its RP-VITA Remote Presence Robot has received 510(k) clearance from the U.S. Food and Drug Administration (FDA) for use in hospitals. RP-VITA is the first autonomous navigation remote presence robot to receive such authorisation.
This new machine is a joint effort between two industry leaders, iRobot and InTouch Health. The robot combines the latest in autonomous navigation and mobility technologies developed by iRobot with state-of-the-art telemedicine and electronic health record integration developed by InTouch Health. RP-VITA allows remote doctor-to-patient consults, ensuring that the physician is in the right place at the right time and has access to the necessary clinical information to take immediate action. The robot has unprecedented ease of use. It maps its own environment and uses an array of sophisticated sensors to autonomously move about a busy space without interfering with people or other objects. Using an intuitive iPad interface, a doctor can visit a patient, and communicate with hospital staff and patients with a single click, regardless of their location.
The FDA clearance specifies that RP-VITA can be used for active patient monitoring in pre-operative, peri-operative and post-surgical settings – including cardiovascular, neurological, prenatal, psychological and critical care assessments and examinations.
RP-VITA is being sold into the healthcare market by InTouch Health as its new flagship remote presence device. iRobot will continue to explore adjacent market opportunities for robots like RP-VITA and the iRobot Ava mobile robotics platform.
Colin Angle, chairman and CEO of iRobot: "FDA clearance of a robot that can move safely and independently through a fast-paced, chaotic and demanding hospital environment is a significant technological milestone for the robotics and healthcare industries. There are very few environments as difficult to maneuver as that of a busy ICU or emergency department. Having crossed this technology threshold, the potential for self-navigating robots in other markets, and for new applications, is virtually limitless."
Yulun Wang, chairman and CEO of InTouch Health: "Remote presence solutions have proven their worth in the medical arena for quite some time. RP-VITA has undergone stringent testing, and we are confident that the robot's ease of use and unique set of capabilities will enable new clinical applications and uses."
Stanford Engineering's Center for Turbulence Research (CTR) has set a new record in computational science by successfully using a supercomputer with more than 1 million computing cores. This was done to solve a complex fluid dynamics problem – the prediction of noise generated by a supersonic jet engine.
Joseph Nichols, a research associate in the centre, worked on the newly installed Sequoia IBM Bluegene/Q system at Lawrence Livermore National Laboratory (LLNL). Sequoia recently topped the list of the world's most powerful supercomputers, boasting 1,572,864 compute cores (processors) and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect.
Because of Sequoia's impressive number of cores, Nichols was able to show for the first time that million-core fluid dynamics simulations are possible – and also to contribute to research aimed at designing quieter aircraft engines.
The physics of noise
The exhausts of high-performance aircraft at takeoff and landing are among the most powerful man-made sources of noise. For ground crews, even those wearing the most advanced hearing protection available, this creates an acoustically hazardous environment. To the communities surrounding airports, such noise is a major annoyance and a drag on property values.
Understandably, engineers are keen to design new and better aircraft engines that are quieter than their predecessors. New nozzle shapes, for instance, can reduce jet noise at its source, resulting in quieter aircraft.
Predictive simulations – advanced computer models – aid in such designs. These complex simulations allow scientists to peer inside and measure processes occurring within the harsh exhaust environment that is otherwise inaccessible to experimental equipment. The data gleaned from these simulations are driving computation-based scientific discovery as researchers uncover the physics of noise.
More cores, more challenges
Parviz Moin, a Professor in the School of Engineering and Director of CTR: "Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed."
CFD simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be.
And yet, despite the additional computing horsepower, coordinating the calculations only becomes harder as core counts grow. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.
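The way a supercomputer divides a simulation can be illustrated with a toy one-dimensional diffusion stencil: the grid is cut into chunks, each chunk borrows one "halo" cell from its neighbours before stepping, and the stitched-together result matches the serial computation exactly. This is a sequential sketch of the idea only; it is not the CharLES code, which does this in parallel over vastly larger three-dimensional meshes:

```python
# Toy illustration of splitting a simulation domain across "cores".
# A 1-D diffusion stencil is advanced on the full grid, and on the
# grid split into chunks with one-cell halo exchanges; the decomposed
# result matches the serial one exactly.

def step_serial(u, alpha=0.1):
    # One explicit diffusion step with fixed boundary values.
    return [u[0]] + [
        u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

def step_decomposed(u, n_chunks=4, alpha=0.1):
    size = len(u) // n_chunks
    new = []
    for c in range(n_chunks):
        lo, hi = c * size, (c + 1) * size
        # "Halo exchange": each chunk borrows one neighbour cell per side.
        left = u[lo - 1] if lo > 0 else None
        right = u[hi] if hi < len(u) else None
        chunk = ([left] if left is not None else []) + u[lo:hi] \
              + ([right] if right is not None else [])
        stepped = step_serial(chunk, alpha)
        # Drop the halo cells before stitching chunks back together.
        start = 1 if left is not None else 0
        end = -1 if right is not None else len(stepped)
        new.extend(stepped[start:end])
    return new
```

The halo exchange is exactly the "carefully orchestrated balance between computation, memory and communication" described above: each extra chunk boundary adds communication, which is why more cores do not automatically mean proportionally faster runs.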
Ironing out the wrinkles
Over the past few weeks, Stanford researchers and LLNL computing staff have been working closely to iron out these last few wrinkles. This week, they were glued to their terminals during the first "full-system scaling" to see whether initial runs would achieve stable run-time performance. They watched eagerly as the first CFD simulation passed through initialisation, then were thrilled as the code performance continued to scale up to and beyond the all-important one-million-core threshold, and as the time-to-solution declined dramatically.
"These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Center for Turbulence Research previously," said Nichols. "The implications for predictive science are mind-boggling."
The current simulations were a homecoming of sorts for Nichols. He was inspired to pursue a career in supercomputing as a high-school student, when he attended a two-week summer program at the Lawrence Livermore computing facility in 1994, sponsored by the Department of Energy. Back then, he worked on the Cray Y-MP, one of the fastest supercomputers of its time. "Sequoia is approximately 10 million times more powerful than that machine," Nichols noted.
The Stanford ties go deeper still. The computer code used in this study is named CharLES and was developed by former Stanford senior research associate Frank Ham. This code utilises unstructured meshes to simulate turbulent flow in the presence of complicated geometry.
In addition to jet noise simulations, Stanford researchers are using the CharLES code to study advanced-concept scramjet propulsion systems, used in hypersonic flight at many times the speed of sound.
At the Consumer Electronics Show (CES) in Las Vegas, a company called Tactus has been demonstrating the world's first fully-integrated dynamic touchscreen tablet.
This 7" device showcases the company's tactile touchscreen technology and introduces an entirely new category of product made possible through its Tactus Morphing Tactile™ surface.
By enhancing both function and usability, Tactus makes it possible to merge the essential capabilities of smartphones, tablets and laptops through a true physical interface. In a world of flat, static devices, Tactus aims to bring new life to touchscreens by enabling real, physical buttons that rise up from a screen's surface on demand, then disappear back into the screen, leaving a flat, transparent surface when no longer needed.
Compared with physical keyboards, normal touchscreens increase input errors and decrease typing speed for most users. It can also be difficult to know when a "button" has been pushed on a completely flat screen, and there are no orientation cues to guide fingers to the right location. Tactus aims to solve these problems.
An earlier prototype was seen at last year's Society for Information Display (SID) conference. Since then, a number of design improvements have been made – which include a new coating material to reduce glare, a reduction in the controller's size by 70 percent and a doubling of the speed at which the keyboard activates.
The Consumer Electronics Show (CES) – the biggest technology exhibition of the year – is currently underway in Las Vegas. Among the companies present is Sharp, which has just released a video exploring the future possibilities of "IGZO", a new semiconducting material that has already begun to appear in its products.
IGZO stands for "Indium Gallium Zinc Oxide" and is used as the channel for a transparent thin-film transistor. It replaces amorphous silicon as the active layer of an LCD screen and, with 40 times the electron mobility of amorphous silicon, allows either smaller pixels (for screen resolutions higher than HDTV) or much faster response times. It is ultra-responsive to touch, drastically reducing the noise generated during touch input, which allows for quick, easy and more natural-feeling writing and smooth lines. It is also far more energy efficient, maintaining an onscreen image for a period of time without refreshing, even when the current is off.
Sharp is the first company to successfully mass produce IGZO. In April 2012, it was announced that they would be producing bulk volumes of 32-inch 3840×2160, 10-inch 2560×1600 and 7-inch 1280×800 panels. In addition to IGZO, Sharp is showcasing a range of other next-generation TVs and devices – including its 2013 AQUOS® LED TV lineup, featuring the world's biggest LED TV (90" diagonal).
Toshi Osawa, the CEO and Chairman of Sharp: "Whether in your home or in your hand, display technology is everywhere. From game-changing IGZO, to stunning Ultra HD products, and large screen televisions, the introductions we are making at CES 2013 will advance people's lives at home, work and everywhere in between."
A new computer being developed at the Massachusetts Institute of Technology can display interactive images on any surface, just by screwing into a light socket.
The team behind the device – led by student Natan Linder – aims to create "a new form factor for a compact and kinetic projected augmented reality interface."
LuminAR combines a laser pico-projector, camera and wireless computer, with software that can recognise objects and sense when a finger or hand is touching the surface. It also functions as a scanner with built-in wi-fi.
The project was developed through 2010, and demonstrated earlier this year at the CHI Conference on Human Factors in Computing Systems. The team has now released a video of its design evolution and potential commercial applications:
A decade ago, an Oxford philosopher put forth the notion that the universe we live in might in fact be a computer simulation run by our descendants. While that seems far-fetched, perhaps even incomprehensible, a team of physicists at the University of Washington has come up with a potential test to see if the idea holds water.
The concept that humanity might be living in a computer simulation was discussed in a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford. In the paper, he argued that at least one of three possibilities must be true:
The human species is likely to go extinct before reaching a “posthuman” stage.
Any posthuman civilisation is very unlikely to run a significant number of simulations of its evolutionary history.
We are almost certainly living in a computer simulation.
He also held that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”
With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.
Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe accurately, on the scale of one 100-trillionth of a metre, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.
Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.
However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.
The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.
“If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.
Savage and colleague Silas Beane of the University of New Hampshire, who collaborated while at the UW’s Institute for Nuclear Theory, and Zohreh Davoudi, a UW physics graduate student, suggest that the signature could show up as a limitation in the energy of cosmic rays.
In a paper they have posted on arXiv, an online archive for preprints of scientific papers in a number of fields, including physics, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model, but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.
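A back-of-envelope version of the lattice-cutoff reasoning (a sketch only, not the paper's detailed analysis) turns this into simple arithmetic: a lattice with spacing a cannot represent momenta above roughly π·ħ/a, so if the highest observed cosmic-ray energies (the GZK cutoff, about 5×10^19 eV) were really a lattice artefact, the spacing would have to be of order 10^-26 metres:

```python
import math

# Back-of-envelope version of the lattice-cutoff argument (a sketch,
# not the paper's detailed analysis). A lattice with spacing `a` cannot
# represent momenta above about pi*hbar/a, so if the GZK cutoff in the
# cosmic-ray spectrum were a lattice artefact, the spacing would have
# to satisfy  a <~ pi * (hbar*c) / E_GZK.

HBAR_C = 1.97327e-7   # hbar*c in eV*metres (= 197.327 MeV*fm)
E_GZK = 5e19          # approximate GZK cutoff energy, in eV

a_max = math.pi * HBAR_C / E_GZK
print(f"implied lattice spacing: {a_max:.1e} m")   # about 1.2e-26 m
```

Any such lattice would be many orders of magnitude finer than the grids used in today's lattice QCD simulations, which is why the proposed test looks for indirect signatures, such as directional anisotropy, rather than resolving the lattice itself.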
“This is the first testable signature of such an idea,” Savage said.
If such a concept turned out to be reality, it would raise other possibilities as well. For example, Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.
“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?’” she said.