Blog » AI & Robotics

 
     
 

27th April 2017

AI uses machine learning to mimic human voices

A Canadian startup has developed a new algorithm capable of replicating any human voice from only a 60-second audio sample.

 

 

 

Montreal-based startup Lyrebird is named after the ground-dwelling Australian bird renowned for mimicking natural and artificial sounds from its surrounding environment. This week, the company unveiled a new voice-imitation algorithm that can mimic a person's speech and have it read any text with a given emotion, based on the analysis of just a few dozen seconds of audio recording. In the sample above, a recreation of Barack Obama can be heard alongside Donald Trump and Hillary Clinton.

Lyrebird claims this innovation can take AI software a step further by offering new speech synthesis solutions to developers. Users will be able to generate entire dialogues with the voice of their choice, or design from scratch completely new and unique voices tailored for their needs. Suited to a wide range of applications, the algorithm can be used for personal assistants, reading of audio books with famous voices, speech synthesis for people with disabilities, connected devices of any kind, animated movies or video game characters.
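
Lyrebird has not published its implementation details in this announcement, so the following is only a loose, hypothetical sketch of the general pattern used in few-shot voice imitation research: a short enrolment recording is distilled into a fixed-size speaker embedding, which then conditions a synthesiser for arbitrary text. Every function name, shape and number below is invented for illustration.

```python
# Conceptual sketch only: Lyrebird's actual method is not described here. This
# toy code merely illustrates distilling a short voice sample into a fixed-size
# "speaker embedding" that conditions a synthesiser. All names, shapes and
# numbers are hypothetical.
import numpy as np

def speaker_embedding(sample_frames: np.ndarray) -> np.ndarray:
    """Reduce a (frames, features) array from a ~60-second sample to a single
    fixed-length vector characterising the speaker."""
    return sample_frames.mean(axis=0)

def synthesise(text: str, embedding: np.ndarray) -> np.ndarray:
    """Stand-in for a neural synthesiser: a real system would map the pair
    (text, embedding) to a waveform. Here we only return placeholder audio
    whose length depends on the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    duration_seconds = 0.08 * len(text)            # rough speaking time
    samples = int(duration_seconds * 16000)        # 16 kHz placeholder waveform
    return rng.standard_normal(samples) * 1e-3

# Usage: one short enrolment sample is enough to have any new text "read out".
enrolment = np.random.rand(6000, 40)               # ~60 s of 40-dim audio features
voice = speaker_embedding(enrolment)
audio = synthesise("Any text can now be read in this voice.", voice)
```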

Lyrebird relies on deep learning models developed at the MILA lab at the University of Montréal, where its three founders, Alexandre de Brébisson, Jose Sotelo and Kundan Kumar, are currently PhD students. The startup is advised by three of the most prolific professors in the field: Pascal Vincent, Aaron Courville and Yoshua Bengio. The latter, director of MILA and an AI pioneer, wants to make Montréal a world leader in artificial intelligence, and this new startup is part of that vision.

While the quality and flow may seem a little distorted in the above clip, the overall recreation is uncanny. Given how quickly information technology tends to improve these days, even better versions with near-perfect mimicry will surely emerge within the next few years. The implications are both amusing and, at the same time, rather alarming: when combined with real-time face capture software, such as Face2Face, it could be relatively easy to depict famous people making statements they never actually said in the real world.

"The situation is comparable to Photoshop," says de Brébisson. "People are now aware that photos can be faked. I think in the future, audio recordings are going to become less and less reliable [as evidence]."

---

 

10th April 2017

Robotics breakthrough could lead to fully automated warehouses

RightHand Robotics, a startup company near Boston, has unveiled a new automated picking solution for warehouses that can recognise and retrieve individual items from boxes.

 

 

 

With an ongoing explosion in e-commerce – combined with a shrinking workforce – pressures have never been higher on warehouses to fulfil orders faster and more efficiently. To address these challenges, a new startup company has developed "RightPick", a combined hardware and software solution that handles the key task of picking individual items, or "piece-picking."

RightHand Robotics (RHR), the developers of RightPick, are based in Massachusetts, USA. The team was formed by a collaboration between researchers from Harvard's Biorobotics Lab, the Yale Grab Lab, and MIT, focused on groundbreaking research into grasping systems, intelligent hardware sensors, computer vision and applied machine learning.

Unlike traditional factory robots, which can be complex to set up and are limited to fixed uses, RHR create machines that are simple to integrate and highly flexible. The new system demonstrated in the video above can automate a task that robots have previously struggled to master: recognising and retrieving individual items from boxes, at rates of up to 600 per hour. This core competency represents a significant advance towards fully automated warehouses that could remove the need for humans. As e-commerce continues to grow, the trend is moving away from bulk or pallet-load handling, toward single SKUs and piecemeal items.

"The supply chain of the future is more about pieces than pallets," says Leif Jentoft, a co-founder of RHR. "RightHand can help material handling, third-party logistics and e-commerce warehouses lower costs by increasing automation."

RightPick can handle thousands of different items, using a machine learning backend coupled with a sensorised robot hand that works in concert with all industry-leading robotic arms. With rapid setup, remote support and easy integration, the system can demonstrate its value within a matter of hours, across a wide variety of workflows – such as sorting batch-picked items, picking items from Automated Storage and Retrieval Systems (ASRS), inducting items into a unit sorter, order quality assurance, and more. RHR also announced that it has raised $8 million in Series A funding from various companies and angel investors.
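
RHR has not disclosed how RightPick's software works internally. Purely as an illustrative sketch, a learned piece-picking loop of this general kind ranks candidate grasps with a trained success predictor, attempts the most promising ones, and uses the hand's sensors to confirm that an item was actually acquired; every class, name and score below is hypothetical.

```python
# Illustrative sketch only: RightHand Robotics has not published RightPick's
# internals. This toy loop shows a common pattern for learned piece-picking:
# score candidate grasps with a trained model, try the most promising one,
# and use the hand's sensors to confirm that an item was actually acquired.
import random
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    x: float          # approach point inside the bin (metres)
    y: float
    z: float
    angle: float      # wrist rotation (radians)

def predict_success(candidate: GraspCandidate, bin_image_features) -> float:
    """Stand-in for a learned grasp-success model, e.g. a CNN over the bin image."""
    return random.random()                  # hypothetical score in [0, 1]

def item_confirmed_in_hand() -> bool:
    """Stand-in for the sensorised hand's feedback after closing its fingers."""
    return random.random() > 0.2

def pick_one_item(candidates, bin_image_features) -> bool:
    ranked = sorted(candidates, key=lambda c: predict_success(c, bin_image_features),
                    reverse=True)
    for grasp in ranked:                    # try the most promising grasps first
        # robot.move_to(grasp); robot.close_hand()   <- hardware calls omitted
        if item_confirmed_in_hand():
            return True
    return False
```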

 

---

 

6th April 2017

Law requires reshaping as AI and robotics alter employment, states new report

The present wave of automation – driven by rapid advances in artificial intelligence (AI) – is creating a gap between current legislation and the new laws necessary for an emerging workplace reality, states the International Bar Association (IBA).

 

"Certainly, technological revolution is not new, but in past times it has been gradual. What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics," says Gerlind Wisskirchen, Vice Chair for Multinationals in the IBA's Global Employment Institute (GEI) and coordinator of a major report, Artificial Intelligence and Robotics and Their Impact on the Workplace. "Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI, and the legislation once in place to protect the rights of human workers may be no longer fit for purpose, in some cases."

The IBA, established in 1947, is the world's leading organisation of international legal practitioners, bar associations and law societies. Through its global membership of individual lawyers, law firms, bar associations and law societies, it influences the development of international law reform and shapes the future of the legal profession throughout the world.

"The AI phenomenon is on an exponential curve, while legislation is doing its best on an incremental basis," adds Wisskirchen. "New labour and employment legislation is urgently needed to keep pace with increased automation."

 

The comprehensive, 120-page report focuses on potential future trends of AI and the likely impact that intelligent systems will have on: the labour market, the structures of companies, employees' working time, remuneration and the working environment. In addition to illustrating the role and importance of law in these areas, the GEI assesses the law at different points in the automation cycle – from the developmental stage, when the computerisation of an industry begins, to what workers may experience as AI becomes more prevalent, through to issues of responsibility when things go wrong. These components are not examined in isolation, but in the context of economics, business and social environments.

In the example of the automotive industry, the report identifies competitive disadvantage between Europe and the United States in the developmental stage of autonomous driving. Germany and the US are recognised as the market leaders in this area. However, in contrast to the US, European laws prevent autonomous driving on public roads, though there are some exceptions for research vehicles. US companies are not faced with the same restrictions; they are therefore able to develop at a faster pace and as a result are likely to bring products to market sooner than their European competitors. Europe's restrictive older regulations impede technical progress of autonomous driving for companies operating within its borders, potentially placing them at a disadvantage in the marketplace.

Since motor vehicles will be driven by fully automated systems in the future, it is conceivable that jobs such as truck, taxi, or forklift drivers will be eliminated in the long run. The report states there is a 90% likelihood of this happening, with developers of connected trucks stating: "technical changes that will take place in the next 10 years will be more dramatic than the technical advancements over the last 50 or 60 years". The report points to cost savings of nearly 30% as logistics become cheaper, more reliable and more flexible. At the fully automated stage, costs will be further reduced as the requirement for rest breaks is eliminated, illness or inebriation is no longer a risk factor, and accidents are minimised.

 

Nevertheless, the report examines the issue of liability when failure does occur, concluding that: "The liability issues may become an insurmountable obstacle to the introduction of fully automated driving." Currently, driver responsibility is assumed in most cases, with the manufacturer liable only for product defects, and vehicle owners subject to special owner's liability, particularly in European countries. However, if a vehicle is fully automated, with a human driver no longer actively steering, the question arises as to whether damage can still be attributed to the driver or the owner of the car, or whether only the manufacturer of the system can be held liable.

The report's authors examine whether rules applicable to other automated areas, such as aviation, can be applied, but reason that: "it is not possible to apply the liability rules from other automated areas to automated driving", and that international liability standards with clear rules are needed.

Pascale Lagesse, Co-Chair of the IBA GEI, commented: "Without a doubt – AI, robotics and increased automation will bring about changes in society at every level, in every sector and in every nation. This fourth industrial revolution will concurrently destroy and create jobs and paradoxically benefit and impair workers in ways that are not entirely clear, or not yet imagined. What is evident, however, is that a monumental paradigm shift is occurring and that concurrent legal uncertainties need to be addressed within labour and employment laws geared to the technological developments."

She added: "Greater governmental collaboration across borders may be necessary if commerce is to thrive. States as lawmakers will have to be bold in decision, determining what jobs should be performed exclusively by humans, for example: caring for babies; perhaps introducing human quotas in different sectors; taxing companies where machines are used; and maybe introducing a 'made by humans' label for consumer choice. Our new report posits these ideas and more, and could not be more timely."

---

 

5th March 2017

AI beats top human players at poker

The University of Alberta has announced details of DeepStack, a new artificial intelligence program able to beat professional human players at poker for the first time.

 

In 1952, Sandy Douglas created a tic-tac-toe game, known as OXO, on the EDSAC, a room-sized computer at the University of Cambridge. One of the first ever computer games, it was developed as part of his PhD thesis on human-computer interaction. Forty-five years later, in 1997, another milestone occurred when IBM's Deep Blue machine defeated Garry Kasparov, the world chess champion. This was followed by Watson, again created by IBM, which appeared on the Jeopardy! game show and beat the top human players in 2011. Yet another breakthrough came in 2016, when Google DeepMind's AlphaGo defeated Go champion Lee Se-dol at a match in South Korea.

Now, for the first time ever, an artificial intelligence program has beaten human professional players at heads-up, no-limit Texas hold 'em, a variation of the card game of poker. This historic result in AI has implications far beyond the poker table – from helping to make more decisive medical treatment recommendations to developing better strategic defence planning.

DeepStack has been created by the University of Alberta's Computer Poker Research Group. It bridges the gap between games of "perfect" information – like in checkers, chess, and Go, where both players can see everything on the board – and "imperfect" information games, by reasoning while it plays, using "intuition" honed through deep learning to reassess its strategy with each decision.

"Poker has been a long-standing challenge problem in artificial intelligence," said computer scientist Michael Bowling, principal investigator on the study. "It's the quintessential game of imperfect information, in the sense that players don't have the same information or share the same perspective while they're playing."

Artificial intelligence researchers have long used parlour games to test their theories because the games are mathematical models that describe how decision-makers interact.

"We need new AI techniques that can handle cases where decision-makers have different perspectives," said Bowling. "Think of any real-world problem. We all have a slightly different perspective of what's going on, much like each player only knowing their own cards in a game of poker."

 

This latest discovery builds on previous research findings about artificial intelligence and imperfect information games stretching back to the creation of the Computer Poker Research Group in 1996. DeepStack extends the ability to think about each situation during play to imperfect information games using a technique called continual re-solving. This allows the AI to determine the correct strategy for a particular poker situation by using its "intuition" to evaluate how the game might play out in the near future, without thinking about the entire game.

"We train our system to learn the value of situations," said Bowling. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

Thinking about each situation as it arises is important for complex problems like heads-up no-limit hold'em, which has more unique situations than there are atoms in the universe, largely due to players' ability to wager different amounts including the dramatic "all-in." Despite the game's complexity, DeepStack takes action at human speed – with an average of only three seconds of "thinking" time – and runs on a simple gaming laptop.
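
For illustration only, here is a toy, hypothetical version of that idea: a depth-limited lookahead that trusts a learned value estimate at the frontier instead of solving the whole game. Note that it uses a plain max-over-actions search rather than DeepStack's actual re-solving over hand ranges with counterfactual regret minimisation, and all of the helper functions are placeholders.

```python
# Conceptual sketch only: this is NOT DeepStack's published algorithm. It only
# illustrates searching a short distance ahead and trusting a learned value
# estimate ("intuition") at the frontier. Every helper below is a placeholder.

def value_estimate(state) -> float:
    """Stand-in for a neural network trained to value poker situations."""
    return 0.0                                   # a real net returns expected winnings

def legal_actions(state):
    return ["fold", "call", "raise"]             # drastically simplified action set

def apply_action(state, action):
    return state                                 # placeholder state transition

def is_terminal(state) -> bool:
    return False

def resolve(state, depth: int) -> float:
    """Depth-limited lookahead: search a few moves, then trust the value net."""
    if is_terminal(state) or depth == 0:
        return value_estimate(state)
    return max(resolve(apply_action(state, a), depth - 1) for a in legal_actions(state))

def choose_action(state, depth: int = 3) -> str:
    return max(legal_actions(state), key=lambda a: resolve(apply_action(state, a), depth - 1))
```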

To test the approach, DeepStack played against a pool of professional human players recruited by the International Federation of Poker. A total of 33 players from 17 countries were asked to play in a 3,000-hand match, over a period of four weeks. DeepStack beat each of the 11 players who finished their match, with only one outside the margin of statistical significance.

A paper on this study, DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker, is published in the journal Science.

---

 

28th February 2017

"Handle" – the latest robot from Boston Dynamics

Boston Dynamics, the engineering and robotics company best known for the development of BigDog, has revealed its latest creation. "Handle" is a research robot that stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in previous quadruped and biped robots built by the company, but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces, while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.

 

 

 

---

 

13th February 2017

Types of Artificial Intelligence

This is a guest piece by forum member Yuli Ban.

 

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on the Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000. So my system is this:

• Weak Narrow AI
• Strong Narrow AI
• Weak General AI
• Strong General AI
• Super AI

 


 

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analogue mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.

We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not artificial general intelligence.

I didn’t use that mention of analogue mechanics passingly— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need SAI to run a Word document. Maybe SAI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with Tsar Bomba.

Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom of biological intelligence.

WNAI can basically do one thing really well, but cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

 

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning’s happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand.

SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart, for example, but it’s also easy to find its limits, because it’s an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognise speech, but it can’t deeply understand it, and it lacks the life experiences to hold a meaningful conversation anyhow. Siri might recognise some of your favourite bands or tell a joke, but it can’t write a comedic novel or genuinely have a favourite band of its own. It was programmed to know these things, based on your own preferences.

Even if Siri says it’s “not an AI”, it’s only using pre-programmed responses to say so. SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.

 

 

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature— maybe a worm or even a mouse— but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind— DeepMind’s basic algorithm can learn to do just about anything, but it’s not as intelligent as a human being by far. In fact, DeepMind wasn’t even in this category until it began using a differentiable neural computer (DNC), because before that it could not retain its previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.

Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”

DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant (VA) to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?

If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
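
The cake-to-brownie test described here is, in machine learning terms, transfer: reusing what was learned on one task to start a related one. Below is a loose, hypothetical structural sketch (not any DeepMind system) in which a shared feature projection is kept across tasks and only a small task-specific "head" is retrained; the shared projection here is just a random matrix standing in for genuinely learned features.

```python
# Loose, hypothetical illustration of the cake-vs-brownie point, not any real
# DeepMind system: a more general learner keeps a shared feature projection
# across tasks and retrains only a small task-specific "head".
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=200, d=20):
    """Generate a toy binary classification task."""
    X = rng.standard_normal((n, d))
    w = rng.standard_normal(d)
    return X, (X @ w > 0).astype(float)

def train_head(features, y, steps=500, lr=0.1):
    """Fit a logistic-regression head on top of (possibly shared) features."""
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-features @ w))
        w -= lr * features.T @ (p - y) / len(y)
    return w

shared_projection = rng.standard_normal((20, 8))    # stands in for learned features

X_cakes, y_cakes = make_task()                      # task A: "cakes"
head_cakes = train_head(X_cakes @ shared_projection, y_cakes)

X_brownies, y_brownies = make_task()                # task B: "brownies"
head_brownies = train_head(X_brownies @ shared_projection, y_brownies)
# A narrow system would relearn everything from scratch for task B; reusing
# the shared projection and fitting only a new head is the "transfer" step.
```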

WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That it has such a limit should be impressive, because it basically signals that we’re right on the cusp of strong general AI and the only thing we lack is the proper power and training.

 

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or SAI.

Right now, we have no analogue to such a creation. Of course, saying that we never will would be as if we were in the year 1816 and discussing whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code, while SGAI is us building a whole digital brain.

Not to mention there’s the problem of embodied cognition— without a body, any AI would be detached from nearly all experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only come to find out he’s only ever made five unique dishes, nor has he ever left his house. For AI to truly make the leap from WGAI to SGAI, it’d need someone to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body— it could replicate all life experiences in a week if need be if it had enough bodies— but having sensory experiences helps to deepen its intelligence.

 

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.

The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.

Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.

That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5'8" primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.

An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form. Basically, the moment you create SGAI is the moment you create ASI. From that bit of information, you can begin to understand what AI will be capable of achieving.

 

Recap:
“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.

Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.

Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any pre-programming. 

All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn't mean it's now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good as, or better than any human. It has sapience. SGAI may be very humanlike, but it's ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It's fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

---

 

31st January 2017

Cafe X Technologies launches robotic cafe

Cafe X Technologies, a new startup based in San Francisco, yesterday opened its first robotic cafe in the U.S. By combining machine learning and robotics, it aims to eliminate the variabilities that bog down today’s coffee experience.

 

Cafe X Technologies, in partnership with WMF (a leading international maker of coffee machines), has developed a fully automated robotic cafe that integrates hardware and software to blend the functionality of baristas with specialty coffee preparation methods. Cafe X sets itself apart by removing the on-site wait time, the potential for preparation error, and other unexpected variability – which it claims will set a new standard for automation technology and the specialty coffee service industry.

“I’ve long been a big coffee consumer and there’s never a guaranteed seamless experience,” says founder and CEO, 23-year-old Henry Hu. “In today’s world, you have two options for getting a cup of coffee: you’re either in and out with something subpar or you’re waiting in a 15-minute line for a great cappuccino. I started Cafe X to eliminate that inherent compromise and give people access to a tasty cup of coffee consistently and conveniently.”

Customers can order customised espresso-based beverages on the spot at the ordering kiosk, or they can download the Cafe X app onto their mobile device to order in advance. Once the beverage is ready, they use the cafe's touch screens to enter a 4-digit order number, which is either sent via text message or displayed in the Cafe X mobile app for iOS and Android. The Mitsubishi robot arm then identifies the customer's drink at the waiting stations and delivers it to them within seconds.
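
Cafe X has not published its ordering software, so the following is a purely hypothetical sketch of the pickup flow just described: the system only needs to map each 4-digit pickup code to a finished drink waiting at a numbered station, which the arm then fetches when the code is entered.

```python
# Hypothetical sketch of the pickup flow described above; Cafe X has not
# published its software, so all names and structure here are invented.
import random
from dataclasses import dataclass

@dataclass
class Order:
    drink: str
    station: int                       # waiting station holding the finished drink

orders: dict = {}

def place_order(drink: str) -> str:
    """Accept an order from the kiosk or app and return the 4-digit pickup code."""
    code = f"{random.randint(0, 9999):04d}"          # collisions ignored in this sketch
    orders[code] = Order(drink=drink, station=len(orders) % 8)
    return code

def pick_up(code: str) -> str:
    """Customer enters the code on the touch screen; the arm fetches the drink."""
    order = orders.pop(code)                         # raises KeyError for unknown codes
    # robot_arm.fetch(order.station)   <- hardware call omitted in this sketch
    return f"Delivering {order.drink} from station {order.station}"

code = place_order("8 oz cappuccino")
print(pick_up(code))
```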

 

The machine is very fast – capable of preparing up to 120 drinks per hour, depending on the complexity of the orders. Customers can choose the brand of beans and customise the amount of milk and flavours used.

“This won’t replace baristas, or the coffee shop experience that so many people have come to love – we don’t aim to do that,” says Hu. “What we’re offering is the best possible experience for people who are looking for consistent specialty coffee to-go.”

Cafe X has formed unique partnerships with local roasters to source specific premium ingredients and create unique drinks, which are programmed into the automated coffee systems. Menu prices start at $2.25 for an 8 oz cup and vary depending on the customer's coffee bean selection, which includes single-origin options.

“There’s an entire segment of the consumer population that’s not buying coffee because it doesn’t fit into their present moment,” says John Laird, CEO of AKA Roasters. “We’re truly excited to partner with the Cafe X team to expand to those customers we might not otherwise reach.”

This machine has been installed at San Francisco's Metreon shopping centre. Hu is now in talks with a number of San Francisco-based tech companies to install Cafe X kiosks in their offices.

 

 

 

---

 

27th January 2017

AI matches humans on standard visual intelligence test

Researchers at Northwestern University have developed an AI system that performs at human levels on a standard visual intelligence test.

 

 

An example question from the Raven's Progressive Matrices standardised test. The test taker should choose answer D, because the relationships between it and the other elements in the bottom row are most similar to the relationships between the elements of the top rows.

 

The team's computational model is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Professor Ken Forbus, who holds a PhD in Artificial Intelligence from MIT. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also potentially shrink the gap between computer and human cognition.

The new computational model is built on CogSketch, an AI platform developed in Forbus's laboratory that can solve visual problems and understand sketches by using a process of analogy. He developed the system with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research is published this month in the journal Psychological Review.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardised test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix.

"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence', or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships," said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
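
CogSketch's actual representations and analogy engine (based on structure mapping) are far richer than this, but as a toy, hypothetical illustration of the idea, a scene can be encoded as a set of relation tuples and candidate answers scored by how many of the established relations they preserve:

```python
# Toy, hypothetical illustration of relational matching; the published
# CogSketch system is far richer than this. A scene is encoded as a set of
# relation tuples, and candidate answers are scored by how many of the
# established relations they preserve.

Scene = frozenset      # e.g. frozenset({("above", "clock", "door")})

def relational_overlap(a, b) -> int:
    """Crude analogy score: the number of shared relation tuples."""
    return len(a & b)

def best_answer(row_pattern, candidates) -> int:
    """Pick the candidate whose relations best mirror the established pattern."""
    scores = [relational_overlap(row_pattern, c) for c in candidates]
    return scores.index(max(scores))

# Invented mini-example in the spirit of a Raven's item:
pattern = Scene({("inside", "circle", "square"), ("count", "dots", 2)})
candidates = [
    Scene({("inside", "triangle", "square")}),
    Scene({("inside", "circle", "square"), ("count", "dots", 2)}),   # best match
]
print(best_answer(pattern, candidates))      # -> 1
```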

"Most artificial intelligence research today concerning vision focuses on recognition, or labelling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

---

 

24th January 2017

EU considers “electronic personhood” for robots

The European Parliament has proposed a new legal framework to govern the rapidly evolving fields of robotics and artificial intelligence (AI).

 

The European Parliament's Legal Affairs Committee has voted by a majority of 17 votes to two, with two abstentions, to create a robot "bill of rights" covering a range of issues relating to automation and machine intelligence.

“A growing number of areas of our daily lives are increasingly affected by robotics,” said rapporteur Mady Delvaux. “In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.”

Delvaux's report looks at robotics-related issues such as liability, safety and changes in the labour market. Members of the European Parliament (MEPs) have urged the Commission to consider creating a European agency for robotics and artificial intelligence to supply public authorities with technical, ethical and regulatory expertise. They also propose a voluntary ethical conduct code to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.

For example, this code should recommend that robot designers include "kill" switches so that robots can be turned off in emergencies, they add. Harmonised rules are most urgently needed for self-driving cars; the MEPs call for an obligatory insurance scheme and a fund to ensure victims are fully compensated in cases of accidents caused by driverless cars.

In the longer term, the possibility of creating a specific legal status of "electronic persons" for the most sophisticated autonomous robots – so as to clarify responsibility in cases of damage – should also be considered, the MEPs say. The development of robotics is likely to bring about major societal changes, including the loss of jobs in certain fields, says the text, which urges the Commission to follow these trends closely, including new employment models and the viability of the current tax and social security system in an age of robotics. The full house will vote on the draft proposals in February; they will need to be approved by an absolute majority under the legislative initiative procedure.

From warehouse machines to surgical assistance devices, there are now over 1.7 million robots in existence worldwide. On current trends, industrial and personal service robots could outnumber humans by the 2040s. Yet despite their rapidly increasing numbers and abilities, their use is still not properly regulated.

"There is a possibility that – within the space of a few decades – AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species," the Committee's report states.

 

“We are not talking about weapons,” says Delvaux. “We define robots as physical machines, equipped with sensors and interconnected so they can gather data. The next generation of robots will be more and more capable of learning by themselves. The most high-profile ones are self-driving cars – but they also include drones, industrial robots, care robots, entertainment robots, toys, robots in farming.

“When self-learning robots arise, different solutions will become necessary and we are asking the Commission to study options. One could be to give robots a limited 'e-personality' [comparable to 'corporate personality', a legal status which enables firms to sue or be sued] at least where compensation is concerned. It is similar to what we now have for companies, but it is not for tomorrow. What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.”

So in the meantime, who should be responsible in case of damage? The owner, the manufacturer, the designer, or the programmer?

“We have two options,” Delvaux continues. “According to the principle of strict liability it should be the manufacturer who is liable, because he is best placed to limit the damage and deal with providers. The other option is a risk assessment approach, according to which tests have to be carried out beforehand and compensation has to be shared by all stakeholders. We also propose there should be compulsory insurance, at least for the big robots.”

Delvaux's report also mentions that some vulnerable people can become emotionally attached to their care robots. How can we prevent this happening?

“We always have to remind people that robots are not human, and will never be,” she says. “Although they might appear to show empathy, they cannot feel it. We do not want robots like they have in Japan, which look like people. We have proposed a charter setting out that robots should not make people emotionally dependent on them. You can be dependent on them for physical tasks – but you should never think that a robot loves you or feels your sadness.”

People who fear they will lose their jobs are told that robots will actually create new jobs. However, they might only create roles for highly-skilled people and replace low-skilled workers. How can this be solved?

“I believe this is the biggest challenge to our society and to our educational systems,” she adds. “We do not know what will happen. I believe there will always be low skilled jobs. Robots will not replace humans; there will be a cooperation between both. We ask the Commission to look at the evolution, what kind of tasks will be taken over by robots. It can be a good thing if they are used for hard work. For example, if you have to carry heavy goods or if the job is dangerous. We have to monitor what is happening and then we have to be prepared for every scenario.”

The report also deals with the issue of whether we should change our social security systems and think about a universal basic income (UBI), because if there are huge numbers of permanently unemployed people, we have to ensure they can have a decent life.

---

 

6th January 2017

IBM predicts five innovations for the next five years

IBM has unveiled its annual "5 in 5" – a list of ground-breaking innovations that will change the way people work, live, and interact during the next five years.

 

 

 

In 1609, Galileo built his first telescope and saw our cosmos in an entirely new way. His observations supported the theory that the Earth and other planets in our Solar System revolve around the Sun, something that until then had been impossible to verify by observation. IBM Research continues this tradition through the pursuit of new scientific instruments – whether physical devices or advanced software tools – designed to make what's invisible in our world visible, from the macroscopic level down to the nanoscale.

"The scientific community has a wonderful tradition of creating instruments to help us see the world in entirely new ways. For example, the microscope helped us see objects too small for the naked eye, and the thermometer helped us understand the temperature of the Earth and human body," said Dario Gil, vice president of science & solutions at IBM Research. "With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years."

Innovation in this area could dramatically improve farming, enhance energy efficiency, spot harmful pollution before it's too late, and prevent premature physical and mental decline. IBM's global team of scientists and researchers is steadily bringing these inventions from laboratories into the real world.

The IBM 5 in 5 is based on market and societal trends, as well as emerging technologies from research labs around the world that can make these transformations possible. Below are the five scientific instruments that will make the invisible visible in the next five years.

 


 

With AI, our words will open a window into our mental health

In five years, what we say and write will be used as indicators of our mental health and physical well-being. Patterns in our speech and writing analysed by new cognitive systems – including meaning, syntax and intonation – will provide tell-tale signs of early-stage developmental disorders, mental illness and degenerative neurological diseases to help doctors and patients better predict, monitor and track these conditions. What were once invisible signs will become clear signals of patients' likelihood of entering a certain mental state, or how well their treatment plan is working, complementing regular clinical visits with daily assessments from the comfort of their homes.
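
IBM has not specified which speech and writing features such systems would use. The toy function below, with invented feature names, merely illustrates the kind of simple text statistics that language-analysis research has explored as possible signals.

```python
# Illustrative only: IBM has not specified the features its systems would use.
# This toy function computes a few simple text statistics of the kind that
# language-analysis research has explored as possible health-related signals.
import re

def simple_speech_features(transcript: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_diversity": len(set(words)) / max(len(words), 1),   # type-token ratio
        "first_person_rate": sum(w in {"i", "me", "my"} for w in words) / max(len(words), 1),
    }

print(simple_speech_features("I went out today. I felt fine, mostly."))
```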

 

Hyperimaging and AI will give us superhero vision

In five years, new imaging devices using hyperimaging technology and AI will help us "see" beyond visible light, by combining multiple bands of the electromagnetic spectrum. This will reveal valuable insights or potential dangers that may otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and widely accessible in our daily lives, giving us the ability to perceive or see through objects and opaque environmental conditions anytime, anywhere.

A view of invisible, or only vaguely visible, objects around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, by using millimetre-wave imaging, a camera and other electromagnetic sensors, hyperimaging technology could help a vehicle see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead, as well as its distance and size. Cognitive computing technologies will reason about this data and recognise what might be a tipped-over garbage can versus a deer crossing the road, or a pothole that could result in a flat tire.

 

Macroscopes will help us understand Earth's complexity in infinite detail

Instrumenting and collecting masses of data from every source in the physical world, big and small, and bringing it together will reveal comprehensive solutions for our food, water and energy needs. Today, the physical world only gives us a glimpse into our highly interconnected and complex ecosystem. We collect exabytes of data – but most of it is unorganised. In fact, an estimated 80 percent of a data scientist's time is spent scrubbing data instead of analysing and understanding what that data is trying to tell us.

Thanks to the Internet of Things (IoT), new sources of data are pouring in from millions of connected objects – from refrigerators, light bulbs and heart rate monitors, to remote sensors such as drones, cameras, weather stations, satellites and telescope arrays. There are already more than six billion connected devices generating tens of exabytes of data per month, with a growth rate of over 30% each year. After successfully digitising information, business transactions and social interactions, we are now in the process of digitising the physical world.

By 2022, we will use machine learning algorithms and software to organise information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. IBM calls this idea a "macroscope" – but unlike microscopes, which let us see the very small, or telescopes, which let us see far away, this will be a system to gather all of Earth's complex data together and analyse it for meaning.

By aggregating, organising and analysing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies.
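
As a purely hypothetical sketch of that "macroscope" idea, readings from different sources can be fused into a single per-field view and turned into a simple decision; the field names, values and threshold below are invented.

```python
# Hypothetical sketch of the "macroscope" idea above: fuse readings from
# different sources into one per-field view and derive a simple irrigation
# decision. Field names, values and the threshold are all invented.
from statistics import mean

readings = [
    {"field": "north", "source": "soil_probe", "moisture": 0.18},
    {"field": "north", "source": "satellite",  "moisture": 0.22},
    {"field": "south", "source": "soil_probe", "moisture": 0.35},
]

def fuse_by_field(rows):
    fields = {}
    for r in rows:
        fields.setdefault(r["field"], []).append(r["moisture"])
    return {f: mean(vals) for f, vals in fields.items()}

def irrigation_plan(fused, threshold=0.25):
    return {f: ("irrigate" if m < threshold else "hold") for f, m in fused.items()}

print(irrigation_plan(fuse_by_field(readings)))      # {'north': 'irrigate', 'south': 'hold'}
```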

 

Medical labs "on a chip" will serve as health detectives for tracing disease at the nanoscale

In five years, new medical labs on a chip will serve as nanotechnology health detectives – tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyse a disease that would normally be carried out in a full-scale biochemistry lab.

Lab-on-a-chip technology will eventually be packaged in a handheld device. This will allow people to quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids – such as saliva, tears, blood and sweat – sending this information securely into the cloud from the comfort of their home. There it will be combined with real-time health data from other IoT-enabled devices, like sleep monitors and smart watches, and analysed by AI systems for insights. Taken together, this data will give an in-depth view of our health, alerting us to the first signs of trouble – helping to stop disease before it progresses.

IBM scientists are developing nanotechnology that can separate and isolate bioparticles down to 20 nanometres in diameter, a scale that gives access to DNA, viruses, and exosomes. These particles could be analysed to potentially reveal the presence of disease even before we have symptoms.

 

Medical lab on a chip. Credit: IBM

 

 

 

Smart sensors will detect environmental pollution at the speed of light

In five years, new sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real-time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events.

IBM is researching silicon photonics – an emerging technology that transfers data by light, allowing information to move literally at the speed of light. These chips could be embedded in a network of sensors on the ground or within infrastructure, or even fly aboard autonomous drones, generating insights that, combined with real-time wind data, satellite data and other historical sources, will produce complex environmental models to detect the origin and quantity of pollutants as they occur.
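
IBM's silicon-photonics sensors are not modelled here; the sketch below is only a hypothetical illustration of the monitoring idea, flagging any methane reading that jumps well above a sensor's recent baseline.

```python
# Hypothetical sketch only; IBM's sensing hardware is not modelled. A methane
# reading far above a sensor's recent baseline is flagged as a possible leak
# for further inspection.
from collections import deque
from statistics import mean, stdev

class LeakDetector:
    def __init__(self, window: int = 60, sigma: float = 4.0):
        self.history = deque(maxlen=window)      # recent readings (ppm)
        self.sigma = sigma

    def update(self, ppm: float) -> bool:
        """Return True if this reading looks anomalous relative to the baseline."""
        alarm = False
        if len(self.history) >= 10:
            mu, sd = mean(self.history), stdev(self.history)
            alarm = ppm > mu + self.sigma * max(sd, 0.1)
        self.history.append(ppm)
        return alarm

detector = LeakDetector()
for reading in [2.0] * 30 + [2.1] * 30 + [9.5]:      # sudden spike at the end
    if detector.update(reading):
        print("possible leak at reading:", reading)
```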

 

---

 

28th December 2016

Giant robot could patrol military borders

A four metre (13 ft), 1.5-ton bipedal robot, designed and built by military scientists in South Korea, has taken its first steps.

As demonstrated in the video below, "Method-2" can hold a pilot who sits inside the torso and controls its arms and legs, allowing it to walk. Fans of science fiction movies like Aliens and Avatar will notice the similarity with hi-tech machines depicted in those films. The robot is so large and heavy that it shakes the ground nearby when walking.

Its creation has involved the work of 30 engineers at robotics company Hankook Mirae Technology, guided by lead designer Vitaly Bulgarov who has previously worked on films such as Transformers, Robocop and Terminator.

"One of the most common questions we get is about the power source," he said on Facebook. "The company’s short-term goals include developing robotic platforms for industrial areas where having a tethered robot is not an issue. Another short-term, real world application includes mounting only the top part of the robot on a larger wheeled platform – solving the problem of locomotion through an uneven terrain, as well as providing enough room for sufficient power source."

“Our robot is the world’s first manned bipedal robot and is built to work in extreme hazardous areas where humans cannot go (unprotected),” said company chairman Yang Jin-Ho. He has invested 242bn won ($200 million) in the project since 2014 to "bring to life what only seemed possible in movies and cartoons".

The company has already received inquiries from manufacturing, construction, entertainment and other industries. There have even been questions about its possible deployment along the Demilitarised Zone with North Korea. It might also be used for cleaning up disaster sites like Fukushima. However, the machine needs further research and development first to improve its balance and power systems. At present, it remains tethered by a power cable, but if all goes according to plan, it should be able to move more freely within the next couple of years. The price tag for Method-2 will be 10bn won ($8.3 million).

 

 

---

 

27th December 2016

The future of robotics: 10 predictions for 2017 and beyond

Technology research firm International Data Corporation (IDC) has published a report highlighting the key drivers for robotics and how these are likely to accelerate developments from 2017 through 2020.

 

International Data Corporation (IDC) has published a report titled “IDC FutureScape: Worldwide Robotics 2017 Predictions", which highlights the key drivers for robotics and how these are likely to shape the development of technology in the planning horizon of 2017 through 2020.

"Technological development in artificial intelligence, computer vision, navigation, MEMS sensor, and semiconductor technologies continue to drive innovation in the capability, performance, autonomy, ease of use, and cost-effectiveness of industrial and service robots," says Dr. Jing Bing Zhang, Research Director.

Dr. Zhang revealed the strategic top ten predictions and major robotics trends that are set to present both opportunities and challenges to IT leaders during 2017 and beyond.

 

Prediction 1: Robot as a Service. By 2019, 30% of commercial service robotic applications will be in the form of a "Robot as a Service" business model, reducing costs for robot deployment.

Prediction 2: Chief Robotics Officer. By 2019, 30% of leading organisations will implement a Chief Robotics Officer role and/or define a robotics-specific function within the business.

Prediction 3: Evolving Competitive Landscape. By 2020, companies will have a greater choice of vendors, as new players enter the US$80-billion Information & Communications Technology (ICT) market to support robotics deployment.

Prediction 4: Robotics Talent Crunch. By 2020, robotics growth will accelerate the talent race, leaving 35% of robotics-related jobs vacant, while the average salary increases by at least 60%.

Prediction 5: Robotics Will Face Regulation. By 2019, governments will begin implementing robotics-specific regulations to preserve jobs and to address concerns of security, safety, and privacy.

 

Automated hospital pharmacy, capable of tracking, preparing and dispensing medication automatically.

 

Prediction 6: Software Defined Robot. By 2020, 60% of robots will depend on cloud-based software to define new skills, cognitive capabilities, and application programs, leading to the formation of a "robotics cloud" marketplace.

Prediction 7: Collaborative Robot. By 2018, 30% of all new robotic deployments will be smart collaborative robots that operate three times faster than today's robots and are safe for work around humans.

Prediction 8: Intelligent RoboNet. By 2020, 40% of commercial robots will become connected to a mesh of shared intelligence, resulting in 200% improvement in overall robotic operational efficiency.

Prediction 9: Growth Outside Factory. By 2019, 35% of leading organisations in logistics, health, utilities, and resources will explore the use of robots to automate operations.

Prediction 10: Robotics for E-commerce. By 2018, 45% of the 200 leading global e-commerce and omni-channel commerce companies will deploy robotics systems in their order fulfilment warehousing and delivery operations.

"Robotics will continue to accelerate innovation, thus disrupting and changing the paradigm of business operations in many industries. IDC expects to see stronger growth of robotics adoption outside the traditional manufacturing factory floor, including logistics, health, utilities and resources industries. We encourage end-user companies to embrace and assess how robotics can sharpen their company's competitive edge by improving quality, increasing operational productivity and agility, and enhancing experiences of all stakeholders," Dr. Zhang concludes.

 

Robotic kitchen hands. Credit: Moley Robotics

 

---
