
8th May 2017

Nearly half of jobs in Scotland at risk of automation by 2030

Urgent reform is needed to deal with the rapid rise of automation, a leading Scottish think-tank has said.

 


 

Urgent reform is needed to deal with the rapid rise of automation, which threatens nearly half of Scottish jobs by 2030, a leading think-tank has said. The stark warning comes in a new report published by the Institute for Public Policy Research (IPPR) Scotland, a progressive think-tank, and supported by the JPMorgan Chase Foundation.

The report, Scotland's Skills 2030, outlines the need to reskill Scotland's workforce for the world of work in the coming decades. With greater numbers of workers working for longer, due to demographic change, and in multiple jobs, multiple careers and for multiple employers, due to technological change, Scotland will need to retrofit the workforce with the skills required to compete in the future.

Some 2.5 million working-age adults in Scotland today (78% of the total) have left compulsory education and will still be of working age by 2030, the study notes. They are likely to experience significant changes to the economy over this period, and will need support to learn new skills, retrain and upskill.

Meanwhile, just under half (46.1%) of jobs in Scotland – about 1.2 million – are at "high risk" of automation over the next couple of decades. The sectors most likely to be affected are transport, manufacturing and retail, the report states. This underlines the need for a skills system able to work with people throughout their careers, rather than solely before or at the start of their careers, the researchers warn.

Scotland has a clear gap in training and learning for people who have already started their careers, with provision in recent years focused mainly on younger people and full-time study. Employers are not plugging this gap, and too often pursue a low-skill business model. IPPR Scotland is calling for a new mid-career learning route, called the Open Institute of Technology, to sit alongside apprenticeships and further education, helping to train the current workforce for the future challenges Scotland's economy faces, the report concludes.

Russell Gunson, Director of IPPR Scotland, said: "There are more than 2.5 million people already in the workforce today that will still be working by 2030. There are also 1.2m jobs in Scotland at risk of automation over the same time. Scotland urgently needs to design a skills system better able to work with people already in their careers, to help them to retrain, reskill and respond to the world of work of 2030.

"Scotland has a really strong record on skills in many ways, and in this report we find that Scotland is the highest skilled nation in the UK. However, our system has a clear gap in that we don't have enough provision for people who have already started their careers, and employers are not investing to fill this gap. To respond to the huge changes facing Scotland around demographic, technological and climate change – and of course Brexit – we're going to have to focus on retrofitting the current workforce to provide them with the skills they need, to deliver the inclusive economic growth we wish to see.

"Our report makes a number of recommendations to help Scotland plot a path through these challenges, to reform the skills system in Scotland, to help to secure an economy that delivers fairness and reduces inequality. Without reform of the skills system we could see changes to the economy harm whole sections of population, and whole communities, leaving many behind."

---

 

 

 

3rd May 2017

Robot can perform surgeries in one fiftieth of the time

The University of Utah has revealed a new robotic drill system for greatly speeding up surgical procedures. One type of complex cranial surgery could be done in a fiftieth of the normal time, decreasing from two hours to just two and a half minutes.

 

 

 

A computer-driven automated drill, similar to those used to machine auto parts, could play a pivotal role in future surgical procedures. The new machine can make one type of complex cranial surgery 50 times faster than standard procedures, decreasing from two hours to two and a half minutes. Researchers at the University of Utah developed a drill that produces fast, clean and safe cuts – reducing the time the wound is open and the patient is anesthetised, thereby decreasing the incidence of infection, human error, and surgical cost. The findings are reported in Neurosurgical Focus.

To perform complex surgeries – especially cranial surgeries – surgeons typically use hand drills to make intricate openings, adding hours to a procedure: "It was like doing archaeology," said William Couldwell, study author and neurosurgeon at the University of Utah Health. "We had to slowly take away the bone to avoid sensitive structures."

Couldwell saw a need for a device that could alleviate this burden and make the process more efficient: "We knew the technology was already available in the machine world, but no one ever applied it to medical applications."

"My expertise is dealing with the removal of metal quickly, so a neurosurgical drill was a new concept for me," explained A. K. Balaji, associate professor in mechanical engineering. "I was interested in developing a low-cost drill that could do a lot of the grunt work to reduce surgeon fatigue."

 

Credit: University of Utah

 

The team developed the drill from scratch, as well as new software to calculate the safest cutting path. First, the patient is imaged using CT scans to gather bone data and identify the exact location of sensitive structures, such as nerves, veins and arteries, that must be avoided. Surgeons then use this information to program a cutting path for the drill: "The software lets the surgeon choose the optimum path from point A to point B, like Google Maps," says Balaji. In addition, the surgeon can program safety barriers along the cutting path within 1 mm of sensitive structures. "Think of the barriers like a construction zone," says Balaji. "You slow down to navigate it safely."
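
As a rough illustration of the idea of checking a planned cutting path against programmed safety barriers, here is a minimal Python sketch. It is not the Utah team's software; the waypoints, structure names and coordinates are invented for illustration, and only the 1 mm barrier distance comes from the article.

```python
# Minimal sketch (not the Utah team's actual software) of checking a
# candidate cutting path against surgeon-programmed safety barriers.
from math import dist

SAFETY_MARGIN_MM = 1.0  # barrier distance cited in the article

# Hypothetical positions (in mm, derived from CT data) of structures to avoid.
sensitive_structures = [
    ("facial_nerve", (12.0, 4.5, 30.2)),
    ("venous_sinus", (8.3, 11.0, 27.9)),
]

def path_is_safe(waypoints):
    """Return True if every waypoint stays outside all safety barriers."""
    for point in waypoints:
        for name, centre in sensitive_structures:
            if dist(point, centre) < SAFETY_MARGIN_MM:
                print(f"barrier violated near {name} at {point}")
                return False
    return True

candidate_path = [(10.0, 2.0, 25.0), (11.0, 3.0, 27.0), (11.8, 4.0, 29.0)]
if path_is_safe(candidate_path):
    print("path accepted; drill slows near barriers")
```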

The translabyrinthine surgery is performed thousands of times a year to expose slow-growing, benign tumours that can form on the auditory nerves. This cut must avoid several sensitive features, including the facial nerves and the venous sinus, a large vein that drains blood from the brain. Risks of this surgery include loss of facial movement. The system developed at Utah has an automatic emergency shut-off switch. During surgery, facial nerves are monitored for any signs of irritation: "If the drill gets too close to the facial nerve and irritation is detected, the drill automatically turns off," says Couldwell.

The new drill could reduce the duration of this complex procedure from two hours for hand-drilling by an experienced surgeon to two and a half minutes. The shorter surgery is expected to lower the chance of infection and improve post-operative recovery. It also has potential to substantially reduce the cost of surgery, because it shaves hours from operating room time.

The team has now demonstrated the safety and speed of the drill by performing this complex cut – but Couldwell stresses that it can be applied to many other procedures: "This drill can be used for a variety of surgeries, like machining the perfect receptacle opening in the bone for a hip implant," he said.

The varied application of the drill highlights another factor that drew Balaji to the project: "I was motivated by the fact that this technology could democratise health care by levelling the playing field so more people can receive quality care," he said. The team is now examining opportunities to commercialise the drill to ensure that it is more widely available for other surgical procedures.

---

 

 

 

27th April 2017

AI uses machine learning to mimic human voices

A Canadian startup company has developed a new algorithm capable of replicating any human voice, based on only a 60-second audio sample.

 

 

 

Montreal-based startup, Lyrebird, is named after the ground-dwelling Australian bird which has the ability to mimic natural and artificial sounds from its surrounding environment. The company has this week unveiled a new voice-imitation algorithm that can mimic a person's speech and have it read any text with a given emotion, based on the analysis of just a few dozen seconds of audio recording. In the sample above, a recreation of Barack Obama can be heard alongside Donald Trump and Hillary Clinton.

Lyrebird claims this innovation can take AI software a step further by offering new speech synthesis solutions to developers. Users will be able to generate entire dialogues with the voice of their choice, or design from scratch completely new and unique voices tailored for their needs. Suited to a wide range of applications, the algorithm can be used for personal assistants, reading of audio books with famous voices, speech synthesis for people with disabilities, connected devices of any kind, animated movies or video game characters.
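
The article does not describe Lyrebird's models or API, but the general speaker-adaptation pattern it alludes to can be pictured as: map a short reference clip to a fixed-length "voice print", then condition a text-to-speech synthesiser on that embedding. The Python below is a conceptual sketch only; both functions are crude stand-ins, and the names and numbers are invented.

```python
# Conceptual sketch only; not Lyrebird's method. Illustrates the pattern of
# deriving a speaker embedding from a short clip and conditioning a
# synthesiser on it. Both functions are placeholder stand-ins.
import numpy as np

def embed_speaker(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a neural encoder: summarise ~60 s of audio as a fixed vector."""
    frames = audio[: len(audio) // 256 * 256].reshape(-1, 256)
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def synthesise(text: str, voice_print: np.ndarray) -> np.ndarray:
    """Stand-in for a neural text-to-speech decoder conditioned on the voice print."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(16000) * float(voice_print.std())

reference_clip = np.random.default_rng(0).standard_normal(16000 * 60)  # ~60 s at 16 kHz
voice_print = embed_speaker(reference_clip)
waveform = synthesise("Hello from a cloned voice.", voice_print)
print(waveform.shape)  # one second of placeholder audio
```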

Lyrebird relies on deep learning models developed at the MILA lab in the University of Montréal, where its three founders are currently PhD students: Alexandre de Brébisson, Jose Sotelo and Kundan Kumar. The startup is advised by three of the most prolific professors in the field: Pascal Vincent, Aaron Courville and Yoshua Bengio. The latter, director of the MILA and AI pioneer, wants to make Montréal a world leader in artificial intelligence and this new startup is part of that vision.

While the quality and flow may seem a little distorted in the above clip, the overall recreation is uncanny. Given how quickly information technology tends to improve these days, even better versions with near-perfect mimicry will surely emerge within the next few years. The implications are both amusing and, at the same time, rather alarming: when combined with real-time face capture software, such as Face2Face, it could be relatively easy to depict famous people making statements they never actually said in the real world.

"The situation is comparable to Photoshop," says de Brébisson. "People are now aware that photos can be faked. I think in the future, audio recordings are going to become less and less reliable [as evidence]."

---

 

 

 

10th April 2017

Robotics breakthrough could lead to fully automated warehouses

RightHand Robotics, a startup company near Boston, has unveiled a new automated picking solution for warehouses that can recognise and retrieve individual items from boxes.

 

 

 

With an ongoing explosion in e-commerce – combined with a shrinking workforce – pressures have never been higher on warehouses to fulfil orders faster and more efficiently. To address these challenges, a new startup company has developed "RightPick", a combined hardware and software solution that handles the key task of picking individual items, or "piece-picking."

RightHand Robotics (RHR), the developers of RightPick, are based in Massachusetts, USA. The team was formed through a collaboration between researchers from Harvard's Biorobotics Lab, the Yale GRAB Lab, and MIT, focused on groundbreaking research into grasping systems, intelligent hardware sensors, computer vision and applied machine learning.

Unlike traditional factory robots, which can be complex to set up and have fixed uses, RHR creates machines that are simple to integrate and highly flexible. The new system demonstrated in the video above can automate a task that robots have previously struggled to master – recognising and retrieving individual items from boxes – at rates of up to 600 items per hour. This core competency represents a significant advance towards fully automated warehouses that could remove the need for humans. As e-commerce continues to grow, the trend is away from bulk or pallet-load handling, toward single SKUs and piecemeal items.

"The supply chain of the future is more about pieces than pallets," says Leif Jentoft, a co-founder of RHR. "RightHand can help material handling, third-party logistics and e-commerce warehouses lower costs by increasing automation."

RightPick is capable of handling thousands of different items, using a machine learning backend coupled with a sensorised robot hand that works in concert with all industry-leading robotic arms. With rapid setup, remote support and easy integration, the system can demonstrate its value within hours across a wide variety of workflows – such as sorting batch-picked items, picking items from automated storage and retrieval systems (ASRS), inducting items to a unit sorter, order quality assurance, and more. RHR also announced that it has raised $8 million in Series A funding from various companies and angel investors.

 

Credit: RightHand Robotics

 

---

 

 

 

6th April 2017

Law requires reshaping as AI and robotics alter employment, states new report

The present wave of automation – driven by rapid advances in artificial intelligence (AI) – is creating a gap between current legislation and the new laws necessary for an emerging workplace reality, states the International Bar Association (IBA).

 


 

"Certainly, technological revolution is not new, but in past times it has been gradual. What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics," says Gerlind Wisskirchen, Vice Chair for Multinationals in the IBA's Global Employment Institute (GEI) and coordinator of a major report, Artificial Intelligence and Robotics and Their Impact on the Workplace. "Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI, and the legislation once in place to protect the rights of human workers may be no longer fit for purpose, in some cases."

The IBA, established in 1947, is the world's leading organisation of international legal practitioners, bar associations and law societies. Through its global membership of individual lawyers, law firms, bar associations and law societies, it influences the development of international law reform and shapes the future of the legal profession throughout the world.

"The AI phenomenon is on an exponential curve, while legislation is doing its best on an incremental basis," adds Wisskirchen. "New labour and employment legislation is urgently needed to keep pace with increased automation."

 


 

The comprehensive, 120-page report focuses on potential future trends in AI and the likely impact intelligent systems will have on: the labour market, the structures of companies, employees' working time, remuneration and the working environment. In addition to illustrating the relevance and importance of law in relation to these areas, the GEI assesses the law at different points in the automation cycle – from the developmental stage, when the computerisation of an industry begins, to what workers may experience as AI becomes more prevalent, through to issues of responsibility when things go wrong. These components are not examined in isolation, but in the context of economics, business and social environments.

In the example of the automotive industry, the report identifies competitive disadvantage between Europe and the United States in the developmental stage of autonomous driving. Germany and the US are recognised as the market leaders in this area. However, in contrast to the US, European laws prevent autonomous driving on public roads, though there are some exceptions for research vehicles. US companies are not faced with the same restrictions; they are therefore able to develop at a faster pace and as a result are likely to bring products to market sooner than their European competitors. Europe's restrictive older regulations impede technical progress of autonomous driving for companies operating within its borders, potentially placing them at a disadvantage in the marketplace.

Since motor vehicles will be driven by fully automated systems in the future, it is conceivable that jobs such as truck, taxi, or forklift drivers will be eliminated in the long run. The report states there is a 90% likelihood of this happening, with developers of connected trucks stating: "technical changes that will take place in the next 10 years will be more dramatic than the technical advancements over the last 50 or 60 years". The report points to cost savings of nearly 30% as logistics become cheaper, more reliable and more flexible. At the fully automated stage, costs will be further reduced as the requirement for rest breaks is eliminated, illness or inebriation is no longer a risk factor, and accidents are minimised.

 


 

Nevertheless, the report examines the issue of liability when failure does occur, concluding that: "The liability issues may become an insurmountable obstacle to the introduction of fully automated driving." Currently, driver responsibility is assumed in most cases, with the manufacturer liable only for product defects, and vehicle owners subject to special owner's liability, particularly in European countries. However, if a vehicle is fully automated, with a human driver no longer actively steering, the question arises as to whether damage can still be attributed to the driver or the owner of the car, or whether only the manufacturer of the system can be held liable.

The report's authors examine whether rules applicable to other automated areas, such as aviation, can be applied, but reason that: "it is not possible to apply the liability rules from other automated areas to automated driving", and that international liability standards with clear rules are needed.

Pascale Lagesse, Co-Chair of the IBA GEI, commented: "Without a doubt – AI, robotics and increased automation will bring about changes in society at every level, in every sector and in every nation. This fourth industrial revolution will concurrently destroy and create jobs and paradoxically benefit and impair workers in ways that are not entirely clear, or not yet imagined. What is evident, however, is that a monumental paradigm shift is occurring and that concurrent legal uncertainties need to be addressed within labour and employment laws geared to the technological developments."

She added: "Greater governmental collaboration across borders may be necessary if commerce is to thrive. States as lawmakers will have to be bold in decision, determining what jobs should be performed exclusively by humans, for example: caring for babies; perhaps introducing human quotas in different sectors; taxing companies where machines are used; and maybe introducing a 'made by humans' label for consumer choice. Our new report posits these ideas and more, and could not be more timely."

---

 

 

 

5th March 2017

AI beats top human players at poker

The University of Alberta has announced details of DeepStack, a new artificial intelligence program able to beat professional human players at poker for the first time.

 


 

In 1952, Professor Sandy Douglas created a tic-tac-toe game on the EDSAC, a room-sized computer at the University of Cambridge. One of the first ever computer games, it was developed as part of a thesis on human-computer interaction. Forty-five years later, in 1997, another milestone occurred when IBM's Deep Blue machine defeated Garry Kasparov, the world chess champion. This was followed by Watson, again created by IBM, which appeared on the Jeopardy! game show and beat the top human players in 2011. Yet another breakthrough was Google's DeepMind AlphaGo, which in 2016 defeated the Go world champion Lee Se-dol at a tournament in South Korea.

Now, for the first time ever, an artificial intelligence program has beaten human professional players at heads-up, no-limit Texas hold 'em, a variation of the card game of poker. This historic result in AI has implications far beyond the poker table – from helping to make more decisive medical treatment recommendations to developing better strategic defence planning.

DeepStack has been created by the University of Alberta's Computer Poker Research Group. It bridges the gap between games of "perfect" information – like in checkers, chess, and Go, where both players can see everything on the board – and "imperfect" information games, by reasoning while it plays, using "intuition" honed through deep learning to reassess its strategy with each decision.

"Poker has been a long-standing challenge problem in artificial intelligence," said computer scientist Michael Bowling, principal investigator on the study. "It's the quintessential game of imperfect information, in the sense that players don't have the same information or share the same perspective while they're playing."

Artificial intelligence researchers have long used parlour games to test their theories because the games are mathematical models that describe how decision-makers interact.

"We need new AI techniques that can handle cases where decision-makers have different perspectives," said Bowling. "Think of any real-world problem. We all have a slightly different perspective of what's going on, much like each player only knowing their own cards in a game of poker."

 


 

This latest discovery builds on previous research findings about artificial intelligence and imperfect information games stretching back to the creation of the Computer Poker Research Group in 1996. DeepStack extends the ability to think about each situation during play to imperfect information games using a technique called continual re-solving. This allows the AI to determine the correct strategy for a particular poker situation by using its "intuition" to evaluate how the game might play out in the near future, without thinking about the entire game.

"We train our system to learn the value of situations," said Bowling. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

Thinking about each situation as it arises is important for complex problems like heads-up no-limit hold'em, which has more unique situations than there are atoms in the universe, largely due to players' ability to wager different amounts including the dramatic "all-in." Despite the game's complexity, DeepStack takes action at human speed – with an average of only three seconds of "thinking" time – and runs on a simple gaming laptop.
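
The continual re-solving idea can be pictured as a depth-limited lookahead whose leaves are scored by a learned value estimate rather than by solving the rest of the game. The toy Python below only illustrates that structure; DeepStack itself solves each situation with counterfactual regret minimisation over both players' ranges, using deep neural networks as the leaf evaluator, none of which is reproduced here. The game model, actions and values are invented.

```python
# Toy illustration of depth-limited re-solving with a learned leaf value.
# This is not DeepStack: the transitions and values are invented.

def leaf_value(state):
    """Stand-in for the trained 'intuition' network that scores a situation."""
    return state["pot_share"] * state["pot"]

def apply_action(state, action):
    """Invented transition rules for the toy game."""
    nxt = dict(state)
    if action == "bet":
        nxt["pot"] = state["pot"] + 10
    elif action == "fold":
        nxt["actions"] = []
        nxt["pot_share"] = 0.0
    return nxt

def resolve(state, depth):
    """Pick the action maximising estimated value over a shallow lookahead."""
    if depth == 0 or not state["actions"]:
        return None, leaf_value(state)
    best_action, best_value = None, float("-inf")
    for action in state["actions"]:
        _, value = resolve(apply_action(state, action), depth - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

state = {"pot": 100, "pot_share": 0.55, "actions": ["fold", "call", "bet"]}
print(resolve(state, depth=2))  # re-solve just this situation, not the whole game
```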

To test the approach, DeepStack played against a pool of professional human players recruited by the International Federation of Poker. A total of 33 players from 17 countries were asked to play in a 3,000-hand match, over a period of four weeks. DeepStack beat each of the 11 players who finished their match, with only one outside the margin of statistical significance.

A paper on this study, DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker, is published in the journal Science.

---

 

 

 

28th February 2017

"Handle" – the latest robot from Boston Dynamics

Boston Dynamics, the engineering and robotics company best known for the development of BigDog, has revealed its latest creation. "Handle" is a research robot that stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in previous quadruped and biped robots built by the company, but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.

 

 

 

---

 

 

 

13th February 2017

Types of Artificial Intelligence

This is a guest piece by forum member Yuli Ban.

 

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on the Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000. So my system is this:

• Weak Narrow AI
• Strong Narrow AI
• Weak General AI
• Strong General AI
• Super AI

 


 

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analogue mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.

We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not artificial general intelligence.

I didn’t use that mention of analogue mechanics passingly— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need SAI to run a Word document. Maybe SAI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with Tsar Bomba.

Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom of biological intelligence.

WNAI can basically do one thing really well, but cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

 


 

 

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning’s happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand.

SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart at times, for example, but it’s also easy to find its limits because it’s an AI meant for being a personal virtual assistant, not your digital waifu ala Her. Siri can recognise speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful talk anyhow. Siri might recognise some of your favourite bands or tell a joke, but it can’t also write a comedic novel or actually genuinely have a favourite band of its own. It was programmed to know these things, based on your own preferences.

Even if Siri says it’s “not an AI”, it’s only using pre-programmed responses to say so. SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.

 

Credit: ymgerman

 

 

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature— maybe a worm or even a mouse— but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind— DeepMind’s basic algorithm can basically learn to do just about anything, but it’s not as intelligent as a human being by far. In fact, DeepMind wasn’t even in this category until they began using a differentiable neural computer (DNC), because the earlier system could not retain its previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.

Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”

DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant (VA) to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?

If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.

WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That it has such a limit should be impressive, because it basically signals that we’re right on the cusp of strong general AI and the only thing we lack is the proper power and training.

 


 

 

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or SAI.

Right now, we have no analogue to such a creation. Of course, saying that we never will would be as if we were in the year 1816 and discussing whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code, while SGAI is us building a whole digital brain.

Not to mention there’s the problem of embodied cognition— without a body, any AI would be detached from nearly all experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only to find out he’s only ever made five unique dishes, nor has he ever left his house. For AI to truly make the leap from WGAI to SGAI, it’d need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body— it could replicate all life experiences in a week if need be, if it had enough bodies— but having sensory experiences helps to deepen its intelligence.

 


 

 

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.

The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.

Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.

That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5'8" primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.

An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form. Basically, the moment you create SGAI is the moment you create ASI. From that bit of information, you can begin to understand what AI will be capable of achieving.

 


 

 


 

Recap:
“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.

Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.

Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any pre-programming. 

All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn't mean it's now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good as, or better than any human. It has sapience. SGAI may be very humanlike, but it's ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It's fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

---

 

 

 

31st January 2017

Cafe X Technologies launches robotic cafe

Cafe X Technologies, a new startup based in San Francisco, yesterday opened its first robotic cafe in the U.S. By combining machine learning and robotics, it aims to eliminate the variabilities that bog down today’s coffee experience.

 

Credit: Cafe X Technologies

 

Cafe X Technologies, in partnership with WMF (a leading international maker of coffee machines), has developed a fully automated robotic cafe that integrates hardware and software to blend the functionality of baristas with specialty coffee preparation methods. Cafe X sets itself apart by removing the on-site wait time, the potential for preparation error, and other unexpected variability – which it claims will set a new standard for automation technology and the specialty coffee service industry.

“I’ve long been a big coffee consumer and there’s never a guaranteed seamless experience,” says founder and CEO, 23-year-old Henry Hu. “In today’s world, you have two options for getting a cup of coffee: you’re either in and out with something subpar or you’re waiting in a 15-minute line for a great cappuccino. I started Cafe X to eliminate that inherent compromise and give people access to a tasty cup of coffee consistently and conveniently.”

Customers can order customised espresso-based beverages on the spot at the ordering kiosk, or they can download the Cafe X app onto their mobile device to order in advance. Once the beverage is ready, they use touch screens on the machine to type in a 4-digit order number, which is either sent via text message or displayed in the Cafe X mobile app for iOS and Android. The Mitsubishi robot arm then identifies the customer's drink from the waiting stations and delivers it to them within seconds.
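
A bare-bones sketch of that pickup flow is below; the 4-digit code scheme comes from the article, while the data structures and function names are invented purely for illustration.

```python
# Illustrative only: a toy version of the order-code pickup flow.
import random

waiting_station = {}  # order code -> prepared drink

def place_order(drink: str) -> str:
    """Prepare a drink and return the 4-digit code sent to the customer."""
    code = f"{random.randint(0, 9999):04d}"
    waiting_station[code] = drink
    return code

def collect(code: str) -> str:
    """Called when the customer types their code on the kiosk touch screen."""
    return waiting_station.pop(code, "unknown order code")

code = place_order("8 oz single-origin cappuccino")
print(collect(code))
```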

 

Credit: Cafe X Technologies

 

The machine is very fast – capable of preparing up to 120 drinks per hour, depending on the complexity of the orders. Customers can choose the brand of beans and customise the amount of milk and flavours used.

“This won’t replace baristas, or the coffee shop experience that so many people have come to love – we don’t aim to do that,” says Hu. “What we’re offering is the best possible experience for people who are looking for consistent specialty coffee to-go.”

Cafe X has formed unique partnerships with local roasters to source specific premium ingredients and create unique drinks, which are programmed into the automated coffee systems. Menu prices start at $2.25 for an 8 oz cup and vary depending on the customer's coffee bean selection, which includes single-origin options.

“There’s an entire segment of the consumer population that’s not buying coffee because it doesn’t fit into their present moment,” says John Laird, CEO of AKA Roasters. “We’re truly excited to partner with the Cafe X team to expand to those customers we might not otherwise reach.”

This machine has been installed at San Francisco's Metreon shopping centre. Hu is now in talks with a number of San Francisco-based tech companies to install Cafe X kiosks in their offices.

 

 

 

---

 

 

 

27th January 2017

AI matches humans on standard visual intelligence test

Researchers at Northwestern University have developed an AI system that performs at human levels on a standard visual intelligence test.

 

 

An example question from the Raven's Progressive Matrices standardised test. The test taker should choose answer D, because the relationships between it and the other elements in the bottom row are most similar to the relationships between the elements of the top rows.

 

A team at Northwestern University has developed a computational model that performs at human levels on a standard visual intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Professor Ken Forbus, who holds a PhD in Artificial Intelligence from MIT. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also potentially shrink the gap between computer and human cognition.

The new computational model is built on CogSketch, an AI platform developed in Forbus's laboratory that can solve visual problems and understand sketches by using a process of analogy. He developed the system with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research is published this month in the journal Psychological Review.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardised test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix.

"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence, or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,'" said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
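
As a toy illustration of what relational representations look like in code, each image can be described by a set of relation triples, and candidate answers can be scored by how much relational structure they share with the pattern seen in the rest of the matrix. This is not CogSketch, whose representations and analogy engine are far richer; the relations below are invented, and the example mirrors the caption above, where answer D is correct.

```python
# Toy illustration of relational representations and structural matching.
# Not CogSketch; the relation triples are invented for illustration.

def score(source: set, candidate: set) -> float:
    """Crude structural overlap: shared relations over total relations."""
    return len(source & candidate) / len(source | candidate)

# Relations describing how the top row of a matrix changes, e.g.
# "a dot is added to the square" and "the triangle rotates 90 degrees".
row_pattern = {("adds", "square", "dot"), ("rotates", "triangle", "90deg")}

answers = {
    "A": {("adds", "square", "dot")},
    "D": {("adds", "square", "dot"), ("rotates", "triangle", "90deg")},
}

best = max(answers, key=lambda k: score(row_pattern, answers[k]))
print("best match:", best)  # D shares the most relational structure
```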

"Most artificial intelligence research today concerning vision focuses on recognition, or labelling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

---

 

 

 

24th January 2017

EU considers “electronic personhood” for robots

The European Parliament has proposed a new legal framework to govern the rapidly evolving fields of robotics and artificial intelligence (AI).

 

© AP Images/European Union-EP

 

The European Parliament's Legal Affairs Committee has voted by a majority of 17 votes to two, with two abstentions, to create a robot "bill of rights" covering a range of issues relating to automation and machine intelligence.

“A growing number of areas of our daily lives are increasingly affected by robotics,” said rapporteur Mady Delvaux. “In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.”

Delvaux's report looks at robotics-related issues such as liability, safety and changes in the labour market. Members of the European Parliament (MEPs) have urged the Commission to consider creating a European agency for robotics and artificial intelligence to supply public authorities with technical, ethical and regulatory expertise. They also propose a voluntary ethical conduct code to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.

For example, this code should recommend that robot designers include “kill” switches so that robots can be turned off in emergencies, they add. Harmonised rules are especially urgently needed for self-driving cars. They call for an obligatory insurance scheme and a fund to ensure victims are fully compensated in cases of accidents caused by driverless cars.

In the longer term, the possibility of creating a specific legal status of “electronic persons” for the most sophisticated autonomous robots – so as to clarify responsibility in cases of damage – should also be considered, the MEPs say. The development of robotics is likely to bring about big societal changes, including the loss of jobs in certain fields, says the text. It urges the Commission to follow these trends closely, including new employment models and the viability of the current tax and social system for robotics. The full house will vote on the draft proposals in February, and they will need to be approved by an absolute majority under the legislative initiative procedure.

From warehouse machines to surgical assistance devices, there are now over 1.7 million robots already in existence worldwide. On current trends, industrial and personal service robots could outnumber humans by the 2040s. But despite their rapidly increasing numbers and abilities, their use is still not properly regulated.

"There is a possibility that – within the space of a few decades – AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species," the Committee's report states.

 


 

“We are not talking about weapons,” says Delvaux. “We define robots as physical machines, equipped with sensors and interconnected so they can gather data. The next generation of robots will be more and more capable of learning by themselves. The most high-profile ones are self-driving cars – but they also include drones, industrial robots, care robots, entertainment robots, toys, robots in farming.

“When self-learning robots arise, different solutions will become necessary and we are asking the Commission to study options. One could be to give robots a limited 'e-personality' [comparable to 'corporate personality', a legal status which enables firms to sue or be sued] at least where compensation is concerned. It is similar to what we now have for companies, but it is not for tomorrow. What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.”

So in the meantime, who should be responsible in case of damage? The owner, the manufacturer, the designer, or the programmer?

“We have two options,” Delvaux continues. “According to the principle of strict liability it should be the manufacturer who is liable, because he is best placed to limit the damage and deal with providers. The other option is a risk assessment approach, according to which tests have to be carried out beforehand and compensation has to be shared by all stakeholders. We also propose there should be compulsory insurance, at least for the big robots.”

Delvaux's report also mentions that some vulnerable people can become emotionally attached to their care robots. How can we prevent this happening?

“We always have to remind people that robots are not human, and will never be,” she says. “Although they might appear to show empathy, they cannot feel it. We do not want robots like they have in Japan, which look like people. We have proposed a charter setting out that robots should not make people emotionally dependent on them. You can be dependent on them for physical tasks – but you should never think that a robot loves you or feels your sadness.”

People who fear they will lose their jobs are told that robots will actually create new jobs. However, they might only create roles for highly-skilled people and replace low-skilled workers. How can this be solved?

“I believe this is the biggest challenge to our society and to our educational systems,” she adds. “We do not know what will happen. I believe there will always be low skilled jobs. Robots will not replace humans; there will be a cooperation between both. We ask the Commission to look at the evolution, what kind of tasks will be taken over by robots. It can be a good thing if they are used for hard work. For example, if you have to carry heavy goods or if the job is dangerous. We have to monitor what is happening and then we have to be prepared for every scenario.”

The report also deals with the issue of whether we should change our social security systems and think about a universal basic income (UBI), because if there are huge numbers of permanently unemployed people, we have to ensure they can have a decent life.

---

 

 

 

6th January 2017

IBM predicts five innovations for the next five years

IBM has unveiled its annual "5 in 5" – a list of ground-breaking innovations that will change the way people work, live, and interact during the next five years.

 

 

 

In 1609, Galileo built his first telescope and saw our cosmos in an entirely new way. His observations supported the theory that the Earth and other planets in our Solar System revolve around the Sun, which until then had been impossible to confirm by direct observation. IBM Research continues this tradition through the pursuit of new scientific instruments – whether physical devices or advanced software tools – designed to make what's invisible in our world visible, from the macroscopic level down to the nanoscale.

"The scientific community has a wonderful tradition of creating instruments to help us see the world in entirely new ways. For example, the microscope helped us see objects too small for the naked eye, and the thermometer helped us understand the temperature of the Earth and human body," said Dario Gil, vice president of science & solutions at IBM Research. "With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years."

Innovation in this area could dramatically improve farming, enhance energy efficiency, spot harmful pollution before it's too late, and prevent premature physical and mental decline. IBM's global team of scientists and researchers is steadily bringing these inventions from laboratories into the real world.

The IBM 5 in 5 is based on market and societal trends, as well as emerging technologies from research labs around the world that can make these transformations possible. Below are the five scientific instruments that will make the invisible visible in the next five years.

 


 

With AI, our words will open a window into our mental health

In five years, what we say and write will be used as indicators of our mental health and physical well-being. Patterns in our speech and writing analysed by new cognitive systems – including meaning, syntax and intonation – will provide tell-tale signs of early-stage developmental disorders, mental illness and degenerative neurological diseases to help doctors and patients better predict, monitor and track these conditions. What were once invisible signs will become clear signals of patients' likelihood of entering a certain mental state, or how well their treatment plan is working, complementing regular clinical visits with daily assessments from the comfort of their homes.
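
The article does not describe IBM's models, but the kind of signal being discussed can be pictured with a few simple, commonly studied linguistic features such as sentence length, vocabulary diversity and first-person pronoun rate. The sketch below is illustrative only, is not IBM's method, and has no diagnostic value.

```python
# Illustrative only: toy linguistic features of the sort such systems might
# track over time. Not IBM's method, and not a clinical tool.
import re

def speech_features(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    return {
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "vocabulary_diversity": len(set(words)) / max(len(words), 1),
        "first_person_rate": sum(w in {"i", "me", "my"} for w in words) / max(len(words), 1),
    }

print(speech_features("I went out today. I felt a little better than yesterday."))
```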

 

Credit: IBM

 

 

 

Hyperimaging and AI will give us superhero vision

In five years, new imaging devices using hyperimaging technology and AI will help us "see" beyond visible light, by combining multiple bands of the electromagnetic spectrum. This will reveal valuable insights or potential dangers that may otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and widely accessible in our daily lives, giving us the ability to perceive or see through objects and opaque environmental conditions anytime, anywhere.

A view of invisible, or vaguely visible, objects around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, by using millimetre wave imaging, a camera and other electromagnetic sensors, hyperimaging technology could help a vehicle see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead – as well as its distance and size. Cognitive computing technologies will reason about this data and recognise what might be a tipped-over garbage can versus a deer crossing the road, or a pothole that could result in a flat tire.

 

Credit: Lenovo

 

 

 

Macroscopes will help us understand Earth's complexity in infinite detail

Instrumenting and collecting masses of data from every source in the physical world, big and small, and bringing it together will reveal comprehensive solutions for our food, water and energy needs. Today, the physical world only gives us a glimpse into our highly interconnected and complex ecosystem. We collect exabytes of data – but most of it is unorganised. In fact, an estimated 80 percent of a data scientist's time is spent scrubbing data instead of analysing and understanding what that data is trying to tell us.

Thanks to the Internet of Things (IoT), new sources of data are pouring in from millions of connected objects – from refrigerators, light bulbs and heart rate monitors, to remote sensors such as drones, cameras, weather stations, satellites and telescope arrays. There are already more than six billion connected devices generating tens of exabytes of data per month, with a growth rate of over 30% each year. After successfully digitising information, business transactions and social interactions, we are now in the process of digitising the physical world.

By 2022, we will use machine learning algorithms and software to organise the information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. IBM calls this idea a "macroscope" – but unlike microscopes to see the very small, or telescopes that can see far away, this will be a system to gather all of Earth's complex data together to analyse it for meaning.

By aggregating, organising and analysing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies.
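
A minimal sketch of the data-fusion idea behind the "macroscope", assuming invented field names and thresholds: readings from several independent sources are combined into a single agronomic decision.

```python
# Toy data fusion: combine soil, weather and water-supply readings into one
# irrigation decision. All field names and thresholds are invented.
readings = {
    "soil_moisture_pct": 18.0,    # in-field sensor
    "forecast_rain_mm": 2.0,      # weather service
    "reservoir_level_pct": 64.0,  # satellite / telemetry
}

def irrigation_advice(r: dict) -> str:
    if r["soil_moisture_pct"] < 20 and r["forecast_rain_mm"] < 5:
        return "irrigate tonight" if r["reservoir_level_pct"] > 30 else "irrigate sparingly"
    return "no irrigation needed"

print(irrigation_advice(readings))
```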

 


 

 

 

Medical labs "on a chip" will serve as health detectives for tracing disease at the nanoscale

In five years, new medical labs on a chip will serve as nanotechnology health detectives – tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyse a disease that would normally be carried out in a full-scale biochemistry lab.

Lab-on-a-chip technology will eventually be packaged in a handheld device. This will allow people to quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids – such as saliva, tears, blood and sweat – sending this information securely into the cloud from the comfort of their home. There it will be combined with real-time health data from other IoT-enabled devices, like sleep monitors and smart watches, and analysed by AI systems for insights. Taken together, this data will give an in-depth view of our health, alerting us to the first signs of trouble – helping to stop disease before it progresses.

IBM scientists are developing nanotechnology that can separate and isolate bioparticles down to 20 nanometres in diameter, a scale that gives access to DNA, viruses, and exosomes. These particles could be analysed to potentially reveal the presence of disease even before we have symptoms.

 

[Image: Medical lab on a chip. Credit: IBM]

 

 

 

Smart sensors will detect environmental pollution at the speed of light

In five years, new sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real-time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events.

IBM is researching silicon photonics – an emerging technology that transfers data by light, for computing literally at the speed of light. These chips could be embedded in a network of sensors on the ground or within infrastructure, or even fly on autonomous drones; generating insights that, combined with real-time wind data, satellite data, and other historical sources, will produce complex environmental models to detect the origin and quantity of pollutants as they occur.
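In very simplified form, the continuous monitoring loop described here could look something like the following. The sensor IDs, methane baseline, alert threshold and the crude weighted-position estimate are all assumptions made for the sketch, standing in for the real photonics hardware and environmental models.

```python
# Toy leak detector for a network of methane sensors along a pipeline.
# Thresholds, readings and sensor positions are illustrative assumptions only.

BASELINE_PPM = 2.0      # assumed normal background methane concentration
ALERT_PPM = 10.0        # assumed alert threshold

readings = {               # sensor id -> (position along pipeline in km, ppm)
    "s01": (0.0, 2.1),
    "s02": (1.5, 2.4),
    "s03": (3.0, 14.8),    # elevated reading
    "s04": (4.5, 11.2),    # elevated reading
}

def find_leaks(readings):
    """Flag sensors above threshold and guess the leak location as the
    concentration-weighted position of the elevated sensors."""
    elevated = {k: v for k, v in readings.items() if v[1] > ALERT_PPM}
    if not elevated:
        return None
    total_excess = sum(ppm - BASELINE_PPM for _, ppm in elevated.values())
    location = sum(pos * (ppm - BASELINE_PPM)
                   for pos, ppm in elevated.values()) / total_excess
    return {"sensors": sorted(elevated), "approx_km": round(location, 2)}

print(find_leaks(readings))
```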

 

[Image: Smart sensor. Credit: IBM]

 

---


 

 

28th December 2016

Giant robot could patrol military borders

A four metre (13 ft), 1.5-ton bipedal robot, designed and built by military scientists in South Korea, has taken its first steps.

As demonstrated in the video below, "Method-2" can hold a pilot who sits inside the torso and controls its arms and legs, allowing it to walk. Fans of science fiction movies like Aliens and Avatar will notice the similarity with hi-tech machines depicted in those films. The robot is so large and heavy that it shakes the ground nearby when walking.

Its creation has involved the work of 30 engineers at robotics company Hankook Mirae Technology, guided by lead designer Vitaly Bulgarov who has previously worked on films such as Transformers, Robocop and Terminator.

"One of the most common questions we get is about the power source," he said on Facebook. "The company’s short-term goals include developing robotic platforms for industrial areas where having a tethered robot is not an issue. Another short-term, real world application includes mounting only the top part of the robot on a larger wheeled platform – solving the problem of locomotion through an uneven terrain, as well as providing enough room for sufficient power source."

“Our robot is the world’s first manned bipedal robot and is built to work in extreme hazardous areas where humans cannot go (unprotected),” said company chairman Yang Jin-Ho. He has invested 242bn won ($200 million) in the project since 2014 to "bring to life what only seemed possible in movies and cartoons".

The company has already received inquiries from manufacturing, construction, entertainment and other industries. There have even been questions about its possible deployment along the Demilitarised Zone with North Korea. It might also be used for cleaning up disaster sites like Fukushima. However, the machine needs further research and development first to improve its balance and power systems. At present, it remains tethered by a power cable, but if all goes according to plan, it should be able to move more freely within the next couple of years. The price tag for Method-2 will be 10bn won ($8.3 million).

 

 

---


 

 

27th December 2016

The future of robotics: 10 predictions for 2017 and beyond

Technology research firm International Data Corporation (IDC) has published a report highlighting the key drivers for robotics and how these are likely to accelerate developments from 2017 through 2020.

 


 

International Data Corporation (IDC) has published a report titled “IDC FutureScape: Worldwide Robotics 2017 Predictions", which highlights the key drivers for robotics and how these are likely to shape the development of technology in the planning horizon of 2017 through 2020.

"Technological development in artificial intelligence, computer vision, navigation, MEMS sensor, and semiconductor technologies continue to drive innovation in the capability, performance, autonomy, ease of use, and cost-effectiveness of industrial and service robots," says Dr. Jing Bing Zhang, Research Director.

Dr. Zhang revealed the strategic top ten predictions and major robotics trends that are set to present both opportunities and challenges to IT leaders during 2017 and beyond.

 

[Image: Robot fruit picker]

 

Prediction 1: Robot as a Service. By 2019, 30% of commercial service robotic applications will be in the form of a "Robot as a Service" business model, reducing costs for robot deployment.

Prediction 2: Chief Robotics Officer. By 2019, 30% of leading organisations will implement a Chief Robotics Officer role and/or define a robotics-specific function within the business.

Prediction 3: Evolving Competitive Landscape. By 2020, companies will have a greater choice of vendors, as new players enter the US$80-billion Information & Communications Technology (ICT) market to support robotics deployment.

Prediction 4: Robotics Talent Crunch. By 2020, robotics growth will accelerate the talent race, leaving 35% of robotics-related jobs vacant, while the average salary increases by at least 60%.

Prediction 5: Robotics Will Face Regulation. By 2019, governments will begin implementing robotics-specific regulations to preserve jobs and to address concerns of security, safety, and privacy.

 

[Image: Automated hospital pharmacy, capable of tracking, preparing and dispensing medication automatically]

 

Prediction 6: Software Defined Robot. By 2020, 60% of robots will depend on cloud-based software to define new skills, cognitive capabilities, and application programs, leading to the formation of a "robotics cloud" marketplace.

Prediction 7: Collaborative Robot. By 2018, 30% of all new robotic deployments will be smart collaborative robots that operate three times faster than today's robots and are safe for work around humans.

Prediction 8: Intelligent RoboNet. By 2020, 40% of commercial robots will become connected to a mesh of shared intelligence, resulting in 200% improvement in overall robotic operational efficiency.

Prediction 9: Growth Outside Factory. By 2019, 35% of leading organisations in logistics, health, utilities, and resources will explore the use of robots to automate operations.

Prediction 10: Robotics for E-commerce. By 2018, 45% of the 200 leading global e-commerce and omni-channel commerce companies will deploy robotics systems in their order fulfilment warehousing and delivery operations.

"Robotics will continue to accelerate innovation, thus disrupting and changing the paradigm of business operations in many industries. IDC expects to see stronger growth of robotics adoption outside the traditional manufacturing factory floor, including logistics, health, utilities and resources industries. We encourage end-user companies to embrace and assess how robotics can sharpen their company's competitive edge by improving quality, increasing operational productivity and agility, and enhancing experiences of all stakeholders," Dr. Zhang concludes.

 

[Image: Robotic kitchen hands. Credit: Moley Robotics]

 

---


 

 

1st December 2016

Almost half of tech professionals expect their job to be automated within ten years

45% of technology professionals believe a significant part of their job will be automated by 2027 – rendering their current skills redundant. Changes in technology are so rapid that 94% say their career would be severely limited if they didn't teach themselves new technical skills.

That's according to the Harvey Nash Technology Survey 2017, representing the views of more than 3,200 technology professionals from 84 countries.

The chance of automation varies greatly with job role. Testers and IT Operations professionals are most likely to expect their job role to be significantly affected in the next decade (67% and 63% respectively). Chief Information Officers (CIOs) and Vice Presidents of Information Technology (VP IT) expect to be least affected (31%), along with Programme Managers (30%).

David Savage, associate director, Harvey Nash UK, commented: "Through automation, it is possible that ten years from now the Technology team will be unrecognisable in today's terms. Even for those roles relatively unaffected directly by automation, there is a major indirect effect – anything up to half of their work colleagues may be machines by 2027."

 


 

In response to automation technology, professionals are prioritising learning over any other career development tactics. Self-learning is significantly more important to them than formal training or qualifications; only 12 per cent indicate "more training" as a key thing they want in their job and only 27% see gaining qualifications as a top priority for their career.

Despite the increase in automation, the survey reveals that technology professionals remain in high demand, with participants receiving at least seven headhunt calls in the last year. Software Engineers and Developers are most in demand, followed by Analytics / Big Data roles. Respondents expect the most important technologies in the next five years to be Artificial Intelligence, Augmented / Virtual Reality and Robotics, as well as Big Data, Cloud and the Internet of Things. Unsurprisingly, these are also the key areas cited in what are the "hot skills to learn".

"Technology careers are in a state of flux," says Simon Hindle, a director at Harvey Nash Switzerland. "On one side, technology is 'eating itself', with job roles increasingly being commoditised and automated. On the other side, new opportunities are being created, especially around Artificial Intelligence, Big Data and Automation. In this rapidly changing world, the winners will be the technology professionals who take responsibility for their own skills development, and continually ask: 'where am I adding value that no other person – or machine – can add?'"

 


 

Key highlights from the Harvey Nash Technology Survey 2017:

AI growth: The biggest technology growth area is expected to be Artificial Intelligence (AI). 89% of respondents expect it to be important to their company in five years' time, almost four times the current figure of 24%.

Big Data is big, but still unproven. 57% of organisations are implementing Big Data at least to some extent. For many, it is moving away from being an 'experiment' into something more core to their business; 21% say they are using it in a 'strategic way'. However, only three in ten organisations with a Big Data strategy are reporting success to date.

Immigration is key to the tech industry, and Brexit is a concern. The technology sector is overwhelmingly in favour of immigration; 73% believe it is critical to their country's competitiveness. 33% of respondents to the survey were born outside the country in which they are currently working. Almost four in ten tech immigrants in the UK are from Europe, equating to one in ten of the entire tech working population in the UK. Moreover, UK workers make up over a fifth of the tech immigrant workforce of Ireland and Germany.

Where are all the women? This year's report reveals that 16% of respondents are women; not very different from the 13% who responded in 2013. The pace of change is glacial and – at this rate – it will take decades before parity is reached.

Tech people don't trust the cloud. Four in ten have little or no trust in how cloud companies are using their personal data, while five in ten at least worry about it. Trust in the cloud is affected by age (the older you are, the less you trust).

The end of the CIO role? Just 3% of those under 30 aspire to be a CIO; instead they would prefer to be a CTO (14% chose this), entrepreneur (19%) or CEO (11%). This suggests that the traditional role of the CIO is relatively unattractive to Gen Y.

Headhunters' radar: Software Engineers and Developers get headhunted the most, followed closely by Analytics / Big Data roles. At the same time, 75% believe recruiters are too focused on assessing technical skills, and overlook good people as a result.

 


 

 

Supporting data from the survey (global averages):

 

Which technologies are important to your company now, and which do you expect to be important in five years' time?


 

Agree or disagree? Within ten years, a significant part of my job that I currently perform will be automated.


 

---


 

 

25th November 2016

Tesla demonstrates its self-driving car technology

Innovative American car company Tesla has released a video showcasing the self-driving technology that will be included in all of the vehicles it manufactures from now on.

 

 

 

The video above demonstrates just how advanced Tesla's Enhanced Autopilot hardware is. The time-lapse footage tracks the car on its journey as it correctly follows the rules of the road, identifying road signs, traffic management systems and other road users. A person is seen sitting in the car, but the video makes clear that this is purely for legal reasons.

The automated system comes equipped with eight cameras, providing full 360° visibility around the vehicle at up to 250 metres' range. A dozen updated ultrasonic sensors detect both hard and soft objects at nearly twice the distance of Tesla's previous hardware. A forward-facing radar gives additional data about the driving environment on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead.

Tesla's Chief Executive Elon Musk certainly has faith in the technology and has predicted that by the end of 2017 a Tesla will be able to drive itself from one US coast to the other. Drivers wanting to adopt this new technology will have to be patient, however: in addition to regulatory approval, Tesla plans to conduct millions of miles of testing to ensure the system is safe to operate.

 


 

Earlier this month, Tesla agreed a deal to buy Grohmann Engineering, a German specialist in automated manufacturing, in a bid to accelerate production. The firm's founder Klaus Grohmann will also be joining Tesla to head a new division within the automaker, called Tesla Advanced Automation Germany.

"Because automation is such a vital part of the future of Tesla, the phrase I've used before is that it's about building the machine that's building the machine," Musk commented. "That actually becomes more important than the machine itself as the volume increases. We think it's important to bring in world-class engineering talent and our first choice was Grohmann."

---


 

 

13th November 2016

Machine learning can identify a suicidal person

Using a person's spoken or written words, a new computer algorithm identifies with high accuracy whether that person is suicidal, mentally ill but not suicidal, or neither.

 


 

A new study shows that technology known as machine learning is up to 93% accurate in correctly classifying a suicidal person and 85% accurate in identifying a person who has a mental illness but is not suicidal, or neither. These results provide strong evidence for using intelligent software as a decision-support tool to help clinicians and caregivers identify and prevent suicidal behaviour.

"These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed," explains John Pestian, PhD, professor in Biomedical Informatics & Psychiatry at Cincinnati Children's Hospital Medical Centre and the study's lead author. "When you look around healthcare facilities, you see tremendous support from technology, but not so much for those who care for mental illness. Only now are our algorithms capable of supporting those caregivers. This methodology can easily be extended to schools, shelters, youth clubs, juvenile justice centres, and community centres, where earlier identification may help to reduce suicide attempts and deaths."

Pestian and his team enrolled 379 patients over the study's 18-month period – from emergency departments, as well as inpatient and outpatient centres, across three sites. Those enrolled included patients who were suicidal, diagnosed as mentally ill but not suicidal, or neither (serving as a control group).

Each patient completed standardised behavioural rating scales and participated in a semi-structured interview, answering five open-ended questions to stimulate conversation such as "Do you have hope?" "Are you angry?" and "Does it hurt emotionally?"

The researchers extracted and analysed both verbal and non-verbal language from the data. They then used machine learning algorithms to classify the patients into one of the three groups. Their results showed that machine learning algorithms could tell the difference between the groups with an accuracy of up to 93%. The scientists also noticed that the control patients tended to laugh more during interviews, sigh less, and express less anger, less emotional pain and more hope.
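To make the approach concrete, here is a heavily simplified sketch of a three-way classifier of the kind the study describes, using scikit-learn. The handful of feature counts and labels are invented for illustration; the real study worked from full interview transcripts and acoustic cues rather than these toy tallies.

```python
# Illustrative sketch of a three-way classifier over verbal and non-verbal
# interview features. The features and tiny dataset are invented.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row: [laughs per interview, sighs, anger words, hope words, pain words]
X = [
    [5, 1, 0, 4, 0],   # control-like profile
    [4, 2, 1, 3, 1],
    [1, 6, 3, 1, 5],   # suicidal-like profile
    [0, 5, 4, 0, 6],
    [2, 3, 4, 1, 2],   # mentally ill but not suicidal
    [2, 4, 3, 2, 2],
]
y = ["control", "control", "suicidal", "suicidal", "ill", "ill"]

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=2))   # toy accuracy estimate only
```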

This software could become more and more useful in the future, as depression is expected to become the number one global disease burden by 2030. However, such intelligent algorithms may raise concerns over privacy and civil liberties, with potential for information to be abused. For example, authorities might use the software to spy on citizens as they communicate via email or social media, perhaps deciding from the data and wording style that a certain individual is dangerous and must be imprisoned, even if that person is actually innocent.

The study is published in the journal Suicide and Life-Threatening Behavior.

---


 

 

13th November 2016

AI can beat humans at lip-reading

The University of Oxford has demonstrated "LipNet", a new AI algorithm capable of lip-reading over 40% more accurately than a real person.

 

 

 

2016 has been a big year for artificial intelligence, with many important breakthroughs that we've covered on our blog. Yet again, what was once confined to science fiction has become a reality, as this week a research team presented a new AI lip-reading system able to beat humans.

The University of Oxford's Department of Computer Science has developed "LipNet", a visual recognition system that can process whole sentences and learn which letter corresponds to the slightest mouth movement.

"The end-to-end model eliminates the need to segment videos into words before predicting a sentence," the research team explains. "LipNet requires neither hand-engineered spatiotemporal visual features, nor a separately-trained sequence model."

While an experienced human lip-reader can achieve an accuracy of 52%, LipNet reaches 93%. It's eerily reminiscent of HAL 9000, the sentient computer in Arthur C. Clarke's 2001: A Space Odyssey, which could read the astronauts' lips when it couldn't hear them.
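For readers curious what an end-to-end, sentence-level lip-reading model looks like in code, below is a minimal PyTorch sketch in the spirit of LipNet: 3D convolutions over the video frames, a recurrent layer, and per-frame outputs suitable for CTC loss so that no word segmentation is required. The layer sizes are illustrative and far smaller than the published architecture.

```python
# Minimal sketch of an end-to-end lipreading network (not the real LipNet).
import torch
import torch.nn as nn

class TinyLipReader(nn.Module):
    def __init__(self, vocab_size=28):            # 26 letters + space + CTC blank
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 4, 4)),
        )
        self.gru = nn.GRU(input_size=16 * 16 * 16, hidden_size=128,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, vocab_size)

    def forward(self, video):                      # video: (batch, 3, frames, 64, 64)
        x = self.conv(video)                       # -> (batch, 16, frames, 16, 16)
        x = x.permute(0, 2, 1, 3, 4).flatten(2)    # -> (batch, frames, 4096)
        x, _ = self.gru(x)                         # -> (batch, frames, 256)
        return self.fc(x).log_softmax(-1)          # per-frame letter log-probabilities

model = TinyLipReader()
clips = torch.randn(2, 3, 75, 64, 64)              # two fake 75-frame mouth-crop clips
log_probs = model(clips)                            # shape (2, 75, 28), ready for nn.CTCLoss
print(log_probs.shape)
```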

However, while LipNet has proven to be very promising, it is still at a relatively early stage of development. So far, it has been trained and tested on short, formulaic videos that show a well-lit person face-on. In its current form, LipNet could not be used on more challenging video footage – so it is currently unsuitable for use as a surveillance tool. But the team is keen to develop it further in real-world situations, especially as an aid for people with hearing disabilities.

---


 

 

25th October 2016

AI predicts outcomes of human rights trials

The judicial decisions of the European Court of Human Rights (ECtHR) have been predicted to 79% accuracy using an artificial intelligence (AI) method developed by researchers at University College London (UCL), the University of Sheffield and the University of Pennsylvania.

The method is the first to predict the outcomes of a major international court by automatically analysing case text using a machine learning algorithm.

"We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes," explained Dr Nikolaos Aletras, who led the study at UCL Computer Science. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights."

In developing their method, the team found that judgements by the ECtHR are highly correlated to non-legal facts, rather than directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, 'realists' rather than 'formalists'. This supports findings from previous studies of the decision-making processes of other high level courts, including the US Supreme Court.

"The study, which is the first of its kind, corroborates the findings of other empirical work on the determinants of reasoning performed by high level courts. It should be further pursued and refined, through the systematic examination of more data," explained co-author Dr Dimitrios Tsarapatsanis, Lecturer in Law at the University of Sheffield.

 

[Image: AI robot judge]

 

A team of computer and legal scientists from the UK worked alongside Daniel Preoțiuc-Pietro – a postdoctoral researcher in natural language processing and machine learning from the University of Pennsylvania – to extract case information published by the ECtHR. They identified English language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention. Article 3 forbids torture and inhuman and degrading treatment (250 cases); Article 6 protects the right to a fair trial (80 cases) and Article 8 provides a right to respect for one's "private and family life, his home and his correspondence" (254 cases). They then applied an AI algorithm to find patterns in the text. To prevent bias and mislearning, they selected an equal number of violation and non-violation cases.

The most reliable factors for predicting the court's final decision were found to be the language used, as well as the topics and the circumstances mentioned in the case text. The 'circumstances' section includes information about the factual background to the case. By combining the information extracted from the abstract 'topics' that the cases cover and 'circumstances' across data for all three Articles, an accuracy of 79% was achieved.
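A toy version of this text-based prediction pipeline might look as follows, loosely mirroring the paper's use of n-gram features over sections of the case text with a linear classifier. The two example "circumstances" snippets and the query below are invented for illustration.

```python
# Rough sketch of predicting violation vs. non-violation from case text.
# The example texts are invented; the real study used hundreds of cases.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

cases = [
    "applicant held in overcrowded cell without medical care",       # Article 3-style facts
    "applicant's complaint examined promptly by domestic courts",
]
labels = ["violation", "no violation"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, labels)
print(model.predict(["applicant detained without access to medical care"]))
```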

"Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge – so this is the first time judgements have been predicted using analysis of text prepared by the court," said co-author Dr Lampos, UCL Computer Science.

"There is no reason why it cannot be extended to understand testimonies from witnesses or lawyers' notes," said Dr Aletras.

The study appears in the journal PeerJ Computer Science.

---


 

 

21st October 2016

AI milestone: a new system can match humans in conversational speech recognition

A new automated system that can achieve parity and even beat humans in conversational speech recognition has been announced by researchers at Microsoft.

 


 

A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognises spoken words in a conversation as well as a real person does.

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9%, which is equal to that of professional transcriptionists and is the lowest ever recorded against the industry standard Switchboard speech recognition task.
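Word error rate is the metric behind these figures: the number of substitutions, deletions and insertions needed to turn the system's transcript into the reference, divided by the number of reference words. A small sketch, using an invented 17-word sentence in which a single substituted word gives roughly the 5.9% reported:

```python
# Word error rate via edit distance between reference and hypothesis words.

def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

ref = "the quick brown fox jumps over the lazy dog near the old barn by the river bank"
hyp = "the quick brown fox jumps over the lazy dog near the old barn by the river tank"
print(round(wer(ref, hyp), 3))   # 1 error in 17 words -> about 0.059
```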

“We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 

[Chart: speech recognition trend]

 

In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

The team used Microsoft's own Computational Network Toolkit – an open source, deep learning framework. This was able to process deep learning algorithms across multiple computers running specialised GPUs, greatly improving speed and the quality of research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana.

“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256

---


 

 

19th September 2016

How AI might affect urban life in 2030

A diverse panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence might affect life in a typical North American city, and to spur discussion about how to ensure that AI is deployed in ways that are safe, fair and beneficial.

 


 

In December 2014, Stanford University began a century-long project known as the One Hundred Year Study on Artificial Intelligence (or AI100). This was intended to study the long-term implications of artificial intelligence in all aspects of work, life and play – providing guidance on the ethical development of smart software, sensors and machines. The team behind AI100 has now published the results of their first investigation, titled: "Artificial Intelligence and Life in 2030."

"We believe specialised AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life," said Peter Stone, a computer scientist from the University of Texas at Austin and chair of the 17-member panel of international experts. "But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared."

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

"AI technologies can be reliable and broadly beneficial," Grosz said. "Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion."

The report investigates eight areas of human activity in which AI technologies are already beginning to affect urban life, in ways that will become increasingly pervasive and profound by 2030. The 28,000-word study includes a glossary to help non-technical readers understand new AI applications – such as how computer vision might help screen tissue samples for cancers, for example, or how natural language processing will enable computers to grasp not simply the literal definitions, but the connotations and intent, behind words.

"It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared," the researchers write in their report, noting the need for public discourse. "Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies. [...] Who is responsible when a self-driven car crashes, or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?"

 


 

The eight sections discuss:

• Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

• Home/service robots: Like the robotic vacuum cleaners already in some homes, specialised robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

• Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

• Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

• Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organise and deliver media in more engaging, personalised and interactive ways.

• Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

• Public safety and security: Cameras, drones and software to analyse crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

• Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

"Until now, most of what is known about AI comes from science fiction books and movies," Stone says. "This study provides a realistic foundation to discuss how AI technologies are likely to affect society." Meanwhile, Grosz said she hopes the AI 100 report "initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies."

The full report can be downloaded at https://ai100.stanford.edu/sites/default/files/ai_100_report_0831fnl.pdf

You can listen to a podcast below:

 

 

---


 

 

7th September 2016

Technology vs. Humanity: The coming clash between man and machine

In his latest book, Technology vs. Humanity, futurist Gerd Leonhard asks the question: "How can society stay in control as machines enter deeper into our lives, our bodies, and eventually our brains?"

 

 

 

Fast Future Publishing has announced the launch of a groundbreaking new book by futurist and humanist Gerd Leonhard. This will explore the critical challenges and choices we face in balancing mankind's urge to upgrade and automate everything (including human biology itself) with our quest for freedom and happiness.

Technology vs. Humanity is the second in the company's "FutureScapes" series of books, looking at the core issues and ideas shaping mankind's future. The book is available at fastfuturepublishing.com and there is a 20% pre-launch discount for purchases made before the September 8th launch date.

The ever-accelerating pace of technology has driven a migration from the mainframe to the desktop, to the laptop, to the smartphone, to wearables and soon to brain-computer interfaces. As we blur the distinction between human and machine with implants and ingestible inserts, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange on what society needs and wants, and how best to steer the relentless pace of innovation.

Leonhard argues that, "Before it's too late, we must stop and ask the big questions: How do we embrace technology without becoming it? How do we ensure all technological progress is geared towards the service of humanity? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth, and we as humans have to be in control of it."

 

[Image: Technology vs. Humanity book]

 

Leonhard puts a spotlight on key issues and developments that will shape our future world:

• What are the technological "megashifts" that will transform life, work, business, the economy, and government?

• Are we approaching the end of work-as-we-know-it?

• Will scientific advances enable the next generation to live for centuries?

• Why don't Big Data, the Internet of Things, and Artificial Intelligence have the same kind of global governance policies and standards that we've demanded and imposed on previous technological revolutions, such as nuclear power?

• How can we address the urgent need for "digital ethics" before Silicon Valley assumes control of the species previously known as Homo sapiens?

If we are, indeed, the last fully "human-only" generation in history, shouldn't 2016 see the beginning of a conversation about where all this is leading? Gerd asks: What moral values are we prepared to stand up for – before being "human" alters its meaning forever?

Technology vs. Humanity by Gerd Leonhard is published 8th September by Fast Future Publishing.

---


 

 

26th August 2016

World's first commercial drone delivery service

Domino's Pizza Enterprises Limited has joined forces with a global leader in drone deliveries, Flirtey, to launch the first commercial drone delivery service in the world.

 

 

 

Domino's Pizza Enterprises Limited (Domino's) has joined forces with Flirtey, a global leader in drone deliveries, to launch the first commercial drone delivery service in the world. The two companies exhibited the first stage of their partnership with a demonstration of pizza delivery by drone yesterday in Auckland, New Zealand. The successful demonstration was also attended by the Civil Aviation Authority (CAA) and Minister of Transport Simon Bridges.

The test was conducted under Civil Aviation Rules Part 101 and marks a final step in Flirtey's approval process – following which, the partnership will aim to connect people with pizza via CAA-approved trial store-to-door drone deliveries from a selected Domino's New Zealand store with flights to customer homes later this year.

New Zealand was selected as the launch market given that its current regulations allow for businesses to embrace unmanned aircraft opportunities, which enable the gradual testing of new and innovative technologies. Domino's Group CEO and Managing Director, Don Meij said the company's growth in recent years had led to a significant increase in the number of deliveries and that Domino's is constantly looking for innovative and futuristic ways to improve its service.

"With the increased number of deliveries we make each year, we were faced with the challenge of ensuring our delivery times continue to decrease and that we strive to offer our customers new and progressive ways of ordering from us," he said. "Research into different delivery methods led us to Flirtey. Their success within the airborne delivery space has been impressive and it's something we have wanted to offer our customers."

The use of drones as a delivery method is designed to work alongside Domino's current delivery fleet and will be fully integrated into online ordering and GPS systems.

"Domino's is all about providing customers with choice and making customer's lives easier. Adding innovation such as drone deliveries means customers can experience cutting-edge technology and the convenience of having their Supreme pizza delivered via air to their door. This is the future. We have invested heavily to provide our stores with different delivery fleet options – such as electric scooters, e-bikes and even the Domino's Robotic Unit - DRU that we launched earlier this year.

"We've always said that it doesn't make sense to have a 2-tonne machine delivering a 2-kilogram order. DRU DRONE is the next stage of the company's expansion into the artificial intelligence space and gives us the ability to learn and adopt new technologies in the business."

The Flirtey delivery drone is constructed from carbon fibre, aluminium and 3D printed components. It is a lightweight, autonomous and electrically driven unmanned aerial vehicle. It lowers its cargo via tether and has built-in safety features such as low battery return to safe location and auto-return home in case of low GPS signal or communication loss.

 

[Image: World's first commercial drone delivery service]

 

The reach that a drone offers is greater than that of other current delivery options, which are restricted by traffic, roads and distance. Domino's will look to the results of the trial to determine where drones are implemented further.

"What drones allow us to do is to extend that delivery area by removing barriers such as traffic and access, as well as offering a much faster, safer delivery option, which means we can deliver further afield than we currently do to our rural customers while reaching our urban customers in a much more efficient time."

The trial flights are set to commence later this year following the beginning of daylight savings in New Zealand. Domino's will offer Drone Delivery Specials at the launch of the trial with plans to extend the dimensions, weight and distance of deliveries, based on results and customer feedback.

"These trial deliveries will help provide the insight we need to extend the weight carried by the drone and distance travelled," said Meij. "It is this insight that we hope will lead to being able to consider a drone delivery option for the majority of our orders. We are planning a phased trial approach which is based on the CAA granting approval, as both Domino's and Flirtey are learning what is possible with the drone delivery for our products – but this isn't a pie in the sky idea. It's about working with the regulators and Flirtey to make this a reality."

Flirtey CEO Matt Sweeny said: "Launching the first commercial drone delivery service in the world is a landmark achievement for Flirtey and Domino's, heralding a new frontier of on-demand delivery for customers across New Zealand and around the globe. New Zealand has the most forward-thinking aviation regulations in the world, and with our new partnership, we are uniquely positioned to bring the same revolutionary drone delivery service to customers globally. We are getting closer to the time where you can push a button on your smartphone and have Domino's delivered by drone to your home."

Domino's is looking at opportunities for drone delivery trials in its six other markets – Australia, Belgium, France, The Netherlands, Japan and Germany.

---


 

 

25th August 2016

World's first public trial of self-driving taxi

A company in Singapore is conducting the world's first public trial of a self-driving taxi. If successful, the service will be launched in 2018.

 

[Image: Self-driving taxi in Singapore]

 

nuTonomy, a company developing state-of-the-art software for self-driving cars, today launched the first-ever public trial of a robo-taxi service. The trial, which will continue on an ongoing basis, is being held within Singapore's "one-north", a 2.5-square-mile business district where nuTonomy has been conducting daily autonomous vehicle (AV) testing since April.

Beginning today, select Singapore residents will be invited to use nuTonomy's ride-hailing smartphone app to book a no-cost ride in a nuTonomy self-driving car that employs the company's sophisticated software, which has been integrated with high-performance sensing and computing components. Rides will be provided in a Renault Zoe or Mitsubishi i-MiEV electric vehicle that nuTonomy has specially configured for autonomous driving. An engineer will ride in the vehicle to observe system performance and assume control if needed to ensure passenger comfort and safety.

Throughout the trial, nuTonomy will collect and evaluate valuable data related to software system performance, vehicle routing efficiency, the vehicle booking process, and the overall passenger experience. This data will enable nuTonomy to refine its software in preparation for the launch of a widely-available commercial robo-taxi service in Singapore from 2018.

 

 

 

Earlier this month, nuTonomy was selected by the Singapore Land Transport Authority (LTA) as an R&D partner, to support the development of a commercial AV service in Singapore. This trial represents the first, rapid result of that partnership. nuTonomy is the first, and to date only, private enterprise approved by the Singapore government to test AVs on public roads.

CEO and co-founder of nuTonomy, Karl Iagnemma, said: "nuTonomy's first-in-the-world public trial is a direct reflection of the level of maturity that we have achieved with our AV software system. The trial represents an extraordinary opportunity to collect feedback from riders in a real-world setting, and this feedback will give nuTonomy a unique advantage as we work toward deployment of a self-driving vehicle fleet in 2018."

Autonomous taxis could eventually reduce the number of cars on Singapore's roads from 900,000 to 300,000, according to Doug Parker, the firm's chief operating officer: "When you are able to take that many cars off the road, it creates a lot of possibilities. You can create smaller roads, you can create much smaller car parks. I think it will change how people interact with the city going forward."

In May of this year, nuTonomy completed a $16m Series A funding led by Highland Capital Partners that included participation from Fontinalis Partners, Signal Ventures, Samsung Ventures, and EDBI, the dedicated corporate investment arm of the Singapore Economic Development Board.

In addition to Singapore, nuTonomy is operating self-driving cars in Michigan and the United Kingdom, where it tests software in partnership with major automotive manufacturers such as Jaguar Land Rover.

 

 

 

---


 

 

15th August 2016

Computer program learns to replicate human handwriting

Researchers at University College London have devised a software algorithm able to scan and replicate almost anyone's handwriting.

 

[Image: Computer-generated handwriting]

 

In a world increasingly dominated by the QWERTY keyboard, computer scientists at University College London (UCL) have developed software which may spark the comeback of the handwritten word, by analysing the handwriting of any individual and accurately replicating it.

The scientists have created "My Text in Your Handwriting" – a programme which semi-automatically examines a handwriting sample as small as one paragraph, and generates new text saying whatever the user wishes, as if the author had handwritten it themselves.

"Our software has lots of valuable applications," says lead author, Dr Tom Haines. "Stroke victims, for example, may be able to formulate letters without the concern of illegibility, or someone sending flowers as a gift could include a handwritten note without even going into the florist. It could also be used in comic books where a piece of handwritten text can be translated into different languages without losing the author's original style."

Published in ACM Transactions on Graphics, the machine learning algorithm is built around glyphs – a specific instance of a character. Authors produce different glyphs to represent the same element of writing – the way one individual writes an "a" will usually be different to the way others write an "a". Although an individual's writing has slight variations, every author has a recognisable style that manifests in their glyphs and spacing. The software learns what is consistent across an individual's style and reproduces this.

 


 

To generate an individual's handwriting, the software analyses and replicates the author's specific character choices, pen-line texture, colour and the inter-character ligatures (the joining-up between letters), as well as vertical and horizontal spacing.
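As a toy illustration of the glyph-based idea, the sketch below stores several samples per character and picks among them so that repeated letters do not look identical, with a single assumed spacing value. The real system additionally models pen-line texture, colour and ligatures; none of the identifiers here come from UCL's software.

```python
# Toy glyph-based text synthesis: several stored samples per character,
# chosen at random, plus a per-author spacing value. All names are made up.

import random

author_glyphs = {                 # character -> list of stroke-path ids
    "h": ["h_01", "h_02"],
    "e": ["e_01", "e_02", "e_03"],
    "l": ["l_01", "l_02"],
    "o": ["o_01"],
}
letter_spacing = 0.8              # assumed average gap, in arbitrary units

def render(text):
    strokes, x = [], 0.0
    for ch in text:
        glyph = random.choice(author_glyphs[ch])   # pick one sampled glyph
        strokes.append((glyph, round(x, 2)))        # place it at the current offset
        x += letter_spacing
    return strokes

print(render("hello"))
```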

Co-author, Dr Oisin Mac Aodha (UCL Computer Science), said: "Up until now, the only way to produce computer-generated text that resembles a specific person's handwriting would be to use a relevant font. The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we've developed removes this problem and so could be used in a wide variety of commercial and personal circumstances."

The system is flexible enough that samples from historical documents can be used with little extra effort. Thus far, the scientists have analysed and replicated the handwriting of such figures as Abraham Lincoln, Frida Kahlo and Arthur Conan Doyle. Infamously, Conan Doyle never actually wrote Sherlock Holmes as saying, "Elementary my dear Watson" but the team have produced evidence to make you think otherwise.

To test the effectiveness of their software, the research team asked people to distinguish between handwritten envelopes and ones created by their automatic software. People were tricked by the computer-generated writing up to 40% of the time. Given how convincing it can be, some may believe this method could help in forging documents – but the team explained it works both ways and could actually help in detecting forgeries.

"Forgery and forensic handwriting analysis are still almost entirely manual processes – but by taking the novel approach of viewing handwriting as texture-synthesis, we can use our software to characterise handwriting to quantify the odds that something was forged," explained Dr Gabriel Brostow, senior author. "For example, we could calculate what ratio of people start their 'o's' at the bottom versus the top and this kind of detailed analysis could reduce the forensics service's reliance on heuristics."

 

 

 

---


 

 

14th July 2016

Robots could build giant telescopes in space

Researchers have published a new concept for space telescope design that uses a modular structure and robot to build an extremely large telescope in space, faster and more efficiently than human astronauts.

 

[Image: Robotic assembly of a space telescope]

 

Enhancing astronomers' ability to peer ever more deeply into the cosmos may hinge on developing larger space-based telescopes. A new concept in space telescope design makes use of a modular structure and an assembly robot to build an extremely large telescope in space, performing tasks that would be too difficult, expensive, or time-consuming for human astronauts.

The Robotically Assembled Modular Space Telescope (RAMST) is described by Nicolas Lee and his colleagues at the California Institute of Technology and the Jet Propulsion Laboratory in an article published this week by the Journal of Astronomical Telescopes, Instruments, and Systems (JATIS).

Ground-based telescopes, while very large and powerful, are limited by atmospheric effects and their fixed location on Earth. Space-based telescopes do not have those problems – but have other limits, such as launch vehicle volume and mass capacity. A new modular space telescope that overcomes restrictions on volume and mass could allow telescope components to be launched incrementally, enabling the design and deployment of truly enormous space telescopes.

The Hubble Space Telescope features a mirror diameter of 2.4 m (7.9 ft). Its successor, the James Webb Telescope – due for launch in 2018 – will be nearly triple this size at 6.5 m (21 ft). A longer-term proposal known as the Advanced Technology Large-Aperture Space Telescope (ATLAST) would be even larger, with a mirror up to 16 m (52 ft) in width. The future concept by Lee and his colleagues, however, would dwarf all of these, spanning 100 m (328 ft). This would be powerful enough to obtain detailed views of exoplanets in other star systems, as well as images from the deep universe with phenomenal clarity.
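To put those apertures in perspective, a telescope's light-gathering power grows with the square of its mirror diameter, so the 100 m concept would collect roughly 237 times as much light as the James Webb Space Telescope:

```python
# Collecting area scales with the square of mirror diameter (area ~ D^2),
# so each aperture can be compared with JWST's 6.5 m mirror directly.
for d in (2.4, 6.5, 16, 100):
    print(f"{d} m mirror -> {(d / 6.5) ** 2:.1f} x JWST's collecting area")
```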

 

[Image: Future space telescopes]

 

The team's paper, "Architecture for in-space robotic assembly of a modular space telescope," focuses primarily on a robotic system to perform tasks in which astronaut fatigue would be a problem. The observatory would be constructed in Earth orbit and operated at the Sun–Earth Lagrange Point 2.

"Our goal is to address the principal technical challenges associated with such an architecture, so that future concept studies addressing a particular science driver can consider robotically assembled telescopes in their trade space," the authors write.

The main features of their proposed architecture include a mirror built with a modular structure, a general-purpose robot to put the telescope together and provide ongoing servicing, and advanced metrology technologies to support the assembly and operation of the telescope. An optional feature is the potential ability to fly the unassembled components of the telescope in formation. The system architecture is scalable to a variety of telescope sizes and would not be limited to particular optical designs.

"The capability to assemble a modular space telescope has other potential applications," says Harley Thronson, a senior scientist for Advanced Astrophysics Concepts at NASA's Goddard Space Flight Centre. "For example, astronomers using major ground-based telescopes are accustomed to many decades of operation, and the Hubble Space Telescope has demonstrated that this is possible in space if astronauts are available. A robotic system of assembly, upgrade, repair, and resupply offers the possibility of very long useful lifetimes of space telescopes of all kinds."

 


 

---


 

 

30th May 2016

MasterCard unveils the first commerce application for humanoid robot Pepper

Customers at Pizza Hut restaurants in Asia will soon get the chance to have their order taken by a robot.

 

[Image: MasterCard commerce app for the Pepper robot]

 

MasterCard has unveiled the first commerce application for SoftBank Robotics' humanoid robot Pepper. The application will be powered by MasterPass, the global digital payment service from MasterCard that connects consumers with merchants, enabling them to make fast, simple, and secure digital payments across channels and devices. Pizza Hut Restaurants Asia P/L will be the inaugural launch partner working together with MasterCard to create innovative customer engagement with Pepper.

A major first step in bringing conversational commerce experiences to merchants and consumers, this new app will extend the robot's ability to integrate customer service, access to information and sales into a seamless and consistent user experience. Pizza Hut Asia will be piloting the Pepper robot for order-taking and personalised engagement in its stores by the end of 2016.

"Consumers have come to expect personalised service, customised offers, and simple and seamless processes both in-store and online," said Tobias Puehse, Vice President for Innovation Management, Digital Payments & Labs at MasterCard. "The app's goal is to provide consumers with a more memorable and personalised shopping experience beyond today's self-serve machines and kiosks, by combining Pepper's intelligence with a secure digital payment experience via MasterPass."

 


 

The robot will be installed in "between six and ten stores in Asia this year," said John Sheldon, Global SVP, Innovation Management, MasterCard Labs. Pepper can speak 19 languages and will "add more intelligence to kiosk ordering. Pepper guides you through the process of placing the order and can answer nutritional questions and communicate any specials."

A customer will be able to initiate an engagement by simply greeting Pepper and pairing their MasterPass account by either tapping the Pepper icon within the wallet or by scanning a QR code on the tablet that the robot holds. After pairing with MasterPass, Pepper can assist cardholders by providing personalised recommendations and offers, additional information on products, or assistance in checking out and paying for items. Pepper will initiate, approve and complete a transaction by connecting to MasterPass via a Wi-Fi connection and the entire transaction happens within the wallet.
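The pairing-and-checkout flow described above could be sketched roughly as follows. The class, method and field names are invented to mirror the article's steps; they are not MasterCard's or SoftBank's actual APIs.

```python
# Hypothetical sketch of the Pepper + MasterPass interaction flow.
# All names are illustrative stand-ins, not real MasterCard/SoftBank APIs.

class FakeMasterPassWallet:
    def pair(self, method):
        print(f"Paired with Pepper via {method}")
        return {"session": "demo-session"}

    def checkout(self, session, order):
        print(f"Charging wallet for {order['qty']} x {order['item']}")
        return {"status": "approved"}

def pepper_session(wallet, order_history=None):
    session = wallet.pair(method="qr_code")            # scan the QR code on Pepper's tablet
    item = order_history[-1] if order_history else "Supreme pizza"
    order = {"item": item, "qty": 1}                    # Pepper takes or recalls the order
    return wallet.checkout(session, order)              # payment completes inside the wallet

print(pepper_session(FakeMasterPassWallet(), order_history=["Hawaiian pizza"]))
```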

Pepper has a number of human-like features. The robots "are intentionally designed to convey emotion," using sensors and cameras "to interpret the emotional state of the person they are interacting with and the cameras that it's using are evaluating the behaviour." For example, if the customer is excited and animated, so, too, would be Pepper. If the customer's movements are more muted, "then it would instead respond with a lot calmer and smaller gestures, so as to put that person at ease." If the customer gives his or her permission, the robot can remember their order history and ask if they want the same food or drink this time.

"We are excited to welcome Pepper to the Pizza Hut family," said Vipul Chawla, Managing Director of Pizza Hut Restaurants Asia. "Core to our digital transformation journey is the ability to make it easier for customers to engage, connect and transact with Pizza Hut. With an order-and-payment-enabled Pepper, customers can now come to expect personalised ordering, reduce wait time for carryout, and have a fun, frictionless user experience."

 

 

 

---


 

 

 
     
   