
Building the hardware for the next generation of artificial intelligence

On a recent Monday morning, Vivienne Sze, an associate professor of electrical engineering and computer science at MIT, spoke with enthusiasm about network architecture design. Her students nodded slowly, as if on the verge of comprehension. When the material clicked, the nods grew in speed and confidence. “Everything crystal clear?” she asked, pausing briefly for a round of returning nods before diving back in.

This new course, 6.S082/6.888 (Hardware Architecture for Deep Learning), is modest in size — capped at 25 for now — compared to the bursting lecture halls characteristic of other MIT classes focused on machine learning and artificial intelligence. But this course is a little different. With a long list of prerequisites and a heavy base of assumed knowledge, students are jumping into deep water quickly. They blaze through algorithmic design in a few weeks, cover the terrain of computer hardware design in a similar period, then get down to the real work: how to think about making these two fields work together.

The goal of the class is to teach students the interplay between two traditionally separate disciplines, Sze says. “How can you write algorithms that map well onto hardware so they can run faster? And how can you design hardware to better support the algorithm?” she asks rhetorically. “It’s one thing to design algorithms, but to deploy them in the real world you have to consider speed and energy consumption.”
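A back-of-the-envelope sketch can make that trade-off concrete. The script below counts multiply-accumulate operations and weight traffic for one convolutional layer under two numeric precisions, the sort of first-order analysis used when co-designing algorithms and hardware. The layer shape and per-operation energy figures are illustrative assumptions, not measurements from the course or from any particular chip:

```python
# First-order cost model for one convolutional layer: a rough way to
# reason about how algorithm choices (layer shape, precision) map onto
# hardware costs (compute and memory traffic). All numbers here are
# illustrative assumptions, not measured values.

def conv_layer_cost(h, w, c_in, c_out, k, bytes_per_weight):
    """Return (MACs, weight_bytes) for a k x k convolution over an h x w map."""
    macs = h * w * c_in * c_out * k * k          # multiply-accumulates
    weight_bytes = c_in * c_out * k * k * bytes_per_weight
    return macs, weight_bytes

# Hypothetical energy costs per operation (rule-of-thumb ratio: an
# off-chip DRAM access costs far more than an on-chip MAC).
ENERGY_PER_MAC_PJ = 1.0
ENERGY_PER_DRAM_BYTE_PJ = 200.0

for name, bpw in [("float32", 4), ("int8 (quantized)", 1)]:
    macs, wbytes = conv_layer_cost(h=56, w=56, c_in=64, c_out=64, k=3,
                                   bytes_per_weight=bpw)
    energy_uj = (macs * ENERGY_PER_MAC_PJ + wbytes * ENERGY_PER_DRAM_BYTE_PJ) / 1e6
    print(f"{name}: {macs/1e6:.1f}M MACs, {wbytes/1024:.0f} KiB weights, ~{energy_uj:.1f} uJ")
```

Under these assumed costs, quantizing the weights cuts memory energy by 4x without changing the MAC count, which is exactly the kind of algorithm-side decision that only pays off when the hardware can exploit it.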

“We are beginning to see tremendous student interest in the hardware side of deep learning,” says Joel Emer, who co-teaches the course with Sze. A professor of the practice in MIT’s Department of Electrical Engineering and Computer Science, and a senior distinguished research scientist at the chip manufacturer Nvidia, Emer has partnered with Sze before. Together they wrote a journal article that provides a comprehensive tutorial and survey of recent advances toward efficient processing of deep neural networks; it serves as the main reference for the course.

In 2016, their group unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices. The groundbreaking chip, called “Eyeriss,” could also help usher in the internet of things.

“I’ve been in this field for more than four decades. I’ve never seen an area with so much excitement and promise in all that time,” Emer says. “The opportunity to have an original impact through building important and specialized architecture is larger than anything I’ve seen before.”

Hardware at the heart of deep learning

Deep learning is a new name for an approach to artificial intelligence called neural networks, a means of doing machine learning in which a computer learns to perform some tasks by analyzing training examples. Today, popular applications of deep learning are everywhere, Emer says. The technique drives image recognition, self-driving cars, medical image analysis, surveillance and transportation systems, and language translation, for instance.
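For readers who want that idea in code, here is a minimal, self-contained sketch of “learning by analyzing training examples”: a single artificial neuron fitted by gradient descent. It is a toy illustration of the learning loop, not a deep network:

```python
import numpy as np

# A single artificial neuron trained by gradient descent to learn the
# OR function from labeled examples. Real deep networks stack many such
# units in layers, but the learning loop has the same shape.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 1.0, 1.0, 1.0])                           # targets (OR)

w = rng.normal(size=2)  # weights, adjusted as examples are analyzed
b = 0.0
lr = 0.5

for step in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid activation
    grad = p - y                     # gradient of the cross-entropy loss
    w -= lr * (X.T @ grad) / len(X)  # update weights from the examples
    b -= lr * grad.mean()

print(np.round(p, 2))  # approaches [0, 1, 1, 1]: the neuron has learned OR
```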

The value of the hardware at the heart of deep learning is often overlooked, says Emer. Practical and efficient neural networks, which computer scientists have researched off and on for 60 years, were infeasible without hardware to support deep learning algorithms. “Many AI accomplishments were made possible because of advances in hardware,” he says. “Hardware is the foundation of everything you can do in software.”

Deep learning techniques are evolving very rapidly, Emer says. “There is a direct need for this sort of hardware. Some of the students coming out of the class might be able to contribute to that hardware revolution.”

Meanwhile, traditional software companies like Google and Microsoft are taking notice, and investing in more custom hardware to speed up the processing for deep learning, according to Sze.

“People are recognizing the importance of having efficient hardware to support deep learning, and of specialized hardware to drive the research forward,” she says. “One of the greatest limitations on progress in deep learning is the amount of computation available.”

New hardware architectures

Real-world deployment is key for Skanda Koppula, a graduate student in electrical engineering and computer science. He is a member of the MIT Formula SAE Race Car Electronics Team.

“We plan to apply some of these ideas in building the perception systems for a driverless Formula student race car,” he says. “And in the longer term, I see myself working toward a doctorate in related fields.”

Valerie Sarge, also a graduate student in electrical engineering and computer science, is taking the course in preparation for a career that involves creating hardware for machine learning applications.

“Deep learning is a quickly growing field, and better hardware architectures have the potential to make a big impact on researchers’ ability to effectively train networks,” she says. “Through this class, I’m gaining some of the skills I need to contribute to designing these architectures.”


Published at Fri, 01 Dec 2017 04:59:59 +0000


AI Trends Interviews Martin Mrugal, Chief Innovation Officer, NA, SAP

Eliot Weinman, AI Trends Executive Editor, and John Desmond, AI Trends Editor, recently had an opportunity to interview Martin Mrugal. As chief innovation officer at SAP North America, Marty Mrugal is responsible for SAP’s innovation agenda in the U.S., including the Chief Customer Office, Solution Engineering, Industry and Value Engineering, and the Customer Center of Excellence (CoE) organizations. As the executive sponsor for SAP S/4HANA, Marty is responsible for the North American launch, customer adoption, and success of SAP’s next-generation computing platform. Since joining SAP in 1998, Marty has held a number of diverse management and executive leadership roles. He recently took a few minutes to speak with AI Trends.

Q. What is SAP’s strategic view of AI?

Marty: Our strategic view is to build an intelligent enterprise that unites human expertise and digital insights. Thanks to the trust our customers have placed in us, we have the largest pool of enterprise data in the world. We have a 45-year history of business innovation and operate in 190 countries. SAP innovations help 365,000 customers worldwide work together more efficiently and use business insight more effectively. We believe we are uniquely positioned to transform business with AI and neural networks, to understand all the information in real time and put decisions at the user’s fingertips. Our architecture and data strategy enable us to drive AI into the enterprise.

Also, SAP Leonardo is our innovation platform. It can sit side by side with existing applications, and we are using it to embed AI into applications. This helps solve one of the big challenges in AI, which is adoption. We are building AI into mission-critical applications to drive the intelligent enterprise.

Editor’s note: see the SAP Machine Learning whitepaper.

Q. Can you elaborate on the data you have?

Marty: Some 76% of the world’s transaction revenue touches an SAP system, including ERP and other mission-critical systems. For many clients, the system of record is SAP. They can now take that information into the intelligent enterprise and continuously improve.

Q. What is the update on the S/4HANA cloud release?

Marty: In 2017, we made announcements around contextual analytics, machine learning, and digital assistant capabilities for S/4HANA, which is our next-generation intelligent enterprise platform.

We now have 7,000 customers, which makes S/4HANA the fastest growing application in the history of the company, with adoption by large and small companies. It positions a company to take advantage of data, intelligence and next-generation analytics. It is a platform for innovation.

S/4HANA can be deployed on premise, through the public cloud, or through a private cloud.

Q. What are some innovative ways customers are applying AI?

Marty: BASF, the largest chemical producer in the world, has taken SAP Cash Application and applied it to increase efficiency in its finance organization by improving the collections process and improving cash flow for accounts receivable. With traditional applications, you can match invoices at an average rate of about 40%. With more intelligent algorithms, you might be able to get to 70%. Then they applied machine learning to finance and accounts receivable and it went to 90% plus. [Invoice matching refers to matching incoming payments to open invoices.]

BASF will tell you that doing it the old way, in a rules-based environment, is difficult. Rules get outdated and are hard to maintain. With machine learning, you are continually applying new rules that are integrated into the finance suite.
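The interview gives only the match rates, not the mechanics. As a rough, hypothetical sketch of the general approach (not SAP Cash Application’s actual implementation), a learned classifier can score candidate payment-invoice pairs on simple features instead of relying on hand-written rules; the features, data, and scikit-learn model below are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of ML-based invoice matching: score candidate
# (payment, invoice) pairs with a classifier instead of brittle rules.
# Features per pair: amount difference, days between payment and
# invoice date, and a 0..1 similarity of the payment reference text
# to the invoice number. Training data is synthetic.
X_train = np.array([
    [0.00, 1, 1.0],    # exact amount, next day, exact reference -> match
    [0.02, 3, 0.9],    # small discount taken                    -> match
    [150.0, 40, 0.1],  # wrong amount, stale, bad reference      -> no match
    [75.5, 20, 0.3],   #                                         -> no match
    [0.00, 5, 0.7],    #                                         -> match
    [200.0, 60, 0.0],  #                                         -> no match
])
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

candidate = np.array([[0.01, 2, 0.95]])   # incoming payment vs. an invoice
p = clf.predict_proba(candidate)[0, 1]
print(f"match probability: {p:.2f}")
# Pairs above a confidence threshold clear automatically; the rest go
# to a human, which is how the automated match rate can climb over time.
```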

We have a roadmap across our finance processes and applications, where we identify where machine learning can provide breakthrough innovation for our clients.

We call it Lights Out Finance. You can reduce the amount of keyboard typing and retyping by applying machine learning techniques to finance processes.

Q. Is that to automate the process to save time or to accelerate payment collection?

Marty: It’s to improve the integrity, quality, and efficiency of the data. You no longer need so many accounts receivable resources to do manual intervention. You can start slowly and build up as you determine your confidence level. Where do we have the highest degree of matching? We concentrate there. When BASF got to 94% matching, it shocked them.

Q. What will be the impact on the BASF workforce?

Marty: It frees up resources to work on other projects. I compare the impact of AI and machine learning on people in the workforce to agriculture. In the 1800s, agriculture employed about 90% of US workers. By 1910, it was less than 20%. Today it’s about 2% of the workforce working in agriculture. And just as innovation helped agriculture [to be as or more productive with fewer workers], there are now new opportunities for employment. I don’t believe in the doomsday predictions about AI’s impact on the workforce. We know that in every industrial revolution new types of jobs emerge while other types fade away. We must remember that we are in control and we can make wise choices when it comes to what we automate and how fast we automate it. One choice leaders must make is to use technology to amplify human potential – not diminish it.

Q. Are you positioning S/4HANA against cloud services offerings from Google, Amazon, IBM, and Microsoft?

Marty: We have an SAP Cloud Platform supporting our applications. Customers can build on that and partners can build on that, to complement the applications we deliver. When you think about that in relation to Google, Amazon, and IBM, what separates us is that we are interoperable. We have a great partnership with Google, for instance, on shared libraries such as TensorFlow [an open-source software library for dataflow programming].

The SAP Cloud Platform can run on Amazon (AWS), on IBM, on [Microsoft] Azure. Our platform is interoperable across those platforms; all those companies are partners of ours. It makes the partnerships stronger that we can interoperate.

We have opened up our APIs in order to expand our platform. Our clients want us to be an open platform, to integrate across our key suppliers, like Amazon and Google. So that’s a big differentiation for us.

Q. Can you talk about another innovative application from a customer?

Marty: A lot of innovation with AI is happening around computer vision. We built an application, SAP Brand Impact, that analyzes the brand exposure in video and images by leveraging advanced computer vision techniques and AI. Traditionally, it has been a labor-intensive process to analyze video to identify brand air time and how many eyeballs viewed frames and segments, in order to calculate brand exposure.

We have taken that and applied computer vision so that you can take a soccer game broadcast, or a Formula One race, and quickly calculate the brand exposure. As a long-term SAP customer, Audi [the automaker] got early access to the latest SAP solution to produce statistics within its media analysis workflow. We also leveraged our partnership with NVIDIA on deep learning, and Audi found the solution extremely valuable for measuring its brand exposure with a high level of accuracy.
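Neither SAP Brand Impact’s detector nor its pipeline is described in the interview; the sketch below only illustrates the bookkeeping step the passage implies, turning hypothetical per-frame logo detections from some computer-vision model into seconds of on-screen exposure per brand:

```python
from collections import Counter

# Hypothetical post-processing for brand-exposure analysis (not SAP
# Brand Impact's actual pipeline): given per-frame logo detections
# from a computer-vision detector, total up on-screen time per brand.
FPS = 25  # assumed broadcast frame rate

# detections[i] = set of brand logos visible in frame i (fake data)
detections = [
    {"BrandA"}, {"BrandA"}, {"BrandA", "BrandB"},
    set(), {"BrandB"}, {"BrandB"}, {"BrandA"},
]

frames_on_screen = Counter()
for visible in detections:
    frames_on_screen.update(visible)  # count each brand once per frame

for brand, frames in frames_on_screen.most_common():
    print(f"{brand}: {frames} frames = {frames / FPS:.2f} s of exposure")
```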

Editor’s note: see the upcoming Nvidia & SAP webinar.

For retail, we are also looking at video analysis of store shelves, in order to perform auto-replenishment when there is a stock-out. We’re working with a shoe company in China, the Aimickey Shoe Company, which is using machine learning to help customers design their own shoes. They can put on a virtual reality headset and see what a shoe will look like on their foot before they order it. The customer can do a 3D foot scan for the shoe size, select the color, and the shoes are delivered within a week. The shoe company then uses that ordering information to spot hot trends, accelerate its design process, and manage demand and forecasts.

We are also working with some clients on using AI for resume matching. We are finding that when people look at resumes, they have an inherent bias. When you start to use machine learning, you get continually better at it and can really minimize the bias in the screening process, in order to identify the best candidates. This has huge potential. We are working it into SAP SuccessFactors, our human resources suite.

The long and short of it is we are really bullish on AI across the entire enterprise. We are selecting areas where we think it will have the biggest impact first, and working on those.

Q. What would you describe as the current AI blueprint at SAP? Is there any acquisition strategy?

Marty: We have built a culture of innovation here. We are a 45-year-old high tech company. Our products have come from organic innovation as well as acquisitions. When I think of our blueprint going forward, I think of AI across industries. We look for repetitive, time-consuming tasks, where there are difficulties in business processes, where there is excessive data.

Then we look for where there is real-time decision-making, where we can really increase performance. We talked about the Lights Out Finance concept.

Another machine learning and deep learning area we are working on is natural language processing. We have built our own digital assistant, called SAP CoPilot, which will be our digital assistant for the enterprise. You can say, for example, “show me the inventory for the Nike sneaker brand,” and it will pull it up. It is under development. It will work on premise as well as in the cloud. It will integrate with the Siri [from Apple] and Alexa [from Amazon] consumer natural-language processing systems. Right now users interact with it through computers; we are thinking about a hardware incarnation. We understand the enterprise, so our marriage with these companies works.
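How CoPilot parses such a request is not described here. As a hypothetical illustration of the slot-filling idea behind a query like “show me the inventory for the Nike sneaker brand,” the sketch below extracts the product with a single pattern and looks it up in fake inventory data; a real enterprise assistant would use trained language models, not one regular expression:

```python
import re

# Minimal sketch of slot-filling for an inventory query. This is a
# hypothetical illustration, not how SAP CoPilot actually works.
INVENTORY = {"nike sneaker": 1240, "adidas runner": 310}  # fake data

def handle(utterance: str) -> str:
    # Pull the product phrase out of "... inventory for the <product> brand"
    m = re.search(r"inventory for (?:the )?(.+?)(?: brand)?$",
                  utterance.lower().strip())
    if not m:
        return "Sorry, I did not understand that."
    product = m.group(1)
    if product in INVENTORY:
        return f"{INVENTORY[product]} units of {product} in stock."
    return f"No inventory record found for '{product}'."

print(handle("Show me the inventory for the Nike sneaker brand"))
# -> 1240 units of nike sneaker in stock.
```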

Thank you, Marty!

This article is original to AI Trends and copyright © 2017, all rights reserved.

Published at Tue, 28 Nov 2017 15:05:36 +0000


Artificial muscles give soft robots superpowers


Researchers have created origami-inspired artificial muscles that add strength to soft robots, allowing them to lift objects that are up to 1,000 times their own weight using only air or water pressure. Consisting of an inner ‘skeleton’ folded into a specific pattern and enclosed within a plastic or fabric ‘skin’ bag, these muscles can perform a greater variety of tasks and are safer than other models.

Published at Mon, 27 Nov 2017 20:21:03 +0000


How badly do you want something? Babies can tell

Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.

This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.

“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people’s actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve.”

“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.

Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of Science.

To evaluate infants’ intuition regarding what other people value, researchers showed them videos in which an agent (red bouncing ball) decides whether it’s worth the effort to leap over an obstacle to reach a goal (blue cartoon character). (Courtesy of the researchers)

Calculating value

Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.

The Harvard/MIT team wanted to learn more about how and when this ability develops. Babies expect people to be consistent in their preferences and to be efficient in how they achieve their goals, previous studies have found. The question posed in this study was whether babies can combine what they know about a person’s goal and the effort required to obtain it, to calculate the value of that goal.

To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.

The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)

The researchers found the same results when babies watched the agents perform the same set of actions with two different types of effort: climbing ramps of varying incline and jumping across gaps of varying width.

“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.

The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.

“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.

Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people’s actions,” she says. 

The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”

Modeling intelligence

Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.
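The paper’s model is more sophisticated than this, but a toy sketch can show the core logic: each jump the agent accepts puts a lower bound on how much it values the goal, each refusal an upper bound, with cost measured as physical work against gravity. All quantities below are illustrative assumptions, not the researchers’ actual implementation:

```python
# Toy version of effort-based value inference (a simplified sketch).
# Cost of a jump = physical work against gravity: W = m * g * h.
# If the agent accepts a jump, value >= cost; if it refuses, value < cost.
M, G = 1.0, 9.8  # assumed agent mass (kg) and gravity (m/s^2)

def work(height_m: float) -> float:
    return M * G * height_m

# Observed decisions per goal: (wall height in meters, accepted?)
# Mirrors the videos: low/medium walls for goal 1, medium/high for goal 2.
observations = {
    "goal_1": [(0.2, True), (0.5, False)],
    "goal_2": [(0.5, True), (1.0, False)],
}

bounds = {}
for goal, trials in observations.items():
    lower = max((work(h) for h, ok in trials if ok), default=0.0)
    upper = min((work(h) for h, ok in trials if not ok), default=float("inf"))
    bounds[goal] = (lower, upper)
    print(f"{goal}: value in [{lower:.1f}, {upper:.1f}] joules of effort")

# Refusing the medium wall means goal_1 is valued strictly below that
# cost, while jumping it for goal_2 means goal_2 is valued at least that
# much -- so the agent should prefer goal_2, as the 10-month-olds inferred.
print(bounds["goal_2"][0] >= bounds["goal_1"][1])  # True
```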

“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”

Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.

“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”

Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.

“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.

The researchers hope that studies of even younger babies, perhaps as young as 3 months old, and computational models of learning intuitive theories that the team is also developing, may help to shed light on these questions.

This project was funded by the National Science Foundation through the Center for Brains, Minds, and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.


Published at Thu, 23 Nov 2017 19:00:10 +0000


Voice impersonators can fool speaker recognition systems


Skilled voice impersonators are able to fool state-of-the-art speaker recognition systems, which generally are not yet effective at recognizing voice modifications, according to new research. The vulnerability of speaker recognition systems poses significant security concerns.

Published at Tue, 14 Nov 2017 15:48:31 +0000
