AI

Engineers program tiny robots to move, think like insects

While engineers have had success building tiny, insect-like robots, programming them to behave autonomously like real insects continues to present technical challenges. Engineers have recently been experimenting with a new type of programming that mimics the way an insect’s brain works, which could soon have people wondering if that fly on the wall is actually a fly.
Published at Thu, 14 Dec 2017 19:19:23 +0000


Gecko adhesion technology moves closer to industrial uses

While human-made devices inspired by gecko feet have emerged in recent years, enabling their wearers to slowly scale a glass wall, the possible applications of gecko-adhesion technology go far beyond Spiderman-esque antics. A researcher is looking into how the technology could be applied in a high-precision industrial setting, such as in robot arms used in manufacturing computer chips.
Published at Wed, 13 Dec 2017 14:56:17 +0000


Engineers create artificial graphene in a nanofabricated semiconductor structure

Experts at manipulating matter at the nanoscale have made an important breakthrough in physics and materials science. They have engineered “artificial graphene” by recreating, for the first time, the electronic structure of graphene in a semiconductor device.
Published at Tue, 12 Dec 2017 23:41:31 +0000


Four from MIT named 2017 Association for Computing Machinery Fellows

Today four MIT faculty were named among the Association for Computing Machinery’s 2017 Fellows for making “landmark contributions to computing.”

Honorees included School of Science Dean Michael Sipser and three researchers affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): Shafi Goldwasser, Tomás Lozano-Pérez, and Silvio Micali.

The professors were among fewer than 1 percent of Association for Computing Machinery (ACM) members to receive the distinction. Fellows are named for contributions spanning such disciplines as graphics, vision, software design, algorithms, and theory.

“Shafi, Tomás, Silvio, and Michael are very esteemed colleagues and friends, and I’m so happy to see that their contributions have been recognized with ACM’s most prestigious member grade,” said CSAIL Director Daniela Rus, who herself was named an ACM Fellow in 2014. “All of us at MIT are very proud of them for receiving this distinguished honor.”

Goldwasser was selected for “transformative work that laid the complexity-theoretic foundations for the science of cryptography.” This work has helped spur entire subfields of computer science, including zero-knowledge proofs, cryptographic theory, and probabilistically checkable proofs. In 2012 she received ACM’s Turing Award, often referred to as “the Nobel Prize of computing.”

Lozano-Pérez was recognized for “contributions to robotics, motion planning, geometric algorithms, and their applications.” His current work focuses on integrating task, motion, and decision planning for robotic manipulation. He was a recipient of the 2011 IEEE Robotics Pioneer Award, and is also a 2014 MacVicar Fellow and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the IEEE.

Like Goldwasser, Micali was also honored for his work in cryptography and complexity theory, including his pioneering of new methods for the efficient verification of mathematical proofs. His work has had a major impact on how computer scientists understand concepts like randomness and privacy. His current interests include zero-knowledge proofs, secure protocols, and pseudorandom generation. He has also received the Turing Award, the Gödel Prize in theoretical computer science, and the RSA Prize in cryptography.

Sipser, the Donner Professor of Mathematics, was recognized for “contributions to computational complexity, particularly randomized computation and circuit complexity.” With collaborators at Carnegie Mellon University, Sipser introduced the method of probabilistic restriction for proving super-polynomial lower bounds on circuit complexity, and this result was later improved by others to be an exponential lower bound. He is a fellow of the American Academy of Arts and Sciences and the American Mathematical Society, and a 2016 MacVicar Fellow. He is also the author of the widely used textbook, “Introduction to the Theory of Computation.”

ACM will formally recognize the Fellows at its annual awards banquet on Saturday, June 23, 2018, in San Francisco, California.


Published at Mon, 11 Dec 2017 16:00:00 +0000


Insights on fast cockroaches can help teach robots to walk

Scientists show for the first time that fast insects can change their gait — like a mammal’s transition from trot to gallop. These new insights could contribute to making the locomotion of robots more energy efficient.
Published at Fri, 08 Dec 2017 14:55:36 +0000


New robots can see into their future

Researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
Published at Mon, 04 Dec 2017 21:23:35 +0000


Try this! Researchers devise better recommendation algorithm

The recommendation systems at websites such as Amazon and Netflix use a technique called “collaborative filtering.” To determine what products a given customer might like, they look for other customers who have assigned similar ratings to a similar range of products, and extrapolate from there.

The success of this approach depends vitally on the notion of similarity. Most recommendation systems use a measure called cosine similarity, which seems to work well in practice. Last year, at the Conference on Neural Information Processing Systems, MIT researchers used a new theoretical framework to demonstrate why, indeed, cosine similarity yields such good results.

This week, at the same conference, they are reporting that they have used their framework to construct a new recommendation algorithm that should work better than those in use today, particularly when ratings data is “sparse” — that is, when there is little overlap between the products reviewed and the ratings assigned by different customers.

The algorithm’s basic strategy is simple: When trying to predict a customer’s rating of a product, use not only the ratings from people with similar tastes but also the ratings from people who are similar to those people, and so on.

The idea is intuitive, but in practice, everything again hinges on the specific measure of similarity.

“If we’re really generous, everybody will effectively look like each other,” says Devavrat Shah, a professor of electrical engineering and computer science and senior author on the paper. “On the other hand, if we’re really stringent, we’re back to effectively just looking at nearest neighbors. Or putting it another way, when you move from a friend’s preferences to a friend of a friend’s, what is the noise introduced in the process, and is there a right way to quantify that noise so that we balance the signal we gain with the noise we introduce? Because of our model, we knew exactly what is the right thing to do.”

All the angles

As it turns out, the right thing to do is to again use cosine similarity. Essentially, cosine similarity represents a customer’s preferences as a line in a very high-dimensional space and quantifies similarity as the angle between two lines.

Suppose, for instance, that you have two points in a Cartesian plane, the two-dimensional coordinate system familiar from high school algebra. If you connect the points to the origin — the point with coordinates (0, 0) — you define an angle, and its cosine can be calculated from the point coordinates themselves.

If a movie-streaming service has, say, 5,000 titles in its database, then the ratings that any given user has assigned some subset of them defines a single point in a 5,000-dimensional space. Cosine similarity measures the angle between any two sets of ratings in that space.
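As a concrete illustration (not code from the paper), here is a minimal Python sketch of cosine similarity computed over the titles two hypothetical users have both rated; the user names and ratings are invented.

```python
import math

def cosine_similarity(ratings_a, ratings_b):
    """Cosine of the angle between two users' rating vectors,
    restricted to the titles both users have rated."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0  # no overlap: treat the similarity as zero
    dot = sum(ratings_a[t] * ratings_b[t] for t in common)
    norm_a = math.sqrt(sum(ratings_a[t] ** 2 for t in common))
    norm_b = math.sqrt(sum(ratings_b[t] ** 2 for t in common))
    return dot / (norm_a * norm_b)

# Two hypothetical users who overlap on two titles
alice = {"title_1": 5, "title_2": 3, "title_7": 4}
bob = {"title_2": 4, "title_7": 5, "title_9": 1}
print(round(cosine_similarity(alice, bob), 3))
```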

When data is sparse, however, there may be so little overlap between users’ ratings that cosine similarity is essentially meaningless. In that context, aggregating the data of many users becomes necessary.

The researchers’ analysis is theoretical, but here’s an example of how their algorithm might work in practice. For any given customer, it would select a small set — say, five — of those customers with the greatest cosine similarity and average their ratings. Then, for each of those customers, it would select five similar customers, average their ratings, and fold that average into the cumulative average. It would continue fanning out in this manner, building up an increasingly complete set of ratings, until it had enough data to make a reasonable estimate about the rating of the product of interest.
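One way that fanning-out step might look in code, building on the cosine_similarity sketch above (`ratings` maps each user to a dict of {title: rating}). This is only an unweighted illustration with assumed parameters, the number of neighbors per wave and a fixed sample target; the researchers' actual estimator chooses and weights these quantities so that the noise added by each extra hop is balanced against the signal gained.

```python
def predict_rating(target_user, title, ratings, k=5, min_samples=25):
    """Illustrative 'neighbor's-neighbor' estimate: expand outward from
    the target user in waves of the k most cosine-similar unvisited
    users, collecting their ratings of `title` until enough are found."""
    frontier = [target_user]
    visited = {target_user}
    collected = []
    while frontier and len(collected) < min_samples:
        next_frontier = []
        for user in frontier:
            # the k most similar users not yet reached by the expansion
            neighbors = sorted(
                (u for u in ratings if u not in visited),
                key=lambda u: cosine_similarity(ratings[user], ratings[u]),
                reverse=True,
            )[:k]
            for nb in neighbors:
                visited.add(nb)
                next_frontier.append(nb)
                if title in ratings[nb]:
                    collected.append(ratings[nb][title])
        frontier = next_frontier
    return sum(collected) / len(collected) if collected else None
```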

Filling in blanks

For Shah and his colleagues — first author Christina Lee PhD ’17, who is a postdoc at Microsoft Research, and two of her Microsoft colleagues, Christian Borgs and Jennifer Chayes — devising such an algorithm wasn’t the hard part. The challenge was proving that it would work well, and that’s what the paper concentrates on.

Imagine a huge 2-D grid that maps all of a movie-streaming service’s users against all its titles, with a number in each cell that corresponds to a movie that a given user has rated. Most users have rated only a handful of movies, so most of the grid is empty. The goal of a recommendation engine is to fill in the empty grid cells as accurately as possible.
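To make the setup concrete, here is a toy sketch of the kind of mostly empty grid being described; the numbers are invented, not data from the paper.

```python
import numpy as np

# Miniature user-by-title grid: rows are users, columns are titles,
# and NaN marks a cell the user has not rated. Real services have
# millions of rows, thousands of columns, and almost all cells empty.
n_users, n_titles = 4, 6
grid = np.full((n_users, n_titles), np.nan)
grid[0, 1], grid[0, 4] = 5.0, 2.0
grid[1, 1], grid[1, 3] = 4.0, 3.0
grid[2, 4] = 1.0
grid[3, 0], grid[3, 3] = 5.0, 5.0

# The recommendation engine's job is to estimate the NaN entries.
missing = np.isnan(grid)
print(f"{missing.sum()} of {grid.size} cells are empty")
```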

Ordinarily, Shah says, a machine-learning system learns two things: the features of the data set that are useful for prediction, and the mathematical function that computes a prediction from those features. For purposes of predicting movie tastes, useful features might include a movie’s genre, its box office performance, the number of Oscar nominations it received, the historical box-office success of its leads, its distributor, or any number of other things.

Each of a movie-streaming service’s customers has his or her own value function: One might be inclined to rate a movie much more highly if it fits in the action genre and has a big budget; another might give a high rating to a movie that received numerous Oscar nominations and has a small, arty distributor.

Playing the odds

In the new analytic scheme, “You don’t learn features; you don’t learn functions,” Shah says. But the researchers do assume that each user’s value function stays the same: The relative weight that a user assigns to, say, genre and distributor doesn’t change. The researchers also assume that each user’s function is operating on the same set of movie features.

This, it turns out, provides enough consistency that it’s possible to draw statistical inferences about the likelihood that one user’s ratings will predict another’s.

“When we sample a movie, we don’t actually know what its feature is, so if we wanted to exactly predict the function, we wouldn’t be able to,” Lee says. “But if we just wanted to estimate the difference between users’ functions, we can compute that difference.”

Using their analytic framework, the researchers showed that, in cases of sparse data — which describes the situation of most online retailers — their “neighbor’s-neighbor” algorithm should yield more accurate predictions than any known algorithm.

Translating between this type of theoretical algorithmic analysis and working computer systems, however, often requires some innovative engineering, so the researchers’ next step is to try to apply their algorithm to real data.

“The algorithm they present is simple, intuitive, and elegant,” says George Chen, an assistant professor at Carnegie Mellon University’s Heinz College of Public Policy and Information Systems, who was not involved in the research. “I’d be surprised if others haven’t tried an algorithm that is similar, although Devavrat and Christina’s paper with Christian Borgs and Jennifer Chayes presents, to my knowledge, the first theoretical performance guarantees for such an algorithm that handles the sparse sampling regime, which is what’s most practically relevant in many scenarios.”


Published at Wed, 06 Dec 2017 05:00:00 +0000


New algorithm repairs corrupted digital images in one step

Computer scientists have designed a new algorithm that incorporates artificial neural networks to simultaneously apply a wide range of fixes to corrupted digital images. The researchers tested their algorithm by taking high-quality, uncorrupted images, purposely introducing severe degradations, then using the algorithm to repair the damage. In many cases, the algorithm outperformed competitors’ techniques, very nearly returning the images to their original state.
Published at Tue, 05 Dec 2017 19:48:02 +0000


Helping hands guide robots as they learn

Researchers help humans and robots collaborate by enabling real-time interactions that modify a robot’s path to its goal. The study will help robots make the transition from structured factory floors to interactive tasks like rehabilitation, surgery and training programs in which environments are less predictable.
Published at Mon, 04 Dec 2017 14:49:28 +0000
