Artificial intelligence systems developed so far have generally fallen into one of two camps: logic-based or probability-based. Now a researcher at MIT has created a new language, Church, that combines the strengths of both approaches and promises to make AI smarter than ever.
Artificial intelligence researchers back in the 1950s thought of the human mind as a set of rules that could be programmed, and they built systems based on logical inference, e.g., “if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly.”
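As a rough sketch of that style, the toy program below hard-codes the rule “all birds fly” and applies it by forward chaining; the facts, rule encoding, and function names are invented for illustration and are not from the article.

```python
# Toy rule-based inference in the style the article describes; the facts,
# rule format, and names here are hypothetical, not from the source.

facts = {"bird(waxwing)"}
rules = [("bird", "can_fly")]  # "if X is a bird, then X can fly"

def infer(facts, rules):
    """Forward-chain: keep applying rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for fact in list(derived):
                if fact.startswith(premise + "("):
                    arg = fact[len(premise) + 1:-1]
                    new_fact = f"{conclusion}({arg})"
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(infer(facts, rules))  # {'bird(waxwing)', 'can_fly(waxwing)'}
```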
But with rules-based AI, every exception had to be accounted for. The systems couldn’t figure out that there were types of birds that couldn’t fly; they had to be told so explicitly. Later AI models gave up these extensive rule sets and turned to probabilities: “a computer is fed lots of examples of something – like pictures of birds – and is left to infer, on its own, what those examples have in common.”
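By contrast, the probability-based style can be sketched as little more than counting over examples; the tiny dataset below is hypothetical and far simpler than the image collections the quote alludes to, but the estimate comes from the data itself rather than from a hand-written rule.

```python
# Hypothetical toy data for the probability-based style: estimate how common
# a property is directly from labeled examples instead of an explicit rule.

observed_birds = [
    {"name": "robin", "flies": True},
    {"name": "sparrow", "flies": True},
    {"name": "waxwing", "flies": True},
    {"name": "penguin", "flies": False},
]

flying = sum(b["flies"] for b in observed_birds)
p_fly = flying / len(observed_birds)
print(f"Estimated P(a bird can fly) = {p_fly:.2f}")  # 0.75 on this toy data
```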
Church, a “grand unified theory of AI” developed by MIT researcher Noah Goodman, combines both approaches, encoding knowledge as probabilistic rules that are constantly revised as the system encounters new situations:
A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries – and penguins, and caged and broken-winged robins – it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time – much the way humans learn new concepts and revise old ones.
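One way to picture that revision process is simple Bayesian updating. The sketch below uses a Beta-Bernoulli model in plain Python with an assumed prior strength; Church itself is a probabilistic programming language with Scheme-like syntax, so this is only an illustration of the idea, not Church code.

```python
# Illustration only: Beta-Bernoulli updating mimicking the kind of revision
# described in the quote. The prior strength below is an assumed value.

# Prior: mean 0.9999 that a bird can fly, held with only modest confidence.
alpha, beta = 9.999, 0.001

# New experience: cassowaries, penguins, a broken-winged robin (none fly),
# plus a couple of ordinary flying birds.
observations = [False, False, False, True, True]

for flies in observations:
    if flies:
        alpha += 1  # one more flying bird observed
    else:
        beta += 1   # one more flightless bird observed

p_fly = alpha / (alpha + beta)
print(f"Revised P(a bird can fly) = {p_fly:.3f}")  # roughly 0.80 here
```

With a confident but not unshakeable prior, a handful of flightless birds is enough to pull the estimate down noticeably, which is the sort of revision the quote describes.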
Researchers think that Church’s fluidity will help it surpass current AI models, and in a test in which the system was tasked with making predictions based on a set of observations, it did a “significantly better job of modeling human thought than traditional artificial intelligence algorithms did.” Church is still rough around the edges, and while it’s effective at specific operations, it’s too “computationally intensive” to tackle broader brain simulation at this point. But Goodman will continue working on the new system, and in the meantime, it will only be getting smarter.
Source: MIT, Gizmodo.