Are Machines Learning To Think?

A fascinating piece in today’s New York Times describes what scientists working for Google found after they hooked up a huge network of computer processors:

MOUNTAIN VIEW, Calif. — Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

“This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.

And then, of course, there are the cats.

To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.

The videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

(…)

While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

“A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

“It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.

Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

“The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”
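
To get a feel for the technique behind the headline, here is a deliberately tiny sketch of unsupervised feature learning with a single-layer autoencoder, a member of the family of methods the researchers scaled up. Everything in it is invented for illustration: the “thumbnails” are random noise rather than YouTube frames, and the network has a few thousand connections rather than a billion. The point is only the shape of the idea: no labels, no supervision, just a network adjusting itself until it can reconstruct its input.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data: 10,000 random 8x8 grayscale "thumbnails", flattened
    # to 64-vectors. (The real system sampled frames from 10 million videos.)
    patches = rng.random((10_000, 64))

    n_hidden = 32                                  # hidden units = feature detectors
    lr = 0.1                                       # learning rate (arbitrary here)
    W = rng.normal(0.0, 0.1, size=(64, n_hidden))  # encoder weights
    V = rng.normal(0.0, 0.1, size=(n_hidden, 64))  # decoder weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(20):
        for i in range(0, len(patches), 100):      # mini-batches of 100
            x = patches[i:i + 100]
            h = sigmoid(x @ W)                     # encode
            x_hat = h @ V                          # decode (reconstruct)
            err = x_hat - x
            # Gradient descent on mean squared reconstruction error.
            grad_V = (h.T @ err) / len(x)
            grad_h = (err @ V.T) * h * (1.0 - h)   # backprop through the sigmoid
            V -= lr * grad_V
            W -= lr * (x.T @ grad_h) / len(x)

    mse = float(np.mean((sigmoid(patches @ W) @ V - patches) ** 2))
    print(f"reconstruction error after training: {mse:.4f}")

Each column of W ends up as a learned feature detector, shaped by nothing but the statistics of the input. Run at Google’s scale, with real video frames and a much deeper network, one such detector famously ended up responding to cat faces.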

Now, obviously, there’s a massive difference between what the scientists detected here and the human brain; a computer algorithm, no matter how sophisticated, is not the same as a human brain. Indeed, we may find as we continue experiments like this that there is a difference between thinking and what we call consciousness, and that it may never be possible for machines to make that leap into the conscious world. Or maybe it will be easier than we can possibly imagine.

FILED UNDER: Science & Technology
About Doug Mataconis
Doug Mataconis held a B.A. in Political Science from Rutgers University and a J.D. from George Mason University School of Law. He joined the staff of OTB in May 2010 and contributed a staggering 16,483 posts before his retirement in January 2020. He passed far too young in July 2021.

Comments

  1. Herb says:

    While this is fairly impressive, it apparently needed “an array of 16,000 processors.” So yes, technically cool, but not really an efficient way of finding cat videos….

    One human brain is sufficient, and you don’t even have to be a mentat.

  2. This does seem within the known range of what neural nets can achieve. They have always excelled at pattern matching. 16,000 64-bit processors is a really big net by old standards, and so sure it matches more.

    I thought you were going to link to a different story, actually. This one has both glass half-full and glass half-empty arguments:

    What happened to Turing’s thinking machines?

    The pessimist:

    ”There are a couple of billion computers in the world and we do have enough computing power today if we pooled all of our resources to far outstrip a brain, but we don’t know how to organise it to do anything with it. It’s not just having the power, it’s knowing what to do with it.”

    And without insightful mathematical modelling, Pearl says, certain tasks would be impossible for AI to carry out, as the amount of data generated would rapidly scale to a point where it became unmanageable for any foreseeable computing technology.

    The optimist:

    ”In terms of robotics we’re probably where the world of PCs was in the early 1970s, where you could buy a PC kit and if you were an enthusiast you could have a lot of fun with that. But it wasn’t a worthwhile investment for the average person. There wasn’t enough you could do that was useful. Within a decade that changed, your grandmother needed word processing or email and we rapidly went from a very small number of hobbyists to pervasive technology throughout society in one or two decades.

    ”I expect a similar sort of timescale for robotic technology to take off, starting roughly now.”

    I notice that the popular press loves the optimist more …

    Actually, to put that “we don’t know how to organise it to do anything with it” in the context of the main article, it’s pretty inefficient to dedicate 16,000 full-blown computers to looking for cats.

    It only takes one person, half-paying attention.

    Cue the Mechanical Turk.

  4. Jeremy says:

    I’m of the opinion that, while we will probably have very “smart” computers in the future, we will never have any “sentient” AI. Sure, we could possibly have them run a decent emulation, but the day that a computer actually becomes sentient will either be never or a very, very distant date in the future. I just don’t think it’s possible, definitely not with current understandings of both computer science and of just what the hell consciousness and sentience are.

    Of course, even if we could do it, it would never happen, because everyone would start screaming “SkyNet!” and mobs would descend on those poor developers and shut them down.

  5. grumpy realist says:

    @Jeremy: There’s also been some argument that in order to have actual intelligence you have to have a feedback loop between the “brain” and the physical world.
    Will be interesting to see what happens.

  6. bordenl says:

    I will agree with that. The machine did not know that what it had identified was a cat. It knew that it could distinguish cats from noncats, which is the beginning of concept formation. (A toy sketch of that distinction follows at the end of the thread.)

  7. Ian says:

    Sarah Connor last seen looking over her shoulder nervously…

  8. mattb says:

    @Jeremy: From a cultural perspective (and this is pretty well documented) what is more likely to happen is that every time a machine approaches our understanding of “smart” or “thinking” we (humans) will redefine what it means to “think.”

    A wonderful example of this can be seen in the wake of recent Human vs. Computer match-ups. For years (coming to a head in the Cold War) Chess represented the pinnacle of thinking. Then after Deep Blue, it was decided that winning at Chess was really about computing power and calculating probable outcomes. Now Go is the new model for a difficult game, as it’s much harder to predict outcomes due to the wide range of permutations. (A back-of-the-envelope comparison of the two game trees follows at the end of the thread.)

    Likewise, in the wake of Watson’s win in Jeopardy, many people pointed out how that game show was perfectly set up for a Watson win, as it really relied just on rote memorization and an occasional bit of guessing. In other words, winning Jeopardy was no longer about being a “genius” — it was all about being good at data retrieval, you know… like a machine.

  9. mattb says:

    @bordenl:

    The machine did not know that what it had identified was a cat. It knew that it could distinguish cats from noncats, which is the beginning of concept formation.

    Completely correct from an “internal sense” and completely irrelevant from a “social sense.” The thing about intelligence is that it matters far more that people believe someone is intelligent than that they actually are.

    The problem with Searle’s Chinese Room thought experiment is that it assumes that there is a single, universal truth and that everyone can see inside the box. Similar thing here. To some degree it doesn’t matter if the machine really “knows” that it has identified a “cat” in a self-reflexive way. What matters more, in day-to-day function, is whether the people who interact with it think that the computer “knows” that it has identified a “cat.”
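
A quick back-of-the-envelope on the Chess-versus-Go point in comment 8: the usual rough figures are about 35 legal moves per chess position over a game of roughly 80 plies, against about 250 moves per Go position over roughly 150 moves. Those are standard approximations (Shannon’s classic estimate for chess), not numbers taken from the comment, but they make the gap concrete:

    import math

    def tree_order(branching: int, depth: int) -> float:
        """Order of magnitude (log10) of branching**depth."""
        return depth * math.log10(branching)

    # Standard rough figures: chess ~35 moves per position over ~80 plies;
    # Go ~250 moves per position over ~150 moves.
    print(f"chess: ~10^{tree_order(35, 80):.0f} game-tree positions")    # ~10^124
    print(f"go:    ~10^{tree_order(250, 150):.0f} game-tree positions")  # ~10^360

Search that is merely hard for chess is hopeless for Go, which is exactly the permutations point being made above.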
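
And to make the cats-versus-noncats distinction from comments 6 and 9 concrete, here is a toy logistic-regression classifier on made-up feature vectors. Everything in it (the synthetic data, the sizes, the learning rate) is invented for illustration. The model learns only a boundary between two clusters; the word “cat” exists nowhere in it except in the labels we supply:

    import numpy as np

    rng = np.random.default_rng(1)

    # Two synthetic clusters of 8-dimensional feature vectors standing in
    # for "cat" and "noncat" images. Real systems start from pixels.
    cats = rng.normal(loc=+1.0, scale=1.0, size=(500, 8))
    noncats = rng.normal(loc=-1.0, scale=1.0, size=(500, 8))
    X = np.vstack([cats, noncats])
    y = np.array([1] * 500 + [0] * 500)    # 1 = "cat" -- a label we attach

    w = np.zeros(8)                        # weights
    b = 0.0                                # bias
    lr = 0.1                               # learning rate (arbitrary)

    for _ in range(200):                   # plain batch gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P("cat")
        grad = p - y                              # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()

    accuracy = float(((p > 0.5) == y).mean())
    print(f"separates the clusters with accuracy {accuracy:.1%}")
    # Nothing in w or b "knows" what a cat is; discrimination is not
    # recognition, and recognition is not understanding.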