Computers may need patterns to think better

beggs. Flickr

An international research group including a Charles Sturt University (CSU) expert has investigated the patterns needed to help computers think better, with the results reported in the latest issue of the international journal Nature Scientific Reports.

The results come from ten years of collaboration between the Director of CSU’s Centre for Research in Complex Systems, Professor Terry Bossomaier, and colleagues at the University of Sydney led by Professor Allen Snyder.

“We want to understand how shifts in paradigm occur in human thinking. These shifts occur in individuals when they reach peak performance in such areas as mathematics or finance. In societies they occur as knowledge grows and attitudes change,” Professor Bossomaier said.

“One challenge we faced was to find ways of measuring these shifts. We decided to use the ancient oriental game of Go and study how experts in Go use patterns to remember strategies for the game, and how these might be simulated in a computer program.

“Computers are currently quite good at face recognition, but voice and speech processing still have some way to go. In areas of what we call ‘thinking’, particularly common sense, computers are still quite weak,” Professor Bossomaier said.

“One big difference between human thinking and current computational intelligence is that we use a big library of patterns, built up over the years, to give us a fast intuitive grasp of a situation.

“The great cognitive scientist Herbert Simon, who won the Nobel Prize for Economics, recognised that this means building up chunks of little patterns, and that at least 50,000 of these are needed to reach expert level at anything. We now think it is more than 100,000 patterns.”

The research project aimed to develop a deeper understanding of how these chunks are gained and how they change with experience.

“We also wanted to capture decisions made in the real world, without the restrictive effects of being in an artificial experiment.

“We decided to do this by capturing the moves made in high-level games played online, such as Go,” Professor Bossomaier said.

Chess was long the main domain for studying human expertise, but after the Deep Blue computer defeated then World Champion Garry Kasparov in 1997, interest turned to other games of skill.

“Go is as old as Chess and is played extensively in Asia, especially in Japan and Korea. Human players are still much better than computers. This is an excellent game to study to learn more about what humans do really well,” he said.

The first phase of the research showed that people’s knowledge undergoes dramatic reorganisation when they move from amateur to professional rank.

“The change takes place not just in the areas of ‘deep strategy’, where one would expect the big gains to be; there is also a radical reorganisation at the perceptual level,” Professor Bossomaier said.

“It’s a bit like acquiring a good accent in a foreign language. At quite a young age, the sounds of one’s first language get set and are very difficult to change later.

“The perceptual templates we found for Go are akin to the ‘phonemes’ or sounds of a language. But unlike language, we found that these low-level templates do change with many years of practice.”

Professor Bossomaier believes this also has major implications for education.

“Getting the building blocks right is the key to developing expertise. If we can find these blocks, the templates used by the superstars, we might be able to build them into the early training of professionals.

“Computer games are also being increasingly used in education and training. Insights from studying the most difficult games such as Go can also be fed back into more serious games,” he said.
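The chunk library described in the article can be pictured with a small sketch. The template shapes, labels, and board encoding below are all invented for illustration; this is only a toy model of the idea that expertise means mapping many small local patterns to familiar concepts, not the researchers' actual method.

```python
# Toy illustration of chunk-based pattern recognition in Go.
# '.' is an empty point, 'B' a black stone, 'W' a white stone.
# The "library" maps hypothetical 3x3 perceptual templates to concepts.

TEMPLATES = {
    (".B.",
     "BWB",
     ".B."): "surrounded white stone",   # hypothetical label
    ("BW.",
     "WB.",
     "..."): "crosscut",                 # hypothetical label
}

def extract_3x3(board, row, col):
    """Return the 3x3 window whose top-left corner is (row, col)."""
    return tuple(board[r][col:col + 3] for r in range(row, row + 3))

def recognise_chunks(board):
    """Scan the board and report every stored template that appears."""
    hits = []
    for r in range(len(board) - 2):
        for c in range(len(board[0]) - 2):
            window = extract_3x3(board, r, c)
            if window in TEMPLATES:
                hits.append((r, c, TEMPLATES[window]))
    return hits

board = [
    ".....",
    "..B..",
    ".BWB.",
    "..B..",
    ".....",
]
print(recognise_chunks(board))
```

A real expert's library, on the article's own figures, would hold over 100,000 such templates and retrieve them effectively in parallel, which is precisely what this brute scan does not capture.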


In psychology, this coherent system of patterns is called a frame of reference. Paradigm shifts, and learning in general, occur through constant reflection upon our experiences.
And experiences are nothing more than memories of interpretations of sensory stimuli, interpretations based upon earlier experiences (the frame of reference).
So any interpretation is the making of information, based upon earlier information...reflections.
The learning curve is actually a learning spiral, both in cognition and perception.
The growth of the frame of reference involves constant reorganisation: tens of thousands of daily stimuli pull numerous 'reflection books' from the mind's library, and prove these books true or false according to our own views.
It is a kind of optimisation of perception: a slight overload during the daytime causes a messy library, and this library gets cleaned up and reorganised during our moments of rest.
A developing brain grows in its ability to interpret stimuli, not only by increasing the experience/reflection level or the frame of reference, but also by conditioning how to interpret new stimuli in a general sense: the abstract of interpreting itself. We do not feel the experience in gaining experience; we only feel the material experience itself, because only that information is tangible.
The abstract of interpreting involves a sense of proportion, a sense of quality and quantity, a sense of weight, and a sense of consequence. This is in fact exactly what programmers do in current AI: they formulate these things, and the computer generates results (numbers) in the correct proportions, quantities, and so on.
But information entails something more for humans: we not only have rational parts, but also emotional and symbolic parts, and giving true meaning involves the art of feeling the sensory stimuli. Then it is a matter of developing sensitivity to stimuli, you can call this affect; it is the creative ground for new ideas.
A computer cannot do this; it cannot interpret, it can only go with numbers, and numbers have no meaning except differences in quantity. We can make software that simulates interpretation with a program that produces and manipulates different numbers, and then attach some characteristics to these numbers.
You can make a large software framework or 'thinking space' for a computer, but also then it still goes with numbers only.
It cannot give meaning; meaning has to be put in by human interpretation (it needs input!), and therefore artificial intelligence in essence does not exist.
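The claim that a program only goes with numbers, while meaning is supplied from outside, can be made concrete with a toy sketch. Every function name, weight, and threshold below is invented; the point is only that the computer's part is arithmetic, and the labels are a human's interpretation attached afterwards.

```python
# The computer's contribution: pure number-crunching, a weighted sum.
def score_position(features, weights):
    return sum(f * w for f, w in zip(features, weights))

# The human's contribution: labels that turn a bare number into "meaning".
# These thresholds are arbitrary; the program has no idea what they signify.
HUMAN_LABELS = [
    (0.75, "clearly winning"),
    (0.55, "slightly better"),
    (0.45, "balanced"),
]

def interpret(score):
    """Translate a bare number into a human-assigned label."""
    for threshold, label in HUMAN_LABELS:
        if score >= threshold:
            return label
    return "worse"

value = score_position([0.9, 0.4, 0.7], [0.5, 0.3, 0.2])
print(value, "->", interpret(value))
```

Swap the label table for a different one and the same number "means" something else entirely, which is the point being made above: the interpretation never lived in the machine.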

I would also like to say something about Deep Blue, the chess computer.
I think it is not right that chess should be regarded as an obsolete field of interest just because Deep Blue defeated Kasparov, and that computers should hence be regarded as superior.
To the contrary.
The thing we must understand is that chess relies heavily on calculation and mathematical principles, and these are the strong points of computers (the CPU).
Besides those, we have the abstract parts, like strategy.
Now, computer chess combines this abstract knowledge with calculation power, and it was only this calculation power that decided Kasparov's defeat.
Chess programs nowadays are a lot stronger than Deep Blue, and are programmed by chess masters (or lower!), not chess grandmasters.
The point is that the abstract knowledge is defined and put in place by the chess master, and the sheer calculation power of the computer acts like a power engine driving this abstract knowledge.
So, the abstract knowledge has to come from humans in the first place - the computer is dumb and knows nothing but going with numbers - and this abstract knowledge is perhaps a bit less than the abstract knowledge Kasparov has got himself.
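This division of labour, a human-written evaluation function driven by brute search, can be sketched in a few lines. The toy "game" below (integer states and a made-up move rule) is nothing like chess; it only shows that the search machinery is pure calculation, while all the chess-like knowledge would sit in the evaluation function a human wrote.

```python
# Minimax search: the "power engine" of raw calculation. The evaluation
# function passed in is where all human-supplied knowledge lives.

def minimax(state, depth, maximising, moves, evaluate):
    """Exhaustively search `depth` plies ahead and back up the values."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)  # human judgement, reduced to a number
    values = (minimax(s, depth - 1, not maximising, moves, evaluate)
              for s in options)
    return max(values) if maximising else min(values)

# A throwaway toy game: the state is an integer, each move adds or
# subtracts one, and "good" for the maximiser simply means a bigger number.
toy_moves = lambda n: [n + 1, n - 1]
toy_evaluate = lambda n: n

# With best play by both sides two plies ahead, the gains cancel out.
print(minimax(0, 2, True, toy_moves, toy_evaluate))  # prints 0
```

In a real engine the evaluation function encodes material, pawn structure, king safety and so on, all formulated by people; the computer merely applies it millions of times per second, which is exactly the "power engine" argument above.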
Then the 'superiority' can only depend on calculating power. Well, I cannot compete with a calculator if the answer to (4637.528)^4 has to be found... does that make a calculator superior to human intelligence?
No, the way a computer attacks the chess problem is completely different and not comparable with human thought.
This makes the comparison invalid, and that is why I do not understand why chess should be regarded as obsolete in this respect.