Pentti O A Haikonen


Only the Impossible Is Difficult Enough




Dr. Pentti O A Haikonen


Adjunct Professor


Department of Philosophy

University of Illinois at Springfield




Machine Cognition – Robot Brains – Machine Emotions – Qualia –

 Conscious Machines – Artificial Consciousness – Philosophy



My YouTube videos can be seen here:


Serious work: demo videos of my cognitive robot XCR-1, a demo of my quasi-quantum computer that can instantly find the factors of given binary numbers, and an idea of what was before the Big Bang.


Entertainment: Some season's greetings videos.


Facebook & others: Many people have invited me to join them on Facebook, LinkedIn, etc. My sincere thanks to those people. However, it is my personal policy not to join Facebook or other web communities. If you wish to communicate with me, please contact me directly via email.





BOOKS IN FINNISH (kirjoja suomeksi):


Pentti Haikonen: Hämeenlinnasta maailmalle. Muistelmat osa I. (2014) (Memoirs, part I. Amusing memories of the Hämeenlinna lyceum, among other things: how the pop singer Hyttinen schooled the Finnish teacher, naughty! Other names familiar from public life appear as well.)


Pentti Haikonen: Outoja juttuja. Viihdekertomuskokoelma (2014) (A collection of light entertainment stories: 25 twisted tales whose endings you will not guess. Well received.)


Available from bookstores by order. Also available directly from me at a good price. Send me an email and ask.




Books and book chapters:


NEW! Now available from Amazon and other sellers



Pentti O Haikonen: Consciousness and Robot Sentience

World Scientific, 2012












Pentti O Haikonen: Robot Brains: Circuits and Systems for Conscious Machines.

Wiley and Sons, UK 2007

From associative neurons and neuron groups to perception circuits, cognitive architectures, machine emotions, natural language in machines, machine consciousness

Contains a little mathematics and some circuit diagrams. Get your soldering iron ready or write some code!


It is useful to read the old book first.






The good old one:




Pentti O Haikonen: The Cognitive Approach to Conscious Machines. Imprint Academic, UK 2003


My background philosophy towards the design of conscious machines – Easy reading, no mathematics here, lots of ideas.

Based on cognitive sciences, engineer’s insights and common sense.




Pentti O Haikonen: Videotekniikka, 1992–1994 (in Finnish). Sold out.




Book Chapter in Visions of Mind, Darryl N Davis (editor), Idea Group Publishing, USA 2005






Journal and conference papers



Haikonen Pentti O. (2014). Consciousness and Robot Sentience: A Response to My Reviewers. International Journal of Machine Consciousness (IJMC). Vol. 6, No. 1 (2014) pp. 71–74. DOI: 10.1142/S1793843014400125

Haikonen Pentti O. (2014). Yes and No: Match/Mismatch Function in Cognitive Robots. Cognitive Computation. Volume 6, Issue 2 (2014), pp. 158–163. DOI: 10.1007/s12559-013-9234-z

Haikonen Pentti O. (2013). Consciousness and Sentient Robots. International Journal of Machine Consciousness (IJMC), Vol. 5, No. 1 (2013) pp. 11–26. DOI: 10.1142/S1793843013400027

Haikonen Pentti O. A. (2012). Consciousness and the Quest for Sentient Robots. Biologically Inspired Cognitive Architectures 2012 Advances in Intelligent Systems and Computing, Volume 196, 19-27, DOI: 10.1007/978-3-642-34274-5_4

Haikonen Pentti O. (2011) Open Questions on Shanahan’s Workspace. International Journal of Machine Consciousness (IJMC). Volume: 3, Issue: 2 (December 2011) pp. 339–341

Haikonen Pentti O. (2011) Too Much Unity: A Reply to Shanahan. APA Newsletter on Philosophy and Computers. Vol. 11, No 1, Fall 2011, pp. 19–20

Haikonen Pentti O. (2011) Flawed Workspaces? APA Newsletter on Philosophy and Computers. Vol. 10, No 2, Spring 2011, pp. 2–4

Haikonen Pentti O. (2011) XCR-1: An Experimental Cognitive Robot Based on an Associative Neural Architecture. Cognitive Computation: Volume 3, Issue 2 (2011), pp 360-366

Haikonen Pentti O. (2010) An Experimental Cognitive Robot. In A. V. Samsonovich et al. (Eds.) Biologically Inspired Cognitive Architectures 2010. IOS Press Amsterdam. pp. 52 – 57

Haikonen Pentti O. (2010) Quasi-Quantum Computing in the Brain? Cognitive Computation. Volume 2, No 2, pp. 63–67

Haikonen Pentti O. (2009) Conscious Perception Missing. A Reply to Franklin, Baars, and Ramamurthy. APA Newsletter on Philosophy and Computers. Vol. 9, No 1 Fall 2009 p. 15

Haikonen Pentti O. (2009) Slippery Steps Towards Phenomenally Conscious Robots. APA Newsletter on Philosophy and Computers. Vol. 8, No 2 Spring 2009 p. 4

Haikonen Pentti O. (2009) The Challenges for Implementable Theories of Mind. Journal of Mind Theory Vol. 0, No 1, pp. 99–110

Haikonen Pentti O. (2009) Machine Consciousness: New Opportunities for Information Technology Industry. International Journal of Machine Consciousness (IJMC). Volume: 1, Issue: 2 (December 2009) pp. 181-184

Haikonen Pentti O. (2009) Qualia and Conscious Machines. International Journal of Machine Consciousness (IJMC). Volume: 1, Issue: 2 (December 2009) pp. 225 – 234

Haikonen Pentti O. (2009) Tekoälyn olemassaolo ja tietoisuus (The existence of Artificial Intelligence and Consciousness) Niin & Näin 3/2009 (in Finnish, publisher: The Society for European Philosophy) pp. 45 - 47

Haikonen Pentti O. (2009) The Role of Associative Processing in Cognitive Computing. Cognitive Computation. Volume 1, Number 1 / March, 2009

Haikonen Pentti O. (2008) Towards "Natural" Natural Language in Machine Cognition.

Haikonen Pentti O. (2007) Essential Issues of Conscious Machines. In Journal of Consciousness Studies, Volume 14, No. 7, pp. 72 – 84

Haikonen Pentti O. (2007) Reflections of Consciousness: The Mirror Test. In Papers from the AAAI Fall Symposium, Technical Report FS-07-01 pp. 67 – 71

Haikonen Pentti O. (2006) Towards Streams of Consciousness; Implementing Inner Speech. In T. Kovacs and J. Marshall (Eds.), Proceedings of the AISB06 Symposium, vol. 1. pp. 144 – 149. The Society for the study of Artificial Intelligence and the simulation of behaviour, UK.

Haikonen Pentti O. (2005) Artificial Minds and Conscious Machines. In: Darryl N. Davis (ed.) Visions of Mind. USA: Information Science Publishing. pp. 254 – 274

Haikonen Pentti O. (2005) You Only Live Twice: Imagination in Conscious Machines. In R. Chrisley, R. W. Clowes & S. Torrance (Eds.), Proceedings of the AISB05 Symposium on Next Generation approaches to Machine Consciousness: Imagination, Development, Inter-subjectivity, and Embodiment. pp. 19 – 25. The Society for the study of Artificial Intelligence and the simulation of behaviour, UK.

Haikonen Pentti O. (2002) Emotional Significance in Machine Perception and Cognition, IASTED International Conference on Artificial Intelligence and Applications (AIA 2002) September 9-12, 2002 Málaga, Spain

Haikonen Pentti O. (2000) A Modular Neural System for Machine Cognition, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks IJCNN 2000 Como 24. – 27. July 2000, 47 – 50.

Haikonen Pentti O. (2000) An Artificial Mind via Cognitive Modular Neural Architecture,  Proceedings of the AISB'00 Symposium on How to Design a Functioning Mind, University of Birmingham 17. – 20. April 2000, 85 – 92.

Haikonen Pentti O. (1998) An Associative Neural Model for a Cognitive System,  Proceedings of International ICSC/IFAC Symposium on Neural Computation NC'98 Vienna 23 - 25 September 1998 pp. 983 - 988

Haikonen Pentti O. (1998) Machine Cognition via Associative Neural Networks, Proceedings of Engineering Applications of Neural Networks EANN'98 Gibraltar 10 - 12 June 1998 pp. 350 – 357

Haikonen Pentti O. (1998) Assessor, a machine with functional consciousness, Toward a Science of Consciousness 1998 "Tucson III" April 27 - May 2, 1998, The University of Arizona, Tucson, Arizona

Haikonen Pentti O. (1994) A Novel Neuron for Associative Neural Networks, Conference on Artificial Intelligence Research in Finland STep-94, Proceedings of Contributed Session Papers pp. 9 - 12, 1994 Finnish Artificial Intelligence Society, Helsinki

Haikonen Pentti O. (1994) Towards Associative Non-algorithmic Neural Networks, Proceedings of IEEE International Conference on Neural Networks ICNN'94 Vol II pp. 746 – 750 Orlando, June 28 - July 2, 1994

Haikonen Pentti O. (1993) Basic Requirements for Cognitive and Conscious Machines, Neural Network Research in Finland Symposium proceedings pp. 19 - 25, 1993 Finnish Artificial Intelligence Society, Helsinki




Useful links:


The pertinent issues of artificial consciousness (machine consciousness) explained:




Excellent site for conscious robots, good content, frequently updated:



The very nice site of Giorgio Marchetti about mind and consciousness:




Suomen Tekoälyseura – Finnish Artificial Intelligence Society





Mathematics: Why are differential forms good for representing the physical world? Terho Max Haikonen explains.

Notes on differential forms and spacetime:







Essays and other stuff






Five Easy Steps Towards Conscious Robots


Consider a robot with environmental sensors that sense the environment and self-sensors that sense the states of the robot itself. This robot could be implemented with various degrees of complexity. Depending on this complexity, the following cases can be distinguished:


1. Simple reflex. A robot detects an obstacle, backs off, turns a little and goes forward again until another obstacle is encountered. Would this be a conscious act? After all, the robot has detected the obstacle and reacted to it, so it must be “aware” of the obstacle? However, even toy robots can act like this, and they are definitely not conscious.


2. Simple reflex with memory. The robot is able to react to obstacles and can record its history. For instance, the robot could learn to negotiate a maze by memorizing the correct turn at each obstacle. Now the robot will be “aware” of the obstacles and the layout of the maze. Is it conscious or merely mechanically executing a string of operations?
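As an illustration of step 2, here is a minimal sketch (all names are hypothetical; this is not code from any actual robot) of a reflex robot that memorizes the turn taken at each obstacle and replays it on later runs:

```python
# Hedged sketch of step 2: a reflex robot that memorizes the correct
# turn at each obstacle index. All names are illustrative.

class MazeRobot:
    def __init__(self):
        self.turn_memory = {}  # obstacle index -> turn that worked

    def encounter(self, obstacle_index, trial_turn):
        """React to an obstacle: replay a memorized turn if one exists,
        otherwise try trial_turn and record it."""
        if obstacle_index in self.turn_memory:
            return self.turn_memory[obstacle_index]
        self.turn_memory[obstacle_index] = trial_turn
        return trial_turn

robot = MazeRobot()
first = robot.encounter(0, "left")    # first run: tries "left"
replay = robot.encounter(0, "right")  # later run: replays "left"
```

The robot is "aware" of the maze only in the sense that its lookup table reproduces past behavior; nothing more is going on.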


3. Perception with meaning and associative memory. The robot perceives the world and can “learn” from experience. An approaching obstacle evokes “images” of past encounters with similar obstacles and their consequences. Some kind of “pain” for dangerous objects and “pleasure” for useful objects is also involved. The evoked “imagery” and the “pain” trigger avoiding actions. In this case the perceived obstacle has a meaning to the robot – something to be avoided in certain ways; something that has “pain” associated with it. Perception is not simple recognition; instead it is an active process of searching for and detecting opportunities and threats. Perceived entities are not merely recognized; instead they remind the robot of things and evoke “imagery” of action, which may or may not be actually executed. The robot is looking for “satisfaction”. The focus of perception and action is controlled by needs and good/bad criteria; this selective control is the attention process.
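Step 3 can be sketched as a tiny associative lookup in which a percept evokes stored "imagery" and a pain/pleasure value that selects the action. The names and numbers below are purely illustrative assumptions:

```python
# Illustrative sketch of step 3: percepts evoke associated "imagery"
# and a pain/pleasure valence that selects the action.

associations = {}  # percept -> (evoked imagery, valence)

def learn(percept, imagery, valence):
    associations[percept] = (imagery, valence)

def perceive(percept):
    """An obstacle is not merely recognized: it reminds the robot of
    past consequences, and the valence triggers avoidance or approach."""
    imagery, valence = associations.get(percept, ("unknown", 0.0))
    action = "avoid" if valence < 0 else "approach"
    return imagery, action

learn("sharp_edge", "past collision", -0.8)  # "pain" association
learn("charger", "energy restored", +0.9)    # "pleasure" association

imagery, action = perceive("sharp_edge")
```

Here the percept has a meaning to the robot only through what it evokes, which is the point the step is making.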


4. Perception with associative memory and report. The robot can declare what it is perceiving and doing, both while doing it and afterwards. The robot can associate meaning with the declarations of its peers. The robot can also report its self-sensor percepts. This calls for the use of representational symbols and symbol systems; “words” and “language”.


5. The robot perceives itself perceiving. The robot can perceive its own declarations and understand them. At first this may happen via the environment sensors; later on the declarations may be looped back to the sensory circuits internally. The robot is able to distinguish between percepts that are caused by the environment and those that are caused by internal processes. The perception of internally caused “images” and the associative evocation of further “inner images” lead to the ability to “imagine”. The flow of these “narrative inner images” and their related good/bad value evaluation would constitute a kind of “mental content”. On the other hand, the robot would have needs to satisfy and active as well as latent “imagery” of tasks to be executed. All these would constitute the “mind” of the robot. At this point the robot seemingly has most, if not all, hallmarks of consciousness.
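A minimal sketch of the inner feedback of step 5, in which the robot's own declarations are looped back as internally tagged percepts (all names are hypothetical):

```python
# Sketch of step 5: the robot's own declarations are fed back to its
# perceptual stream, tagged so that it can tell internal percepts
# from external ones.

class InnerLoopRobot:
    def __init__(self):
        self.percepts = []  # (content, source) pairs

    def sense(self, stimulus):
        self.percepts.append((stimulus, "external"))

    def declare(self, content):
        # Instead of acting the declaration out for the environment
        # sensors, feed it back internally ("silent inner speech").
        self.percepts.append((content, "internal"))

    def introspect(self):
        return [c for c, src in self.percepts if src == "internal"]

r = InnerLoopRobot()
r.sense("obstacle ahead")
r.declare("I am avoiding the obstacle")
inner = r.introspect()
```

The distinguishing tag is what lets the robot separate environment-caused percepts from internally caused ones, as the step requires.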


It should be noted that none of these steps involves the creation of “specific conscious circuits”. Steps 3 and 4 are crucial, but clearly within the realm of engineering, and can be realized by proper application of associative memory and system architecture. The final step 5 does not actually involve any additional hardware; instead an additional mode of operation is required: the inner feedback that enables the robot to perceive its mental content silently, without the need to act it out for the environmental sensors. It should also be noted that the resulting phenomenon that has the characteristics of “real consciousness” is not a circuit, not an observer, and not a causal agent; instead it is a content-level mode of operation. It is the way in which the system perceives itself perceiving.




Summary of the Haikonen Model of Machine Consciousness


The Haikonen cognitive machine is characterized by: distributed signal representation, associative processing and learning, a perception process, sensory attention and inner attention, the flow of inner speech, inner imagery and its equivalents in other sensory modalities, evaluation of significance, basic system reactions, machine emotions, motivation, imagination, and “mirror neuron” action. This machine utilizes an architecture characterized by: sensory preprocessing circuits that derive distributed representations from sensors, a very large number of introspective feedback loops that detect sensory information and broadcast it to other loops, associative cross-coupling of these loops, and attention control via a large number of variable thresholds. The key concepts are summarized here.


Reception.  The system receives signals from sensors.


Detection. The system discriminates received signals from noise and interfering signals (non-linear threshold mechanisms, etc.).
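A toy sketch of such a threshold mechanism; the threshold value is an arbitrary illustrative choice, not one taken from the actual architecture:

```python
# Minimal sketch of the detection step: a non-linear threshold that
# separates signal from background noise.

def detect(signal_levels, threshold=0.5):
    """Pass only levels that exceed the threshold; weaker activity is
    treated as noise and suppressed to zero."""
    return [s if s > threshold else 0.0 for s in signal_levels]

detected = detect([0.1, 0.7, 0.4, 0.9])
```

In the architecture described above, many such thresholds would be variable, which is what gives the attention-control mechanism its handle on the signal flow.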


Perception. The system augments signal detection with attention, experience, context and expectations (feedback mechanisms). A percept is the “official” result of the perception process. Percepts have direct causal meaning: they depict the physical entity that caused them.


Introspection. The system can only acquire information via sensory circuits. Therefore it is not inherently aware of its inner thinking processes. There is an exception, though: certain match/mismatch conditions must be detected at the neural level, and this information must be made available without actual sensors. The winning results of the actual thinking processes may manifest themselves as motor actions, which in turn may be perceived via sensors. Thus the system may become aware of the products of its thinking processes via external feedback. This is awkward, and therefore a method that enables the system to introspect its mental content internally is required. This can be achieved by internal feedback that returns the mental content in the form of equivalents of sensory percepts: auditory percepts as inner speech and inner music; visual percepts as inner imagery; body sensors as imagined movements; inner sensors as emotional states. These inner "representations" are not independent of each other; inner speech is related to inner situational representations, etc.


Attention. Only a small number of entities may be actively processed at a time. The selective process is called attention. Sensed entities are selected by sensory attention, mental entities are selected by inner attention.


Affordance. A percept that evokes possibilities for use and action; this necessitates the activation of cross-connections between various modalities.


Cognition. The association of auxiliary meanings with percepts, the use of percepts as symbols, the manipulation of these, reasoning, response generation, language.


Imitation. Imitation is the ability to reproduce seen actions or heard sounds. This is achieved via “mirror neuron action”: sensory signals activate the proper motor neurons.


Imagination. Imagination is the forming and manipulation of conscious mental representations of actions and entities that are not sensorily present. Actions and their consequences may be imagined and evaluated. Imagination and imitation utilize the same motor neuron connections; in imagined “mirror neuron action” the evoking representations have an inner cause instead of a sensory origin.


Learning. The acquisition of mental entities and of connections between them, motor routines included.


Emotions. Emotions are seen as combinations of basic system reactions (accept, reject, approach, withdraw, ...) triggered by emotional criteria (good, bad, pleasant, painful, etc.). Emotions operate as attention control, motivation, and short-cut templates for the style of action. Emotions affect learning. The Haikonen system reactions theory of emotions is used.
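The system-reactions view can be sketched as a simple mapping from emotional criteria to combinations of basic reactions. The table below is an illustrative assumption, not Haikonen's actual model:

```python
# Hedged sketch of emotions as combinations of basic system reactions
# triggered by emotional criteria. The mapping is illustrative only.

REACTIONS = {
    "good":     {"accept", "approach"},
    "bad":      {"reject", "withdraw"},
    "pleasant": {"accept", "sustain_attention"},
    "painful":  {"withdraw", "disrupt_attention"},
}

def emotional_state(criteria):
    """Combine the basic reactions triggered by the active criteria."""
    reactions = set()
    for c in criteria:
        reactions |= REACTIONS.get(c, set())
    return reactions

state = emotional_state(["bad", "painful"])  # a fear-like combination
```

The point of the combination view is that a named emotion is not a primitive; it is whichever set of basic reactions the current criteria happen to trigger together.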


Language. Language is seen as a description method for external and internal states of affairs. Perceived situations may be translated into linguistic description, linguistic descriptions may be used instead of direct sensory percepts. Inner speech is the system’s internal interactive narration. The Haikonen multimodal model of language is used.


Consciousness. The content of consciousness is created by perception. Awareness of the environment is created by the perception of the environment. Self-consciousness is created by the perception of the body and its processes and by the perception of mental content (introspection). There are no special circuits or locations that would turn entering signals into conscious ones. Each circuit operates basically in the same way whether the overall operation is “conscious” or “non-conscious”. The contents become “conscious” when the various circuits operate in unison and focus attention on the same entity as perceived by the various sensory modalities. This involves a wealth of cross-connections and consequently the forming of associative memories. Therefore the “conscious” event can be remembered for a while and can be responded to and reported in terms of the various modalities, such as sounds, words, gestures, drawings, written text, etc. A conscious percept appears as an affordance.


Phenomenality. It is proposed that the subjective experience, “the feel of being conscious”, is produced by the mode of operation of the perception-centered system: “the system perceiving itself perceiving”. This subjective experience is “apparently immaterial”, which in turn is the direct consequence of the absence of any inner observing core. The material innards are not inspected for the detection of the inner states; instead these states connect directly to other states and cause the evocation of responses. Therefore the material nature of the system is not observed and does not enter into the mental content. The machinery remains transparent, and the subjective experience would therefore be a content-level phenomenon (like information carried by modulation) that is related to the dynamics of attention and the flow of inner representations.



Robots Just Want to Have Some Fun





True machine consciousness and robots with conscious minds would seem to be beyond our reach as long as we cannot answer the basic questions about human consciousness: what is it, is it really an immaterial entity, what is its phenomenal part, what exactly is the feel of pain or pleasure? Until we know this, there is little hope of reproducing consciousness in machines in any plausible way.


Our thoughts and conscious mind seem to be immaterial. Our everyday experience seems to prove this beyond any doubt; we cannot perceive any material processes taking place when we think. We can see and perceive things and actions out there directly as such, without any apparent material medium. Likewise we can hear sounds coming from our environment, again as such. Moreover, our apparently immaterial percepts have phenomenal qualities. We can feel the heat of the sun and the wetness of the rain; we can feel pain and pleasure. The apparent immaterial nature of all this has so far prevented plausible explanations of consciousness.


How can anything immaterial arise from the material brain? Many contemporary theories of consciousness try to equate the processes of the mind with biological brain processes, with patterns of electrical and chemical neural activity. There has been some success there, as today brain activity can be monitored by various methods, such as magnetic resonance imaging and positron emission tomography. Also, certain neural transmitter chemicals, like endorphins, have been associated with the alleviation of pain and the generation of pleasure. However, in this way no answer may necessarily emerge to the question of why the mind appears to be immaterial, why mental entities are about something and have phenomenal flavors, a subjective feel.


Why can't we explain consciousness? I think that this is because we have approached the problem from the wrong direction. Instead of asking what this immaterial consciousness could be, we should ask why we perceive the mind as immaterial in the first place. A successful answer to this question will evaporate all the other problems and make them redundant.


Our knowledge about the physical world comes from our senses: seeing, hearing, smell, taste, touch, etc. Our explanation of consciousness must begin with this: the way in which the brain represents external information to itself. Indeed, researchers like Prof. Igor Aleksander and others have seen the ability to create suitable inner representations from sensory information as an essential prerequisite for consciousness. But are these representations immaterial depictions of the actual entities? If they were not immaterial, then surely we would see some material carrier for them, like some "neural blackboard" or "theater" for our inner eye to observe, but we do not.


It is known that the receptors of the senses generate neural signals in response to their stimulation; these and nothing else are forwarded to the brain. For instance, each photoreceptor on the retina generates a neural signal that corresponds to the illumination of that receptor. However, our visual experience does not consist of odd collections of retinal stimuli or the corresponding neural signal patterns; instead we appear to see actual visual objects out there, without any awareness of the related neural processes. Somehow these neural signals seem able to convey the information content while remaining transparent in themselves, and consequently our percepts and thoughts come to be about real-world entities instead of the neural signal patterns that actually carry them. This effect, I think, is the key to the essence of our conscious mind. Surely the brain must be performing a complicated, perhaps even supernatural trick here; how else could this be explained?


In fact, this trick is not a complicated one. We can consider a simple experiment that illustrates this point. What would we feel if we scanned a rough surface with a rigid stick? It so happens that we would not perceive the vibrations of the stick as such; instead we would perceive the groove patterns of the surface. There are no nerve fibers going through the stick and into our brain, so common sense would say that we could only feel the vibrations of the stick against our fingers. This, of course, is what happens, but these vibrations are caused by and contain information about the actual roughness of the surface, and this is what we perceive. The rigid stick remains transparent, not because of any complicated trick, but by the sole virtue of its rigidity, the ability to preserve vibration information.

In a similar fashion the neural signals from the senses are "rigid". They are able to convey sensory information in a transparent way. This sensory information is carried like modulation on a radio wave; it is the music that we hear, not the carrier wave. However, the carrier is necessary; neurons are needed to carry and switch neural signals even though we cannot perceive these without external means. The "phenomenal" information is carried by the neural signals; the system operates on this information only, not on the physical nature of the carrying medium. Therefore our thoughts are about something instead of being mere neural firings. Thinking and reasoning are thus based on the interaction of the carried information content, the modulation patterns of the neural signals. The brain as a higher-level system does not have to be able to perceive its material basis; the actual nature of the carrier medium and hardware does not enter into the logic of the thought flow.
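The carrier-transparency point can be illustrated with a toy modulation sketch: the same message content is recovered from two entirely different carriers, and the carrier itself never appears in the recovered content. The encoding scheme and numbers are arbitrary:

```python
# Toy illustration of carrier transparency: a receiver that operates
# only on the modulation recovers identical content from entirely
# different carriers.

def modulate(message_bits, carrier):
    # Crude amplitude modulation: the carrier is scaled by each bit.
    return [bit * c for bit in message_bits for c in carrier]

def demodulate(signal, carrier):
    n = len(carrier)
    chunks = [signal[i:i + n] for i in range(0, len(signal), n)]
    return [1 if any(chunk) else 0 for chunk in chunks]

bits = [1, 0, 1, 1]
via_carrier_a = demodulate(modulate(bits, [1, -1, 1]), [1, -1, 1])
via_carrier_b = demodulate(modulate(bits, [3, 0.5]), [3, 0.5])
# The recovered content is the same; the carrier never appears in it.
```

Nothing about the carrier survives into the recovered bits, which is the sense in which the medium stays transparent to the content level.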


To understand a radio program we must investigate the content of the modulation, the meaning of the transmitted program. The mere inspection of the radio circuitry will not do; the meaning is on a higher level. In order to understand speech we do not have to study air molecules. Again it is the modulation of the sound that matters, even though without air molecules there would be no sound. In a similar way the mental content arises above the actual physical machinery of the brain. The system will perceive only the apparently immaterial mental content and may therefore arrive at the naïve conclusion of the immateriality of the mind.


The concepts of modulation and circuit transparency are well understood in electronics engineering. Consequently, considering mental entities as modulation patterns carried by neural signals gives an acceptable explanation for the problem of the apparent immateriality and aboutness of these entities. However, the strong phenomenal properties like pain and pleasure cannot be readily explained in this way.


What is pain? When we hurt ourselves, a neural signal is transmitted from the affected pain receptor to the brain, and we feel pain. However, this neural pain signal is actually similar to the other neural signals in the brain. Why then would this signal be felt as painful when other similar neural signals, like those originating from the eyes or ears, do not feel like much of anything? What is the specific feel of pain?


The meanings of sensory neural signals are causally grounded in the outside world, in the properties of the sensed entities. However, the feel of pain is not grounded in this way, because pain is not a property of a sensed entity. Pain receptors do not sense pain; they simply sense cell damage, and the caused signal indicates only that pain is to be evoked. The pain signals themselves do not carry the feel of pain; instead the feel arises from the effects that these signals have on the system, and this in turn depends on the way the signals are connected to the system. Thus the feel of pain is not a representation; instead it is a system reaction. However, we can very well label pain and describe it verbally; we can associate a linguistic representation with it.


What kinds of system reactions would feel like something to the system? Here we must consider one cognitive mechanism that, according to researchers like Prof. John G. Taylor and others, is closely related to consciousness, namely attention. The external world offers numerous stimuli, but in order to respond properly we have to focus our sensory attention on the most pertinent set of stimuli at each moment. Likewise we must focus our inner attention so that a coherent train of thoughts can arise. Pain and pleasure affect attention strongly. Pain demands attention; it disrupts attention that is focused on an ongoing task. Obviously pain signals are transmitted to every part of the frontal cortex, where they try to stop whatever is going on so that something else, something that might stop the pain, could be initiated. This global disruption of attention is necessary, as the pain signal itself does not know what should be done to stop the damage; therefore it has to broadcast its message everywhere and disrupt the attended processes there. It is exactly this general broadcasting that makes us moan and writhe when in pain. I consider this disruptive broadcasting a fundamental property of pain, and I would dare to go as far as to propose that the subjective feel of pain is indeed caused by this attention disruption. Thus, if you were the system, this disruption in its various forms would be what you would report as pain. This proposed link between attention and pain would also explain why pain may be alleviated by focussing attention heavily on unrelated matters, suppressing the disruption in this way.
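The proposed disruption mechanism can be sketched as a pain signal broadcast to every processing module, each of which drops its attended task. Module and task names are hypothetical illustrations:

```python
# Sketch of the proposal that pain is a global attention disruption:
# a pain signal is broadcast to every module and interrupts whatever
# task each module is attending to.

class Module:
    def __init__(self, task):
        self.task = task

    def receive_pain(self):
        # Pain does not say what to do; it only stops the current task
        # so that something that might end the damage can start.
        self.task = "interrupted"

modules = [Module("navigate"), Module("inner_speech"), Module("plan")]

def broadcast_pain(modules):
    for m in modules:
        m.receive_pain()

broadcast_pain(modules)
tasks = [m.task for m in modules]
```

The indiscriminate broadcast is deliberate: the pain signal carries no plan of its own, so disruption everywhere is the only strategy available to it.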


Pleasure's effect on attention is different. Pleasure signals indicate good conditions that should be sustained. Therefore no shift of attention is required or desired, instead attention will be more and more focussed on the activity that produces the pleasure signals. This would also suppress the initiation of any alternative actions.


There is nothing mystical in the apparent immateriality of our minds. The mind appears immaterial as it operates directly on the carried information and cannot perceive the material carrier basis. Each piece of information appears different from the others as the causally grounded meaning is different. The phenomenal pain and pleasure have their specific feel because they are system reactions, not representations. Once we understand this we can begin to consider how the mind works as a system; how the mind utilizes inner speech and imagery, how emotions arise and values emerge, what motivates action.


This approach to consciousness opens up the way for the design of sentient robots with apparently immaterial minds; robots that have the flow of inner speech, inner imagery, sensations and emotions; robots that feel pain and pleasure; robots that just want to have some fun.