Whether machines can think is the question that drives all of artificial intelligence, and it has been discussed, debated, and disputed for decades. This article surveys some famous thought experiments about computer intelligence, contrasts human intelligence with artificial intelligence, and considers what consciousness is and whether machines can attain it.

The question of whether a computer or a machine can think has long perplexed the greatest scientific minds[4], yet we still have no clear answer. Deep learning neural networks have already exceeded human performance at object recognition and pattern matching[12]. But intelligence is not only pattern recognition; it is about understanding and modeling the world around us, solving problems, and building new understanding as we learn more[11]. In this article, we briefly explore whether computers can think, and whether they can think like a human.

The Turing Test

In the early 1950s, Turing set aside the thornier philosophical questions about computer intelligence in favor of a straightforward one: can a computer communicate like a human? This question led to the famous “Turing Test”, a game Turing proposed in “Computing Machinery and Intelligence”[19]. In the game, a human judge holds text conversations with players he cannot see, some human and some machine, and evaluates their responses. A computer is considered intelligent if the judge cannot reliably distinguish its conversation from a human’s. Since then, the Turing test has become a standard for measuring computer intelligence. Very few programs have done well on it, and those that did tended to find clever ways to fool the judges. One of the first such programs was ELIZA[21], a chatterbot that, with a short and simple script, misled many people by mimicking a psychotherapist. Parry[5], another chatbot, took the opposite approach and mimicked a patient with paranoid schizophrenia.
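ELIZA’s trick can be sketched in a few lines: scan the input for keywords, then echo fragments of it back inside canned templates. The rules below are illustrative stand-ins written for this sketch, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Keyword -> response-template rules in the spirit of ELIZA's psychotherapist
# script. These particular rules are made up for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template, echoing captured text back."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am unhappy about my job"))
# -> Why do you say you are unhappy about my job?
```

No rule here understands anything about jobs or unhappiness; the appearance of empathy comes entirely from reflecting the user’s own words, which is precisely Searle’s point about symbol manipulation.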

The Chinese Room Argument

Searle proposed this thought experiment in “Minds, Brains, and Programs”[17] as a challenge to the validity of the Turing test. He argued that a program may appear to be intelligent and to understand a language, yet this gives it no real understanding, consciousness, or capacity for thought. Chatbots like ELIZA and Parry may pass the Turing test by manipulating the keywords and symbols of a language, but they do not understand the language, so we cannot equate them with human thinking. According to Searle, the Turing test cannot be the sole way of judging intelligence, since programs can cheat and pass the test without being intelligent at all[9].

Human Intelligence vs. Artificial Intelligence

In 1956, scientists proposed that a “two month, ten man study” of AI would unlock the secrets of the human mind[16], and that we would soon be able to describe every aspect of intelligence so precisely that a machine could replicate it. But more than 65 years later, computers still struggle to replicate the basic cognitive functions of a human baby. On the other hand, we have already designed AI systems like Deep Blue[2] and AlphaGo that have outperformed humans[8]. These achievements, however, apply to very narrow domains and are only as good as the training data. Where AI falls short is in thinking abstractly, applying common-sense reasoning[15][6], or applying knowledge learned in one domain to another[13][1].

More recently, in the 2016 paper “Building Machines That Learn and Think Like People”[11], the authors explored the possibility of designing a computer system that learns and thinks like a human by reverse engineering how the human brain works and learns. They suggested that, rather than building efficient narrow-domain AI for specific tasks, deep learning and other machine learning methods should try to solve problems from small amounts of training data, as people do. Researchers should also evaluate systems on a more human-like, generalized range of tasks rather than the one specific task they were trained for.

Consciousness in AI

For a biological organism, consciousness means being aware of its environment and reacting to it appropriately, to its benefit or to avoid harm. In the paper “From Biological Consciousness to Machine Consciousness”[22], the authors explain how living organisms exhibit several levels of consciousness and how the same principles can be used to make smarter machines[10]. Organisms sense the environment through organs such as eyes and ears; machines can do the same with sensors. Organisms react to the environment by making complex moves with their muscles; machines can accomplish the same with motors and other actuators. By this definition, we can argue that a light-sensing lamp-post that turns on or off according to the natural light counts as the simplest form of conscious machine. Creating the far more advanced and complex consciousness of a human, however, remains well out of reach for modern AI systems.
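On this minimal definition, the lamp-post’s “consciousness” reduces to a single sense-react rule. A sketch, in which the threshold and the sample readings are hypothetical numbers standing in for a real photosensor:

```python
# Minimal sense-react loop for the lamp-post example. The threshold and the
# simulated readings are made-up values, not calibrated sensor data.
LIGHT_THRESHOLD_LUX = 50.0  # assumed switching point between day and night

def lamp_should_be_on(ambient_lux: float) -> bool:
    """Sense the environment (ambient light level) and decide the reaction."""
    return ambient_lux < LIGHT_THRESHOLD_LUX

# Simulated readings over a day: night, dawn, noon, dusk, night.
for lux in [2.0, 49.0, 800.0, 51.0, 5.0]:
    state = "on" if lamp_should_be_on(lux) else "off"
    print(f"{lux:6.1f} lux -> lamp {state}")
```

The gap between this one-line rule and human-level consciousness is the whole point: both are sense-react systems, but they differ by many levels of complexity.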

Artificial General Intelligence (AGI)

Figure 1: Artificial Super General Intelligence (ASGI) surpassing human intelligence sometime in the future

Although the point is debated, many argue that sooner or later our current artificial intelligence will evolve complex consciousness, giving birth to what is termed Artificial General Intelligence (AGI)[7], a state in which computers have intelligence equal to that of an average human[14]. At this level, a computer is smart enough to think about and improve upon itself. There is a chance that, after this point, it will keep improving itself in a runaway fashion without any human intervention until it reaches a level called “Artificial Super General Intelligence” (ASGI) (Figure 1)[18]. This intelligence explosion is called the Singularity[3][20]. Beyond it, computers would surpass the intelligence of the greatest human minds combined.

In this article, we have discussed the conundrum of computer intelligence, starting from the Turing test, the experiments it inspired, and the arguments against it. We have compared human and computer thinking and seen how computers have beaten human performance in narrow tasks but still cannot think like humans. We have tried to understand what machine consciousness is and compared it with biological consciousness. Finally, we have discussed how complex consciousness in AI could lead to the Singularity by creating Artificial General Intelligence (AGI) and then Artificial Super General Intelligence (ASGI).

Our current computing systems may outperform humans in some tasks, but they are not yet smart enough to think like a human. With continuous improvement, however, there is a chance that someday their thinking capability will go well beyond the combined intelligence of the entire human race.




  1. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense Transformers for Automatic Knowledge Graph Construction. arXiv:1906.05317 [cs.CL]
  2. Murray Campbell, A. Joseph Hoane, and Feng-hsiung Hsu. 2002. Deep Blue. Artificial Intelligence 134, 1 (2002), 57–83.
  3. David J. Chalmers. 2010. The Singularity: A Philosophical Analysis. Journal of Consciousness Studies 17, 9-10 (2010), 9–10.
  4. Paul M. Churchland and Patricia Smith Churchland. 1990. Could a Machine Think? Scientific American 262, 1 (Jan. 1990), 32–39.
  5. Kenneth Mark Colby. 1975. Artificial Paranoia – A Computer Simulation of Paranoid Processes (1st ed.).
  6. Ernest Davis and Gary Marcus. 2015. Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence. Commun. ACM 58, 9 (Aug. 2015), 92–103.
  7. Ben Goertzel. 2014. Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence 5, 1 (2014), 1–48.
  8. Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. 2017. When Will AI Exceed Human Performance? Evidence from AI Experts. arXiv:1705.08807 [cs.AI]
  9. Jason Hutchens. 1997. How to Pass the Turing Test by Cheating. Technical Report.
  10. Patrick Krauss and Andreas Maier. 2020. Will We Ever Have Conscious Machines? Frontiers in Computational Neuroscience 14 (2020), 116.
  11. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2016. Building Machines That Learn and Think Like People. arXiv:1604.00289 [cs.AI]
  12. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (01 May 2015), 436–444.
  13. C. Mason. 2010. The Logical Road to Human Level AI Leads to a Dead End. In 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshop. 312–316.
  14. John McCarthy. 2007. From here to human-level AI. Artificial Intelligence 171, 18 (2007), 1174–1182. Special Review Issue.
  15. Marvin Minsky. 1990. Why People Think Computers Can’t. Oxford University Press, Inc., USA, 145–166.
  16. James Moor. 2006. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine 27, 4 (Dec. 2006), 87.
  17. John R. Searle. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3, 3 (1980), 417–424.
  18. Camilo Miguel Signorelli. 2018. Can Computers Become Conscious and Overcome Humans? Frontiers in Robotics and AI 5 (2018), 121.
  19. A. M. Turing. 1950. Computing Machinery and Intelligence. Mind LIX, 236 (Oct. 1950), 433–460.
  20. Pei Wang, Kai Liu, and Quinn Dougherty. 2018. Conceptions of Artificial Intelligence and Singularity. Information 9, 4 (2018).
  21. Joseph Weizenbaum. 1983. ELIZA — a Computer Program for the Study of Natural Language Communication between Man and Machine. Commun. ACM 26, 1 (Jan. 1983), 23–28.
  22. Xue-Yan Zhang and Chang-Le Zhou. 2013. From Biological Consciousness to Machine Consciousness: An Approach to Make Smarter Machines. Int. J. Autom. Comput. 10, 6 (Dec. 2013), 498–505.