
    Redefining Artificial Intelligence

    An examination of how chess, as an experimental technology, influenced the development and perception of artificial intelligence.

    In the 1997 chess match between the then reigning world champion, Garry Kasparov, and IBM’s supercomputer, Deep Blue, the machine secured a decisive victory after six intense games. This was the first time a computer defeated the human world champion in an official tournament setting, and as a result the match was widely publicised. Journalists and academics were polarised: some believed that artificially intelligent machines held tremendous potential in the imminent future, while others regarded the result as an indication that a similar pattern of domination would soon extend beyond the chessboard. This underlying conflict of beliefs motivates the examination of how chess as an experimental technology influenced the development and perceptions of artificial intelligence. Because Deep Blue was one of the public’s first tangible exposures to artificial intelligence, the match paved the way for a wealth of scholarly contention. Ultimately, this essay argues that while chess, adopted into artificial intelligence research for its desirable historical and cultural values, increased scientific enthusiasm and helped the field attain greater social acceptance, it was also a limiting factor that restricted research. Social acceptance of emerging technologies is often vital for their successful integration into day-to-day life; this investigation therefore has far-reaching significance, with potential implications for the future of artificial intelligence development and its role in society.


    An Inspiration for Academic Interest

    The implementation of chess as an experimental technology was no coincidence; it was preconditioned by social and historical associations that already existed within the computing community. Despite involving simple rules working within the confines of 64 squares, the game of chess has roots reaching back roughly one and a half millennia, challenging countless generations of players and becoming a symbolic representation of intellectual prowess (Seirawan, 1997, p.22). Mastering chess is indeed a testament to the strength of many mental faculties: calculation, concentration, logic, pattern recognition, rational thinking, and information retention (Seirawan, 1997, p.22). Owing to these properties, chess gained a “unique and idiosyncratic historical association with mathematics and computing” (Ensmenger, 2012, p.18). Many influential mathematicians and computer scientists, such as Alan Turing and Claude Shannon, were also avid chess players (Barbierato & Zamponi, 2022, p.336; Ensmenger, 2012, p.18; Prost, 2012, p.3). In the mid-20th century, companies in the computer industry like IBM even emphasised chess-playing ability in their hiring processes, believing that it correlated directly with programming ability (Barbierato & Zamponi, 2022, p.336; Ensmenger, 2012, p.18). This social rhetoric that chess mastery implies computational intelligence indirectly perpetuated a high concentration of players among computer programmers and engineers. Capitalising on this familiarity and passion, chess could be shifted into new forms to give “the bleak and the blear side of [AI] sudden luminosity” (McLuhan, 1964, p.267, as cited in Bory, 2019, p.629). The adoption of chess as an experimental technology was therefore grounded in pre-existing associations and preferences.

    Chess held another advantage over rival research candidates: a well-developed community and culture that provided scientists with accessible information for their research. While chess was certainly not the only game that rigorously tested intellectual faculties, alternative candidates like checkers, Nim, and Go had smaller player bases and were played only in specific geographical regions (Ensmenger, 2012, pp.17-18). Chess’s overwhelming popularity meant that it was more accessible and standardised in nature. For example, the ample theoretical literature that already existed for chess conformed to “a comprehensive system of symbolic notation”, so researchers could feed a system reliable data to “validate its performance against a wide variety of opponents and situations” (Ensmenger, 2012, p.18). Common openings, early sequences, and endgame moves could be stored as pre-installed solutions for certain scenarios, while a database of historical games could be used to evaluate the effectiveness of modifications to algorithms (Ensmenger, 2012, p.18). Moreover, the World Chess Federation adopted the Elo rating system in 1970, a fairer and more efficient method of calculating players’ relative skill than its predecessor (Ensmenger, 2012, pp.18-19; Prost, 2012, p.5). The introduction of this numerical benchmark provided researchers with a way to measure performance, so that computer chess programs could make continual and tangible progress (Barbierato & Zamponi, 2022, p.338; Ensmenger, 2012, p.19). Thus, the implementation of chess as an experimental technology produced a well-defined problem domain guided by unambiguous rules, with clear objectives and measures of success.
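
    To illustrate what this numerical benchmark measures, the standard Elo formulation (given here for reference rather than drawn from the cited sources) computes an expected score from the rating difference between two players and then adjusts each rating in proportion to how far the actual result departs from that expectation:

```latex
% Expected score of player A against player B, and the post-game rating update,
% where S_A is the actual result (1 win, 1/2 draw, 0 loss) and K is an update factor.
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A)
```

    A program that consistently scores above expectation therefore climbs the same scale used to rank human players, which is what made the rating such a convenient yardstick for tracking a machine’s progress.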

    As a result of the aforementioned virtues, chess as an experimental technology inspired great enthusiasm and motivation among scientists working toward the development of artificial intelligence. In 1965, Alexander Kronrod, a Russian mathematician and pioneer of computer chess, prophetically claimed that “Chess was the drosophila of artificial intelligence” at a time when the term artificial intelligence was barely understood (Ensmenger, 2012, pp.5-6). In genetics, the drosophila triumphed as an accessible, familiar, and “controlled microcosm in which to develop the more sophisticated techniques to solve more difficult and significant problems” (Ensmenger, 2012, p.6). Similarly, chess was the preferred experimental technology because it was simultaneously simple enough to be mathematically described and challenging enough to produce important theoretical and algorithmic insights (Ensmenger, 2012, p.6; Prost, 2012, p.2). After Nobel Prize-winning economist Herbert Simon’s discussion of the chess-drosophila analogy in 1973, the metaphor gained prominence and began appearing pervasively in scientific literature (Ensmenger, 2012, p.6). Within a few decades, computer chess had proven itself an immensely productive experimental technology, reflected in the abundance of academic papers and chess programs that were published and created (Ensmenger, 2012, p.6). It became apparent that the widespread integration of chess into artificial intelligence capitalised on pre-existing social and technical frameworks to transform a niche field into one that was rigorously studied. The establishment of computational chess led the scientific community to perceive artificial intelligence as an interesting emerging technology worthy of investigation.

    The idea that chess provided a graspable substratum for more complex systems can be expounded upon by drawing on Knorr Cetina’s (1999) concept of reconfiguration. In Epistemic Cultures, Knorr Cetina (1999) describes how, in a laboratory setting, scientists rarely examine subjects as they appear in nature and instead interact with and experiment on “visual, auditory, or electrical traces, and with their components, their extractions, and their ‘purified’ versions” (pp.26-27). This reconfiguration allows scientists to overcome three significant obstacles to their research: objects appearing in an undesirable state, anchored in an inconvenient location, and bound by a natural cycle of occurrence (Knorr Cetina, 1999, p.27). Essentially, Knorr Cetina (1999) regards laboratories as processes that entail “the detachment of objects from their natural environment and their installation in a new phenomenal field” for greater epistemic accessibility (p.27). In the same way, chess reconfigures many human cognitive and social activities into the confines of 64 squares. Chess simplifies the researcher’s objectives to make them more manageable and capable of producing substantial results (Prost, 2012, p.2). Moreover, Knorr Cetina (1999) also explains how reconfiguration integrates social values by “subjecting natural conditions to a ‘social overhaul’ and deriv[ing] epistemic effects from the new situation” (p.28). Indeed, the next section will analyse how chess led to unexpected consequences for artificial intelligence research, narrowing the field and causing a deviation from its original aim toward social endeavours instead.


    Chess as a Limiting Factor

    While chess drastically increased the accessibility of artificial intelligence research and scientists’ enthusiasm for it, it also led to unanticipated and perhaps unwanted long-term impacts. The choice of an experimental technology depends not only on practical factors such as abundance, availability, and familiarity, but also on which aspects of a discipline the particular technology reveals in relation to broader contexts (Ensmenger, 2012, pp.6-7). Just as the establishment of drosophila as a primary experimental organism accentuated a particular research agenda, transmission genetics, the success of computer chess heavily emphasised deep-tree searching and minimax algorithms (Ensmenger, 2012, p.7; Hankey, 2021, p.65). In the mid to late 20th century, these techniques dominated many aspects of artificial intelligence research and even overshadowed other problem domains (Ensmenger, 2012, p.7). Where chess deviates from drosophila, however, is that while drosophila spearheaded leaps in our understanding and had far-reaching implications for 20th-century biology, chess failed to produce any fundamental theoretical insights (Ensmenger, 2012, p.7). The brute-force computational techniques that were effective in securing victories at chess tournaments eventually “distracted researchers from more generalizable and theoretically productive avenues of AI research” (Ensmenger, 2012, p.7). The focus on tournament chess therefore, in certain ways, limited scientists’ engagement with artificial intelligence.
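
    To make the technique concrete, the sketch below shows the core of a fixed-depth minimax search with alpha-beta pruning, the family of brute-force algorithms described above. It is a minimal illustration operating on a toy game tree, not a reconstruction of Deep Blue’s actual search; a real engine adds chess-specific move generation, a hand-tuned evaluation function, and, in Deep Blue’s case, special-purpose hardware on top of this basic idea.

```python
# Minimal minimax with alpha-beta pruning over a toy game tree.
# Inner lists are decision points; numeric leaves are static evaluations
# of the resulting positions (illustrative sketch only).

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):          # leaf: return its static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:               # the minimizer would never allow this branch: prune
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Two-ply example: the maximizer picks the branch whose worst reply is least bad.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))       # -> 3 (the 9 leaf is pruned, never examined)
```

    Deepening such a search is largely a matter of raw speed, which helps explain why, as this section argues, tournament success rewarded ever-faster hardware rather than new theory.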

    Indeed, this narrow approach is reflected in the development of IBM’s Deep Blue supercomputer. Although Deep Blue was an incredible piece of computer technology, its developers were too focused on bolstering raw hardware calculation power rather than revamping the algorithmic approach to increase efficiency. In the end, Deep Blue comprised 30 processors capable of computing 11,380,000,000 floating-point operations per second (Ensmenger, 2012, p.22). Each processor contained 16 customised chips designed specifically for chess evaluation, making the machine capable of calculating 200 million positions per second (Ensmenger, 2012, p.22). Yet, despite this impressive raw power, Deep Blue was rather incompetent at solving any real-world task (Hankey, 2021, p.62). The problem was that while Deep Blue’s minimax algorithms produced effective moves, those moves were not made for the right reasons; the machine still lacked intuition and logic, the vital characteristics that differentiated how people and machines play the game (Ensmenger, 2012, p.23; Hankey, 2021, p.61). The purpose of chess as an experimental technology was to represent a “deliberate attempt to simulate human thought processes” (Ensmenger, 2012, p.23). Although computer chess was effective in providing motivation and encouraging enthusiastic experimentation, it failed to deliver on its promise to provide a platform for exploring the “underlying mechanisms of human intelligence” (Ensmenger, 2012, p.23). Thus, though one cannot deny that Deep Blue was an engineering marvel, its researchers’ focus on chess nonetheless had the undesirable consequence of bottlenecking the potential of artificially intelligent machines.

    This phenomenon can be interpreted through Espeland & Sauder’s (2007) idea of reactivity, which suggests that people subjected to evaluation and comparison are likely to change their behaviour and approach. In their research, Espeland & Sauder (2007) examined the effect of social measures by analysing two prominent mechanisms that induce reactivity: “self-fulfilling prophecy” and “commensuration” (pp.11-12, 16). A “self-fulfilling prophecy” is “a false definition of the situation evoking a new behavior which makes the originally false definition of the situation come true” (Espeland & Sauder, 2007, p.11). “Commensuration”, on the other hand, refers to the quantitative measurement and comparison of qualitative attributes through a common metric (Espeland & Sauder, 2007, p.16). Focusing on the case of media rankings of US law schools, the authors reveal how these mechanisms can bring about unintended and alarming consequences (Espeland & Sauder, 2007, p.2). The effect of “self-fulfilling prophecies” is made apparent by how schools react to prior rankings and adjust funding decisions within the university to conform to specific criteria (Espeland & Sauder, 2007, p.12). Meanwhile, the radical simplification of commensuration results in “decontextualized [and] depersonalized numbers” that are “more easily [circulated and] remembered than more complicated forms of information” (Espeland & Sauder, 2007, p.18). In effect, schools begin to pay less attention to students’ interests and focus instead on conforming to societal expectations. This social phenomenon ultimately introduces inaccuracy into the very attributes the measures are designed to evaluate and causes subjects to stray from their original goals.

    A similarly reactive pattern is reflected in the development of computational chess. Computer chess, and especially tournaments pitting human players against artificially intelligent machines, began receiving massive media coverage in the 1990s, and with it grew the belief that a machine victory was imminent. Realising that their work was receiving more publicity and social attention, the engineers working on computer chess systems were affected by the “self-fulfilling prophecy”. Researchers began diverting energy from producing useful theoretical insights to instead pursue absolute glory: defeating the reigning human chess champion, Garry Kasparov. Furthermore, the Elo rating system, as a commensuration method, was rather one-dimensional and neglected many important and unmeasurable aspects that indicate intellectual aptitude (Barbierato & Zamponi, 2022, p.336). Subsequently, the vision of exploring cognitive science and machine learning was abandoned, and developers devoted themselves to constructing stronger tournament-winning machines to climb the ranks (Ensmenger, 2012, p.7). Imaginative innovation was also overlooked as researchers went down a reductionist path defined almost solely in terms of tournament victories (Ensmenger, 2012, p.7). In this manner, Deep Blue’s development process was interwoven with a combination of “self-fulfilling prophecy” and “commensuration” that induced reactivity. Just as Espeland & Sauder (2007) questioned the moral implications of universities gaming the statistics (p.36), computational chess similarly became a rather hollow and unfulfilling field that provided minimal contributions to artificial intelligence at large (Ensmenger, 2012, p.7).


    Social Reception and Wider Implications

    The influence of computer chess also rippled out to the general public and eased concerns regarding the threat of artificial intelligence. The mid-20th-century transition into the information age exposed the public to a great range of emerging technologies. With this, the long historical narrative of humanity versus technology began to resonate more and more (Bloomfield & Vurdubakis, 1997, p.32; Kasparov, 2017). This compelling rhetoric became commonplace in popular media, from novels describing our “race against the machines” to movies depicting “a fight or even a war” against technology (Kasparov, 2017). Undeniably, people feared that their jobs would be taken before they too were replaced by their artificial intelligence counterparts (Kasparov, 2017). Yet, counterintuitively, the victory of Deep Blue in 1997 did not further propagate this view; rather, it illustrated to the public that machines still lacked vital human qualities such as intuition and reason (Hankey, 2021, p.65; Kasparov, 2017). Academics and reporters later argued that the machine’s victory was less revolutionary than it initially seemed (Bory, 2019, p.638; Hankey, 2021, pp.65-66). The Washington Post announced that “the truth of the matter is that Deep Blue isn’t so smart. It does not for a moment function in the manner of a human brain”; instead, it is “unconscious, unaware and literally thoughtless” (Achenbach, 1997, as cited in Bory, 2019, p.638). This revelation subsequently calmed social tensions and made people more accepting of the integration of artificial intelligence.

    Additionally, the cultural and creative value of chess helped to ease social perceptions of artificial intelligence. The game of chess, carrying with it well over a millennium of history, perfectly embodies an intellectual human endeavour. Not only does chess demand theoretical mastery, but it also requires certain social capabilities. The top players are said to “play the man as much as the board”, suggesting that understanding an opponent’s weaknesses, playing style, and personality contributes to their success (Stanlaw, 1999, p.161). Hence, the ability of Deep Blue, as a machine, to participate in this fundamentally human activity carries deeper metaphysical significance. Consequently, chess represents “an extraordinary tool to promote, disseminate and symbolically integrate human values into emerging technologies” (Bory, 2019, p.639). The perception of artificial intelligence as an intrusive force of destruction subsequently faded with “the imaginative humanization of the machine” (Bory, 2019, p.638). For many, Deep Blue came to be regarded as a person, a perception that ultimately “entailed the ascription of subjectivity” (Woodward, 2013, p.183, as cited in Bory, 2019, p.638). Furthermore, chess constructs a microcosm in which the future relationship between humans and artificial intelligence can be imagined, tested, and experienced (Bloomfield & Vurdubakis, 1997, p.29; Bory, 2019, p.629). Despite the competitive setting, chess nonetheless creates an interactive environment in which the two participants can explore this new socio-technical landscape (Bory, 2019, p.629). Computational chess therefore depicted artificial intelligence to the public as “harmless, cooperative and empathetic”, infusing it with positive values and social trust (Bory, 2019, p.629).

    The match between Kasparov and Deep Blue also revealed that the respective strengths and shortcomings of man and machine are in fact complementary, opening up the possibility of a cooperative relationship. Kasparov and Deep Blue first played each other in February 1996, when Kasparov secured a four-to-two victory. The sixth game was particularly noteworthy: Kasparov utilised a tactical sequence that Deep Blue’s code simply could not comprehend (Hankey, 2021, p.62). The strategy required a holistic evaluation of the chessboard, which comes naturally to the human mind but breaks down for computer programs, whose algorithms assign values to specific pieces and objectives (Hankey, 2021, p.62). However, cooperation between people and machines could overcome this hurdle: people can use their intuition and reason to harness the extraordinary computational power of machines (Barbierato & Zamponi, 2022, p.342; Kasparov, 2017). The computer’s ability to accurately archive important chess data allowed players to thoroughly analyse historical matches and critical moves (Seirawan, 1997, p.22). Its impressive calculation speed also presented new opportunities for players to evaluate the effectiveness of certain moves and be pointed toward winning positions (Seirawan, 1997, p.22). This vision inspired a new category of chess, advanced chess, in which each human player is aided by a computer chess program (Kasparov, 2017; Seirawan, 1997, p.22). Machines effectively amplified human intellectual capacity, allowing a vast range of new techniques and playing styles to be rapidly invented (Kasparov, 2017). Evidently, computer chess was crucial for the integration of artificial intelligence into society, but more importantly, it opened new realms of possibility for socio-technological advancement. In his 2017 TED talk, Kasparov announced that “intelligent machines [will] turn our grandest dreams into reality” (Kasparov, 2017). As the proverbial man in the man-versus-machine competition, Kasparov offers insights on cooperating with artificial intelligence that powerfully illustrate a positive shift in perception.

    While the 1997 match between Garry Kasparov and Deep Blue was characterised by stiff competition that prompted reporters to dub it “the brain’s last stand”, its long-term implications reveal that computational chess in fact encouraged the progression of human strategy and intelligence. After the 1997 match, Kasparov maintained his revered position in the world of chess, and he retains it to this day (Seirawan, 1997, p.22). In fact, learning from his mistakes in the match against Deep Blue, Kasparov continued to develop his skills and dominate his rivals, eventually attaining his peak chess rating of 2851 in 1999 (Seirawan, 1997, p.22). The public still looks up to human champions rather than machines as icons of intellectual accomplishment (Kasparov, 2017). The game of chess has also become more accessible than ever, attracting new players from every sector of society (Seirawan, 1997, p.22). This is testament to the fact that chess players will not simply stop playing because of technological advancement; future players will embrace the new techniques discovered with the aid of computers (Seirawan, 1997, p.22). As players continue to master the craft of chess, it becomes clear that, under the positive influence of computer chess, artificial intelligence has managed to integrate successfully and harmoniously into society as a tool for our own intellectual advancement.


    Conclusion

    The mid-20th century broadly marked the beginning of the information age, a period characterised by an epochal shift from traditional industry to an economy based primarily on information technology. The emergence of artificial intelligence during this time left many wary, believing it to be an existential threat to humanity. Chess, with its long-standing historical and cultural associations with intellectual prowess, was integrated into this field as an accessible and familiar platform that could provide researchers with knowledge about more complex systems. Within a few decades, chess proved extremely productive in encouraging scholarly enthusiasm for this relatively new field within the scientific community. At the same time, chess also narrowed the scope of artificial intelligence and distracted scientists from their original goal of studying cognitive behaviour, leading them instead to chase the numerical metrics of tournament victories. The ensuing creation of Deep Blue and its publicised match against Kasparov nonetheless eased public concerns by depicting artificial intelligence as harmonious and cooperative. Ultimately, Deep Blue demonstrated that the relationship between people and machines is not so much a conflict as a reluctant harmony that encourages mutual improvement.


    Acknowledgement

    I would like to thank my instructor, Dr. Kathryn E. McHarry, and my fellow classmates of NTW2028 for the amazingly meaningful and insightful discussions throughout the entire semester; without them, the completion of this work would not have been possible. I thoroughly enjoyed the learning process of the NTW module, and it has undoubtedly propelled both my academic and personal growth. Thank you once again, Prof, for your continued patience, guidance, and support.


    Bibliography

    Achenbach, J. (1997, May 10). In chess battle, only the human has his wits about him. The Washington Post. Retrieved October 25, 2022, from http://www.washingtonpost.com/wp-srv/tech/analysis/kasparov/post3.htm

    Barbierato, E., & Zamponi, M. E. (2022). Shifting Perspectives on AI Evaluation: The Increasing Role of Ethics in Cooperation. AI, 3(2), 331–352. https://doi.org/10.3390/ai3020021

    Bloomfield, B. P., & Vurdubakis, T. (1997). The revenge of the object? On artificial intelligence as a cultural enterprise. Social Analysis: The International Journal of Social and Cultural Practice, 41(1), 29–45. http://www.jstor.org/stable/23171730

    Bory, P. (2019). Deep new: The shifting narratives of artificial intelligence from Deep Blue to AlphaGo. Convergence, 25(4), 627–642. https://doi.org/10.1177/1354856519829679

    Ensmenger, N. (2012). Is chess the drosophila of artificial intelligence? A social history of an algorithm. Social Studies of Science, 42(1), 5–30. http://www.jstor.org/stable/23210226

    Espeland, W. N., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113(1), 1–40. https://doi.org/10.1086/517897

    Hankey, A. (2021). Kasparov versus Deep Blue: An Illustration of the Lucas Godelian Argument. Cosmos and History: The Journal of Natural and Social Philosophy, 17(3), 1–8.

    Kasparov, G. (2017). Don’t fear intelligent machines. Work with them. TED. Retrieved October 11, 2022, from https://www.ted.com/talks/garry_kasparov_don_t_fear_intelligent_machines_work_with_them

    Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Harvard University Press.

    McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.

    Prost, F. (2012). On the impact of information technologies on society: An historical perspective through the game of chess. EPiC Series in Computing. https://doi.org/10.48550/arXiv.1203.3434

    Seirawan, Y., Simon, H. A., & Munakata, T. (1997). The implications of Kasparov vs. Deep Blue. Communications of the ACM, 40(8), 21–25. https://doi.org/10.1145/257874.257878

    Stanlaw, J. (1999). [Review of Vygotsky and Cognitive Science: Language and the Unification of the Social and Computational Mind, by W. Frawley]. Language, 75(1), 161–163. https://doi.org/10.2307/417489

    Woodward, K. (2013). A feeling for the cyborg. In R. Mitchell & P. Thurtle (Eds.), Data made flesh: Embodying information (pp. 181–197). Routledge.