How Developments in Computing and Artificial Intelligence Will Symbiotically Influence Each Other, pt. 1

The following was my Extended Essay, written senior year of high school.

Abstract: This paper investigates the checks and balances that arise when attempting to develop artificial intelligence or computational speed independently of one another. Examples drawn from computer gaming, quantum computing, neural networks, the singularity, and the One Machine, among others, lead to the conclusion that it is difficult to advance either field without simultaneously developing the other.

The modern theory of artificial intelligence began with a workshop proposed at Dartmouth College in 1955 and held the following summer. The ENIAC, often considered the first programmable general-purpose computer, had been completed only about a decade earlier, in 1946. The connection between artificial intelligence and computing is immediately apparent: even though the computers of the time had only a tiny fraction of the power needed to execute the concepts discussed at the Dartmouth Conference, discussed they were, and the discussion has not stopped since. As newer, faster computers are developed that can run programs simulating environments in which intelligence emerges, the gap between a CPU and the human brain is quickly being bridged.

But perhaps the true beginnings of artificial intelligence stretch much further back. In the 4th century B.C., Aristotle defined syllogistic logic, setting in motion the study of deductive reasoning as a whole. Essentially, syllogistic logic is a method of proving something from two other known truths (for example: All programming languages are based on logic; HTML is not based on logic; therefore, HTML is not a programming language). This is important not only because logic and reasoning are two of the most fundamental concepts involved in computer programming, but also because this development in logic was also a step toward unraveling how the human mind processes and uses information. It is also important to establish the basis of deductive logic here, because it will serve as the definition of “artificial intelligence” for the rest of the paper. Artificial intelligence should not be understood as just any blatant calculation or simple executable algorithm; if it were, one would have to argue that any simple machine capable of a basic mathematical calculation is, in some way, intelligent. True artificial intelligence is the ability to reason deductively. An algorithm that could complete a statement using syllogistic logic, as in the example above, could be considered intelligent, if only very basically. To develop artificial intelligence, a clear and profound understanding of actual intelligence is no doubt necessary.
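
To make this concrete, the syllogism above can be expressed as a tiny deduction rule. The following Python sketch is purely illustrative: the rule table, the negative_facts set, and the deduce_not helper are inventions for this example, not part of any real AI system.

    # A minimal, hypothetical sketch of the syllogism above. The data structures
    # and the deduce_not() helper are inventions for illustration only.
    rules = {"programming language": "based on logic"}   # "All programming languages are based on logic."
    negative_facts = {("HTML", "based on logic")}        # "HTML is not based on logic."

    def deduce_not(subject, category):
        """Conclude 'subject is not a <category>' when the subject lacks a property
        that every member of the category must have."""
        required_property = rules.get(category)
        if required_property is None:
            return False  # no universal rule about this category, so nothing follows
        return (subject, required_property) in negative_facts

    print(deduce_not("HTML", "programming language"))  # True: "Therefore, HTML is not a programming language."

Even this toy rule is a very limited instance of deductive reasoning: it reaches a conclusion that was never stated directly in its data.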

Intelligence is defined as the ability to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn. By breaking the broad term “intelligence” down into its more definable subsets, it becomes easier to see what an artificial intelligence would actually have to encompass to be considered fully intelligent. It would have to be able to reason, that is, not only know a fact but find reasons to believe that fact. It would have to devise multi-step processes on its own, i.e., to plan. It would have to solve problems, finding solutions to conundrums. It would have to be able to generalize and make abstractions, and therefore understand whole concepts or ideas without needing to consider every detail of them at once (an impossible feat indeed). Lastly, the algorithm would have to be able to use a language of its own, with syntax and grammar, and use this language to learn about the world around it. Is all this possible? If it is, it goes without saying that extraordinarily fast processing would be needed for all of these faculties to come into play at once. As humans we use all of these facets of intelligence every second, and an artificially intelligent system would have to do the same to become fully capable of thought.

Many computer scientists hypothesize that the complex algorithms needed to properly imitate intelligence would require computers far more powerful than the silicon-based ones we have today. Moore’s law states that the number of transistors on a chip doubles approximately every 18 months, and it has held roughly true since it was formulated in 1965. This exponential increase in processing power has produced not only more powerful chips, but smaller ones as well, allowing computing to enter every aspect of our lives, from faster home computers to watches that connect to the internet and deliver the weather to our wrists. Most scientists agree, however, that Moore’s law will not be sustainable much longer: if silicon transistors are made much smaller than they already are, they lose a great deal of power and accuracy. Phenomena like quantum tunneling arise when the barriers in a circuit are made too thin, letting electrons leak out of the system and costing it power and efficiency. In other words, transistors stop gaining efficiency once they shrink past a certain point, because they begin to leak electrons. Many scientists think this limit may ultimately be irrelevant, given the enormous potential of emerging non-silicon computing systems.
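
To give a rough sense of what that doubling implies, here is a small, purely illustrative Python calculation. The starting transistor count (on the order of an early-1970s microprocessor) and the 18-month doubling period are assumptions chosen for this sketch, not measured data.

    # Illustrative projection of the doubling rule described above.
    # The starting count and doubling period are assumptions, not measurements.
    start_transistors = 2_300
    doubling_period_months = 18

    def projected_transistors(months_elapsed):
        """Transistor count after a given number of months, assuming steady doubling."""
        doublings = months_elapsed / doubling_period_months
        return start_transistors * 2 ** doublings

    # Thirty years (360 months) is 20 doublings, roughly a million-fold increase,
    # taking the count from thousands of transistors into the billions.
    print(round(projected_transistors(360)))

Exponential growth of this kind is exactly why a physical floor on transistor size matters: the curve cannot keep climbing once the hardware runs out of room to shrink.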

This will continue in pts. 2 and 3, in which I will discuss quantum computing and compare human cognitive deduction to computational abilities.
