06 December 2010

Reflections

The Epilogue of Dennis Shasha and Cathy Lazere's book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists starts with a question - "What qualities of personality and intellect make great computer scientists?" (243). The authors go on to discuss the similarities, differences, and peculiarities of each of the fifteen computer scientists they profiled. Some are very different, some are very similar, and some are truly one of a kind. As I finish the book, I would like to take note of the qualities of personality and intellect I thought Out of Their Minds emphasized and reflect on them as keys to success in science, mathematics, computer science, and engineering. Here is what I have so far:
  • Technical and Curious: A love of mathematics and the sciences is a necessity - one that cannot be overlooked.
  • Industrious and Flexible: The ability to give your all 110% of the time is extremely important in this discipline. Adapting quickly to its ever-changing landscape is just as crucial to success, and once you have adapted, it is time to give your 110% again.
  • Accurate and Precise: Attention to detail is key in many respects - from everyday research to the development of very large-scale systems built on a vast amount of necessary detail.
  • Focused and Goal-Oriented: Research and development can often be deeply frustrating, so an exceptional amount of focus and dedication is required to see a goal through to success.
  • Ethical and Aware: Technologies created by engineers and scientists inevitably affect many people beyond those who build and buy them. Considering the state of society during technological development will help create a better and more peaceful world for all of us.
  • Team Player: Engineers almost never work alone. Understanding your team and keeping lines of communication open while staying on task will always make the job easier and more fun.
  • Risk Taker: Science and engineering are all about taking risks, so a successful scientist learns to take them quickly; this is how many hypotheses are formulated and many cool, useful things are discovered.
  • Lucky: Being in the right place at the right time never hurt anyone. For some of the fifteen great computer scientists the book discusses, luck is exactly what made their careers.
As my own career goes on, I will strive to enhance the skills I currently have and develop the skills I currently lack. This book and the stories of these great intellectuals have certainly helped inspire me to work hard and strive for more. Their stories, each unique and irreplaceable in the world of computing, show both the great successes of the past and the great challenges that lie ahead of us. The work of these people has had a lasting impact, evident even today in our daily lives, as most of us now come into contact with computers every day. Furthermore, the future the book lays out for us is a bright one, and I am curious to see what we will come up with. One thing is surely true: our technological future will be spectacular. SIC ITUR AD ASTRA!

27 November 2010

The Winner of a Twenty-Year Bet

"How many people have in their lives a 2 to 10 percent chance of dramatically affecting the way the world works? When one of those chances comes along, you should take it."
- Douglas Lenat
From a very early age, science was an outlet for curiosity for Douglas B. Lenat, born in Philadelphia in 1950, and fortunately for Lenat, it is a theme that resonated throughout his life. Although he did not receive a very good primary education, he had natural talent: in 1967, for example, he became a finalist in an International Science Fair, judged by a panel of scientists, researchers, and engineers, for his work on the closed-form definition of the nth prime number. In 1968, Lenat entered the University of Pennsylvania, initially pursuing a degree in physics and mathematics. He quickly changed his mind, however, after taking a course in 1971 with John W. Carr III, a computer science professor at the University of Pennsylvania who introduced Lenat to artificial intelligence. Research in the field was just starting out, so Lenat decided to pursue it. In 1972, he entered a PhD program in applied mathematics and computer science at CalTech but promptly realized that much of the work on artificial intelligence at the time was being done at Stanford, so he transferred in order to begin his work with John McCarthy.

Unfortunately, McCarthy left on sabbatical for MIT the same fall that Lenat arrived at Stanford, so Lenat ended up working instead with Cordell Green on a theory of automatic programming that tested falsifiable hypotheses. Falsifiable hypotheses were an extremely important part of artificial intelligence because AI has to "interact directly with the external world" (228); therefore, purely mathematical or logical predictions about these expert systems have to be tested. Lenat also applied these theories, and the theory of heuristics, to his doctoral thesis - a modest Lisp program named Automated Mathematician (AM) that worked out various mathematical concepts; to this day, this thesis remains "one of the most original AI programs ever written" (229). After receiving his PhD in 1976, Lenat built on AM with a project named Eurisko, which brought many interesting applications to the table - including circuit design and minimization, solutions to tactical game problems, and searches for overlooked loopholes in game situations - but never took off in the markets because it could not be extended to other useful domains. In 1984, Lenat left academia to pursue a business opportunity embodied in the project Cyc, an artificial intelligence system meant to hold a large part of the entire existing body of knowledge. A great deal of research, design, and implementation has gone into the project, but its success has yet to be seen. The project is currently considered one of the most controversial in the discipline, yet Lenat still "believes that AI projects like Cyc can become 'knowledge utilities'" in the future (242). Social reception of the technology cannot yet be judged; a much better question is - will Cyc actually work? According to Shasha and Lazere, yes, but only in part, due to various flaws in the current approach to expert systems and our limited understanding of the inner workings of the human mind. With all this evidence in mind, it is clear why Lenat remains "the boldest kind" of explorer of artificial intelligence, leading the way to a bright future in the world of computing (242).

26 November 2010

The Keeper of the Power of Knowledge

"There are three important things that go into building a knowledge-based system: knowledge, knowledge, knowledge. The competence of a system is primarily a function of what the system knows as opposed to how well it reasons."
- Edward Feigenbaum
Edward A. Feigenbaum's story begins with a tragedy. He was born in Weehawken, NJ in 1936, and just before his first birthday, his father passed away. Feigenbaum's stepfather, an accountant at a small local bakery, took on the job of igniting the boy's interest in science and technology, which he did by taking young Feigenbaum to the Hayden Planetarium in New York City once a month to see all the new exhibits. In 1952, Feigenbaum started his college career at the Carnegie Institute of Technology (now known as Carnegie Mellon University), majoring in electrical engineering per his parents' request. Computer science did not yet exist for the average undergraduate at Carnegie, so Feigenbaum "began taking courses at Carnegie's then new Graduate School of Industrial Administration" (210). These courses, taught by professor James March, first introduced him to game theory and to much of the other work done by the Hungarian mathematician John von Neumann. Feigenbaum also had a rare opportunity to attend a course on mathematical models in the social sciences taught by Herbert Simon, a Carnegie professor in the fields of political science, sociology, and economics, as well as a former federal administrator for the Marshall Plan. One day, Simon and his co-lecturer Allen Newell announced that they had invented "a thinking machine" called "the Logic Theorist" and handed out user manuals for the IBM 701 (211). Feigenbaum took the manual home, read it, and finally realized what he wanted to do.

The idea of the Logic Theorist was interesting: the "...program attempted to discover proofs in propositional logic" starting from logic already known to the program and using an educated-guessing problem-solving technique formally called a heuristic by the Hungarian mathematician George Polya (212). Fascinated by these ideas, Feigenbaum stayed on at Carnegie with the School of Industrial Administration until 1960, when he completed his PhD. His doctoral thesis involved more work in the spirit of the Logic Theorist as he attempted to model human problem-solving abilities closely enough to draw conclusions about how people actually solve problems. It turned out to be a very hard problem, but the work was completed under the name Elementary Perceiver and Memorizer (EPAM), and it is still used today at Carnegie Mellon. More specifically, the program modeled how humans memorize pairs of unrelated, nonsense words in a stimulus-response setting. The process included a training portion and a testing portion and, from a psychology standpoint, provided many insights into the workings and abilities of short-term memory. This research led Feigenbaum first to the University of California at Berkeley and then, in 1965, to Stanford, where John McCarthy was doing his work on artificial intelligence.

At Stanford, Feigenbaum began to formulate his thoughts about expert systems. In Computers and Thought, a collection of papers he co-edited with his colleague Julian Feldman, he first began advocating for further exploration of computer-based processes of induction. In 1964, Feigenbaum; Joshua Lederberg, the chairman of the Stanford genetics department; and the Stanford chemist Carl Djerassi began work on a joint project, Dendral, which attempted to develop a "Mars probe that would land on the surface of the red planet and explore for life or precursor molecules" (216). The project, declared successful a year later, is considered the world's first true expert system, capable of determining the chemical structure of molecules better than most humans could. It also laid out the framework for expert systems in general: "a set of data, a set of hypotheses, and a set of rules to choose among the hypotheses" (218). Soon, the group developed all kinds of other expert systems, including Mycin, which was meant to help doctors diagnose infectious diseases and recommend treatments, and airline management systems that supported airport traffic controllers. In the end, the idea of standardized knowledge turned out to be the key to the expert system structure: the more knowledge exists in the system, the better, more efficient, and simply smarter the expert system can be. Despite the extensive work Feigenbaum has done in this area, expert systems are still in their developing stages in the world of computer science, yet Feigenbaum believes that "the expert system will gain its rightful place as an intelligent agent that can cooperate with people to solve some of the world's more challenging problems" (222).
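That three-part framework - data, hypotheses, and rules for choosing among the hypotheses - maps naturally onto a very small program. The sketch below is only my own toy illustration in Python (the facts, hypotheses, and rules are invented; this is not code from Dendral or Mycin), but it shows how little machinery the framework itself requires:

# Toy sketch of the expert-system framework Shasha and Lazere describe:
# a set of data (observed facts), a set of hypotheses, and a set of
# rules that choose among the hypotheses. Domain and rules are invented.

data = {"fever", "cough", "rash"}            # observed facts
hypotheses = ["flu", "measles", "allergy"]   # candidate explanations

# Each rule pairs a hypothesis with the facts that would support it.
rules = [
    ("flu",     {"fever", "cough"}),
    ("measles", {"fever", "rash"}),
    ("allergy", {"rash"}),
]

def choose(data, hypotheses, rules):
    """Rank hypotheses by how many of their supporting facts appear in the data."""
    scores = {h: 0 for h in hypotheses}
    for hypothesis, evidence in rules:
        scores[hypothesis] += len(evidence & data)
    return sorted(hypotheses, key=lambda h: scores[h], reverse=True)

print(choose(data, hypotheses, rules))   # best-supported hypothesis comes first

The choosing logic here is almost trivial, which is exactly Feigenbaum's point in the quote at the top of this entry: the competence of such a system comes from the knowledge packed into its rules, not from how cleverly it reasons over them.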

Artificial Intelligence