06 December 2010

Reflections

The Epilogue section of Dennis Shasha's and Cathy Lazere's book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists starts with a question - "What qualities of personality and intellect make great computer scientists?" (243). The authors go on to discuss all the similarities, differences, and peculiarities of each of the fifteen computer scientists they wrote about in their book. Some are very different, some are very similar, and some are truly one of a kind. As I finish this book, I would like to take note of all the qualities of personality and intellect I thought were emphasized in Out of Their Minds and reflect on them as keys to success in science, mathematics, computer science, and engineering. Here is what I have so far:
  • Technical and Curious: A love of mathematics and the sciences is a necessity - one that cannot be overlooked.
  • Industrious and Flexible: The ability to give it your all 110% of the time is extremely important in this discipline. Adapting quickly to the ever-changing landscape of the field is just as crucial to success, and once you've adapted, it is time to give your 110% again.
  • Accurate and Precise: Attention to detail is key in many respects - from day-to-day research to developing very large-scale systems with a vast amount of necessary detail.
  • Focused and Goal-Oriented: Research and development can often be very frustrating. Therefore, an exceptional amount of dedication is required in order to succeed in science.
  • Ethical and Aware: Technologies created by engineers and scientists inevitably affect many people, often in unintended ways. Considering the state of society during technological development will help create a better and more peaceful world for all of us.
  • Team Player: Engineers clearly never work alone. Understanding your team and keeping open lines of communication while staying on task will always make the job easier and more fun.
  • Risk Taker: Science and engineering are all about taking risks, so a successful scientist will learn to do this quickly, as this is the process by which many theses are formulated and many cool, useful things are discovered.
  • Lucky: Being in the right place at the right time never hurt anyone. For some of the fifteen great computer scientists the book discussed, luck is exactly what made their careers.
As my personal career goes on, I will strive to enhance the skills I currently have and develop the skills I currently lack. This book and the stories of these great intellectuals have certainly helped inspire me to work hard and strive for more. Their stories, each unique and irreplaceable in the world of computing, showed the great successes of the past and the great challenges that lie ahead of us in the future. The work of these people has had a lasting impact on our future, and that is evident even today, as most of us now come into contact with computers every day. Furthermore, the future that the book lays out for us is a bright one, and I am curious to see what kinds of things we will come up with. One thing is surely true: our technological future will be spectacular. SIC ITUR AD ASTRA!

27 November 2010

The Winner of a Twenty-Year Bet

"How many people have in their lives a 2 to 10 percent chance of dramatically affecting the way the world works? When one of those chances comes along, you should take it."
- Douglas Lenat
From a very early age, science has been an outlet for curiosity for Douglas B. Lenat, born in Philadelphia in 1950, and fortunately for Lenat, science is a theme that resonated throughout his life. Although he did not receive a very good primary education, he had a natural talent. In 1967, for example, he became a finalist in an International Science Fair for his work on the closed-form definition of the nth prime number, which was judged by a panel of scientists, researchers, and engineers. In 1968, Lenat entered the University of Pennsylvania, initially pursuing a degree in physics and mathematics. He quickly changed his mind, however, after taking a course in 1971 with John W. Carr III, a computer science professor at the University of Pennsylvania who introduced Lenat to artificial intelligence. Research in the field was just starting out, so Lenat decided to pursue it. In 1972, he attended Caltech for a PhD program in applied mathematics and computer science but promptly realized that a lot of work on artificial intelligence at the time was done at Stanford, so he transferred in order to begin his work with John McCarthy. Unfortunately, McCarthy left on sabbatical for MIT the same fall that Lenat arrived at Stanford, so instead, Lenat ended up working with Cordell Green on the theory of automatic programming that tested falsifiable hypotheses. Falsifiable hypotheses were an extremely important part of artificial intelligence because AI has to "interact directly with the external world" (228). Therefore, purely mathematical or logical predictions about these expert systems have to be tested. Lenat applied these ideas, along with theories of heuristics, to his doctoral thesis - a modest Lisp program that worked out various mathematical concepts, named Automated Mathematician (AM); to this day, this thesis remains "one of the most original AI programs ever written" (229). After he received his PhD in 1976, Lenat built on AM with a project named Eurisko, which brought a lot of interesting applications - including circuit design and minimization, solutions to tactical game problems, and searches for missed loopholes in game situations - to the table but never took off in the market because it could not be extended to various other useful domains. In 1984, Lenat left academia to pursue a business opportunity embodied by the project Cyc, a piece of artificial intelligence that aims to capture a large part of the entire existing body of human knowledge. A lot of research, design, and implementation have been put into the project, but its success has yet to be seen. The project is currently considered to be one of the most controversial in the discipline, yet Lenat still "believes that AI projects like Cyc can become 'knowledge utilities'" in the future (242). Social reception of the technology cannot yet be judged; a much better question is - will Cyc actually work? According to Shasha and Lazere, yes, but only in part, due to various flaws in the current approach to expert systems and our lack of understanding of the inner workings of the human mind. With all this evidence in mind, it is clear why Lenat remains "the boldest kind" of explorer of artificial intelligence, leading the way to a bright future in the world of computing (242).

26 November 2010

The Keeper of the Power of Knowledge

"There are three important things that go into building a knowledge-based system: knowledge, knowledge, knowledge. The competence of a system is primarily a function of what the system knows as opposed to how well it reasons."
- Edward Feigenbaum
Edward A. Feigenbaum's story begins with a tragedy. He was born in Weehawken, NJ in 1936, and just before his first birthday, his father passed away. Feigenbaum's stepfather, an accountant for a small local bakery, was in turn charged with igniting his interest in science and technology, which he did by taking young Feigenbaum to the Hayden Planetarium in New York City once a month to see all the new exhibits. In 1952, Feigenbaum started his college career at the Carnegie Institute of Technology (now known as Carnegie Mellon University), majoring in electrical engineering per his parents' request. Computer science did not yet exist for the average undergraduate at Carnegie, so Feigenbaum "began taking courses at Carnegie's then new Graduate School of Industrial Administration" (210). These courses, taught by professor James March, were the first to introduce him to the ideas of game theory and much of the other work done by the Hungarian mathematician John von Neumann. Feigenbaum also had a rare opportunity to attend a course on mathematical models in the social sciences taught by Herbert Simon, a professor at Carnegie in the fields of political science, sociology, and economics, as well as a former federal administrator for the Marshall Plan. One day,
Simon, and his co-lecturer Allen Newell, announced that they had invented "a thinking machine" called "the Logic Theorist" and handed out user manuals for the IBM 701 (211). Feigenbaum took the manual home, read it, and finally realized what he wanted to do. The idea of the Logic Theorist was interesting: the "...program attempted to discover proofs in propositional logic" based on other logic already known to the program, using an educated-guessing problem-solving technique formally called a heuristic, a term popularized by the Hungarian mathematician George Polya (212). Fascinated by these ideas, Feigenbaum stayed at Carnegie with the School of Industrial Administration until 1956 when he graduated with his PhD in electrical engineering. His doctoral thesis involved more work with the Logic Theorist as he attempted to model human problem-solving abilities closely enough to draw conclusions about how humans solve problems. It turned out to be a very hard problem, but it was completed under the name Elementary Perceiver and Memorizer (EPAM), and it is still used today at Carnegie Mellon. More specifically, the program modeled how humans are able to memorize pairs of unrelated, nonsense words in a stimulus-response setting. The process included a training portion and a testing portion and, from a psychology standpoint, provided a lot of insight into the workings and abilities of short-term memory. This research led Feigenbaum first to the University of California at Berkeley and then, in 1965, to Stanford, where John McCarthy was doing his work on artificial intelligence. At Stanford, Feigenbaum began to formulate his thoughts about expert systems. In Computers and Thought, a collection of papers that he co-edited with his colleague Julian Feldman, he first began advocating for further exploration of computer-based processes of induction. In 1964, Feigenbaum, Joshua Lederberg, the chairman of the Stanford genetics department, and the Stanford chemist Carl Djerassi began their work on a joint project, Dendral, which attempted to develop a "Mars probe that would land on the surface of the red planet and explore for life or precursor molecules" (216). The project, which a year later had been declared successful, is considered to be the world's first true expert system, capable of determining the chemical structure of molecules even better than most humans could. This project also laid out the framework for expert systems in general: "a set of data, a set of hypotheses, and a set of rules to choose among the hypotheses" (218). Soon, Feigenbaum and his colleagues developed all kinds of other expert systems, including Mycin, which was meant to help doctors diagnose infectious diseases and recommend treatment, and airline management systems, which supported airport traffic controllers. In the end, the idea of standardized knowledge turned out to be key to the expert system structure: the more knowledge exists in the system, the better, more efficient, and simply smarter the expert system can be. Despite the extensive work that Feigenbaum did in this area, expert systems are still in their developing stages in the world of computer science, but Feigenbaum himself believes that "the expert system will gain its rightful place as an intelligent agent that can cooperate with people to solve some of the world's more challenging problems" (222).
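To make the "data, hypotheses, rules" framework a little more concrete, here is a tiny Python sketch of my own (not from the book, and nowhere near a real system like Dendral or Mycin) in which each rule links a set of observed findings to a candidate hypothesis, and the system simply prefers the hypothesis backed by the most evidence:

    # Hypothetical toy rules: each maps a set of findings to a candidate hypothesis.
    RULES = [
        ({"fever", "stiff neck"},       "meningitis"),
        ({"fever", "cough", "fatigue"}, "influenza"),
        ({"sneezing", "runny nose"},    "common cold"),
    ]

    def diagnose(findings):
        findings = set(findings)
        # "Rules to choose among the hypotheses": prefer the rule matching the most evidence.
        matches = [(len(premise), conclusion)
                   for premise, conclusion in RULES if premise <= findings]
        return max(matches)[1] if matches else "no hypothesis supported"

    print(diagnose(["fever", "cough", "fatigue"]))  # prints: influenza

The data are the findings, the hypotheses are the possible conclusions, and the rules decide among them - and, just as Feigenbaum says, the competence of such a system grows almost entirely with how much knowledge (how many rules) it holds.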

Artificial Intelligence


23 November 2010

The Biologist of Computing

"Clearly, the organizing principle of the brain is parallelism. It's using massive parallelism. The information is in the connection between a lot of very simple parallel unit working together. So, if we built a computer that was more along that system of organization, it would likely be able to do the same kinds of things the brain does."
- Daniel Hillis
W. Daniel Hillis's interests in science and technology came from his parents. His father was an epidemiologist while his mother was very interested in mathematics, and both went to great lengths to instill curiosity about these subjects in him. Hillis's curiosity and craftiness shaped his experience with technology during his youth. His first real exposure to the world of digital computing came in the late 1960s, when he had the chance to look at George Boole's An Investigation of the Laws of Thought (1854), which outlined the principles of elementary Boolean algebra. After toying with these ideas, he eventually learned to program. In 1974, he entered MIT "determined to find out how the brain worked," planning to major in neurophysiology (192). At MIT, Hillis met Marvin Minsky and John McCarthy, fully discovered the vast world of computing, and started working at MIT's Artificial Intelligence lab in the LOGO group on a project involving computer technologies that followed the evolutionary principle of emergence, which states that "interacting agents will adapt, through a process of selection, a mechanism for survival" (193). Soon, Hillis's idea for the Connection Machine was born. In an attempt to mimic the massive parallelism used by the brain, the Connection Machine was designed to be a computer made up of thousands of processors all linked together, each with its own control and its own memory. The machine was connected, initiated, and set "free" to run in hopes of discovering the emergence of new, smarter technology from the pre-existing one. A project for a lover of both biology and computing, the Connection Machine was perfect for Hillis, but it is clear that the Machine is only the beginning of our understanding of evolutionary trends in computing.

22 November 2010

The Driver of the Digital Fast Lane

"Speed is exchangeable for almost anything. 
Any computer can emulate any other at some speed."
- Burton Smith
Burton J. Smith's career had a very rough start. Smith, born in 1941 in Chapel Hill, NC, moved with his family to New Mexico when his father, a professor of chemistry, was offered a job as the head of the University of New Mexico's chemistry department. Smith was constantly fascinated by technology, but only after he came back from the military did he know exactly what he wanted to do - design electronic devices. He graduated in 1968 with a B.S. in electrical engineering from the University of New Mexico and went on to MIT, where he completed his doctorate in 1972. A lot of Smith's work from then on focused on optimizing the hardware structures that support pipelining - both pipeline parallelism and multiprocessor parallelism - in computation. Smith and his colleagues at Denelcor, a small computing company based in Denver, Colorado, strove to create a supercomputer that would employ highly efficient parallel processing. In other words, they wanted "to design a machine that would perform an operation as soon as its inputs were ready" (180). They called this approach, which was first developed by Jack Dennis of MIT in the 1970s, "dataflow architecture" (180). After some time in development, it was clear that this approach had a very significant impact on the discipline, as dataflow ideas are also applied to digital memories and to networks of all kinds. Smith's success eventually followed him to the Tera Computer Company in Seattle, WA, where he had another revelation that boosted the performance of the pipelining process - "different operations within a task may sometimes be executed out of order" (186). This idea significantly sped up data processing in computers and ultimately led to the processors we use today.
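The "perform an operation as soon as its inputs are ready" idea is easy to mimic in software. Here is a minimal Python sketch of my own (a toy scheduler, not Denelcor's actual hardware design) that evaluates (a + b) * (c - d) by letting each node fire the moment its operands have arrived:

    # Hypothetical toy dataflow graph: node -> (operation, names of its inputs)
    graph = {
        "sum":  (lambda x, y: x + y, ("a", "b")),
        "diff": (lambda x, y: x - y, ("c", "d")),
        "out":  (lambda x, y: x * y, ("sum", "diff")),
    }

    def run(inputs):
        values = dict(inputs)        # tokens that have arrived so far
        pending = set(graph)
        while pending:
            for name in list(pending):
                op, deps = graph[name]
                if all(d in values for d in deps):   # fire as soon as all inputs are ready
                    values[name] = op(*(values[d] for d in deps))
                    pending.remove(name)
        return values["out"]

    print(run({"a": 2, "b": 3, "c": 10, "d": 4}))    # (2 + 3) * (10 - 4) = 30

Notice that "sum" and "diff" have no ordering between them at all - in real dataflow hardware they would execute in parallel, which is exactly the point.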

21 November 2010

Computer History Museum - The IBM System/360 Revolution

The Inventor with a Delight in Making Things Work

"It is better to take a driving problem that is 
someone else's problem, because it keeps you honest."
- Frederick Brooks
With great abilities and interest in engineering and science, Frederick P. Brooks, Jr., born in Durham, NC in 1931, started his career at Duke University with a double major in physics and mathematics. He graduated in 1953 and realized that he was much more drawn to "a younger field" than physics, so he decided to pursue computer science. He did so at Harvard University under the leadership of Howard Aiken, a pioneer in computing and one of the original designers of the Harvard Mark I computer, used during wartime to calculate "the trajectories of battleship artillery shells based on" various criteria (162). He graduated in 1956 and joined IBM to work on what was then considered an "extremely ambitious" project called the Stretch computer, the world's fastest (at the time) supercomputer, which utilized many of the same concepts that are used to implement modern computer architecture (163). Eventually, Brooks became the project manager of the development of the IBM System/360 line of computers. As part of that project, he and his team had the opportunity to research and improve the OS/360 software package used on the system. A direct descendant of OS/360 still runs on the majority of the big IBM mainframes. In 1964, Brooks returned to his home state upon receiving a job offer from the University of North Carolina at Chapel Hill, where he founded the Department of Computer Science that same year - at the time only the second such department in the entire nation. The department focuses its research efforts on three-dimensional, real-time computer graphics, computer vision, and virtual reality. In other words, it is what Brooks calls "intelligence amplification," and he explains:
"The artificial intelligence approach is to replace the mind. Our approach is always to have the mind at the very center of the system. Now the artificial intelligence community has come around to this idea after twenty-five years. But that wasn't where they started out. They used to say, 'We're going to be able to solve these problems. You don't need a mind.' In fact, you do need a mind."
His approach may not be a conventional one, but it is certainly a good one, and Brooks is right - people will probably always be better than computers in terms of intelligence. The research to make computers smarter continues, but we must never forget that computers are only tools for people to use, and that there are always more important things to consider.

20 November 2010

The Investigator of Time, Space, and Computation

"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."
- Leslie Lamport
Leslie Lamport's initial interests were not simply in science; they were in Einstein's theory of special relativity. Born in 1941 in New York City, Lamport graduated first from MIT with a B.S. in mathematics, then from Brandeis with a PhD in the same field. After graduating in 1972, Lamport continued his work with COMPASS (Massachusetts Computer Associates), the company that was "contract[ed] to write a Fortran compiler for the ILLIAC IV" (125). While with COMPASS, Lamport had the chance to do research on a variety of algorithms. At that point, Dijkstra's many algorithmic solutions for computer science problems were widely publicized, and Lamport was able not only to use these algorithms but also to optimize them. Lamport's Bakery Algorithm and his work on atomic registers provided many software and hardware solutions in industry applications, especially those dealing with early integrated circuitry. His work on distributed systems also made a lasting impact on computer science, as today these algorithms are used to govern networks and larger applications that employ multithreading. Another of Lamport's inventions, famous mostly within the technical community, is LaTeX, a formatting system that he implemented as a hobby in his spare time. Today, "close to 75% of all computer science papers are written in LaTeX." If that is not a clear, lasting impact on the discipline, what is?
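The Bakery Algorithm itself is pleasantly simple: like customers in a bakery, each thread takes a numbered ticket and waits until every thread holding a lower ticket is done. Here is a rough Python sketch of my own of that idea (Python's global interpreter lock hides the low-level memory subtleties the real algorithm is designed to survive, so treat this strictly as an illustration of the structure):

    import threading

    N = 2                       # number of competing threads
    choosing = [False] * N      # True while thread i is picking its ticket
    number = [0] * N            # ticket numbers; 0 means "not interested"
    counter = 0                 # shared resource the lock protects

    def lock(i):
        # Take a ticket one larger than any currently held.
        choosing[i] = True
        number[i] = 1 + max(number)
        choosing[i] = False
        for j in range(N):
            if j == i:
                continue
            # Wait until thread j has finished choosing its ticket.
            while choosing[j]:
                pass
            # The lower ticket (ties broken by thread id) goes first.
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass

    def unlock(i):
        number[i] = 0

    def worker(i):
        global counter
        for _ in range(100):
            lock(i)
            counter += 1        # critical section
            unlock(i)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)              # expected: N * 100

What I find remarkable is that the algorithm needs no special hardware support at all - only reads and writes of the shared ticket numbers - which is exactly why it made such an impression on the hardware and software designers of the time.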

16 November 2010

The Seeker of Good Structure

"I visualize structures, graphs, data structures.
It seems to come easier than a lot of other things."
- Robert Tarjan
Robert E. Tarjan, born in Pomona, CA in 1948, is a generator of "good ideas", and he has managed to come up with quite a few of them over his remarkable career in computer science. Interested in science and mathematics from a very early age, Tarjan was always a big fan of precise and rigorous proofs and, in high school, would often question his teachers if they did not completely explain the reasoning behind their math. This mindset led Tarjan first to Caltech, then to Stanford, from which he graduated in 1972 with a PhD in computer science and a minor in mathematics. His doctoral thesis and most of his early work focused on graph planarity algorithms, which led Tarjan to the research and development of depth-first search algorithms and the data structures that stem from them. Tarjan's latest work has been in the development of persistent data structures, "in which you can keep track of previous versions as well as [the] most recent version" of a system's state and "do [so] efficiently, without copying [the] entire data structure" every time the program runs (117). Persistent data structures allow for recently discovered applications in computational geometry and parallel processing, such as temporal databases, which "are designed to recreate snapshots of the past quickly and efficiently" (118). The numerous algorithms that Tarjan has had the opportunity to work on embody his legacy as he continues his work in computer science, with some advice to present-day programmers: "What do you need to be successful? You need brains but you also need stick-to-itiveness. Many tries at a solution can fail, but then on the last try something magical happens," and that magical phenomenon is worth it - every time (119).
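Since depth-first search comes up so often in Tarjan's work, here is a minimal Python sketch of my own of the idea: follow one path as deep as it goes before backing up, which is the traversal order his planarity and connectivity algorithms build on.

    def depth_first_order(graph, start):
        # graph is an adjacency list, e.g. {"A": ["B", "C"], ...}
        visited, order = set(), []
        def visit(node):
            visited.add(node)
            order.append(node)
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    visit(neighbor)
        visit(start)
        return order

    example = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(depth_first_order(example, "A"))   # ['A', 'B', 'D', 'C']

The search itself is only a few lines; the genius in Tarjan's algorithms lies in what they compute along the way while the search runs.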

The Art of Programming

The Capturer of the Boundless Interest

"Computer programming is an art form, like the creation of poetry or music."
- Donald Knuth
Even Shasha and Lazere wonder if Donald E. Knuth, born in 1938 in Milwaukee, WI, is "just one person" (89). Knuth's career officially started in 1957 when his first piece, on the "potrzebie system of weights and measures," written for the Westinghouse Science Talent Search, was published in MAD magazine. He was only 19 years old at the time and still had no idea that he might be interested in computing. He studied mathematics during his undergraduate years at Case Western Reserve in Cleveland, Ohio, which is also where he taught himself basic programming skills. In 1960, he graduated from Case, receiving a master's degree in mathematics at the same time, and went on to the California Institute of Technology for a PhD in the same field. After completing his PhD work, Knuth stayed on as a member of the Caltech faculty, now working closely with the Burroughs Corporation, alongside Dijkstra and other pioneers in the field, on hardware and software efficiency problems for newly created programming languages like Algol 60. At first, Knuth's work focused on compilers. More specifically, he worked on many of the tools used today to write compilers (and yes, thanks in part to Knuth, many computer science majors today take a compilers course at some point during their undergraduate careers). His work in this area was the perfect segue into his later work on more specific parsing problems and attribute grammars. His early work with attribute grammars branched from the recently discovered Backus-Naur form and other grammar-interpretation algorithms, while his later work dealt with precise analysis and the automation of problem solving, the results of which included the famous Knuth-Bendix algorithm for axiom-based confluent term-rewriting systems. Much of Knuth's work is described comprehensively in his planned seven-volume publication The Art of Computer Programming, which, to this day, remains Knuth's legacy and his biggest contribution to the field of computer science.
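To give a flavor of the grammar-driven parsing that Knuth's compiler tools mechanize, here is a small hand-written recursive-descent evaluator of my own for a toy BNF grammar of arithmetic expressions (an illustration only - Knuth's actual techniques, like LR parsing, are far more general):

    import re

    # Toy grammar:
    #   <expr>   ::= <term>   { ("+" | "-") <term> }
    #   <term>   ::= <factor> { ("*" | "/") <factor> }
    #   <factor> ::= NUMBER | "(" <expr> ")"
    def evaluate(text):
        tokens = re.findall(r"\d+|[()+*/-]", text)
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat():
            nonlocal pos
            pos += 1
            return tokens[pos - 1]

        def expr():
            value = term()
            while peek() in ("+", "-"):
                op, rhs = eat(), term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term():
            value = factor()
            while peek() in ("*", "/"):
                op, rhs = eat(), factor()
                value = value * rhs if op == "*" else value / rhs
            return value

        def factor():
            if peek() == "(":
                eat()               # consume "("
                value = expr()
                eat()               # consume ")"
                return value
            return int(eat())

        return expr()

    print(evaluate("2*(3+4)-5"))    # prints 9

Each function corresponds to one rule of the grammar, which is exactly the connection between notation and program that BNF made possible.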

11 November 2010

The Proponent of the Possibilities of Chance

"We should give up the attempt to derive results and answers with complete certainty."
- Michael O. Rabin
From the very beginning, Michael Rabin, born in Breslau, Germany in 1931, was a brilliant mathematician. While attending Hebrew University in Israel in the early 1950s, he was especially fond of Alan Turing's early work; it was Turing, in fact, who made him realize that he "was going to be interested in logic, actually computability." Turing built the foundation not only for Rabin but also for so many others after him - he proposed a definition of what is computable, he introduced the idea of a machine's "state of mind", and he defined the realms of computability. Rabin took advantage of that. While pursuing his PhD in mathematics, more specifically logic, at Princeton University, he and another young graduate student, Dana Scott, had the opportunity to work at IBM Research for a summer. Rabin and Scott were left to do whatever they found interesting, and soon, the two proposed "a notion of a computer that could 'guess' solutions," which quickly evolved into the miniature computers more commonly known as finite state machines (73). Finite state machines (FSMs) are among the most fundamental structures in mathematics, computing, computer science, and electronics engineering; today, they help solve difficult computation problems, process digital signals, and make up controllers of virtually every conceivable type. Rabin's work did not stop with FSMs, though, as he continued research in a different direction - error in computation, and how some very difficult problems can be solved without obtaining an answer with complete certainty. The idea was revolutionary, especially at the time, and became a fitting precursor to many other inventions in computer science, such as RSA encryption. The chapter on Rabin's stellar career closes with yet another idea and yet another outline of how to proceed to make the computer all it can be: "I don't think this [difference in the way memories - computer vs. human - operate] has to do with a difference between the power of the mind and the power of the computer. It is simply that we don't know how to write a computer program to do it" (88).
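Here is how simple a finite state machine can be - a little Python sketch of my own of a three-state machine that reads a binary string and accepts exactly the strings whose value is divisible by three (the state is just the remainder so far):

    # Transition table: (current remainder, next bit) -> new remainder
    TRANSITIONS = {
        (0, "0"): 0, (0, "1"): 1,
        (1, "0"): 2, (1, "1"): 0,
        (2, "0"): 1, (2, "1"): 2,
    }

    def divisible_by_three(bits):
        state = 0                            # start state: remainder 0
        for bit in bits:
            state = TRANSITIONS[(state, bit)]
        return state == 0                    # accept iff the remainder is 0

    print(divisible_by_three("110"))   # True  (binary 110 is 6)
    print(divisible_by_three("111"))   # False (binary 111 is 7)

Three states and a transition table are the entire machine, yet the same pattern scales up to the controllers, protocol handlers, and signal processors mentioned above.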

10 November 2010

The Inventor of the Appalling Prose and the Shortest Path

"I asked my mother [a mathematician] whether mathematics was a difficult topic. She said to be sure to learn all the formulas and be sure you know them. The second thing to remember is if you need more than five lines to prove something, then you're on the wrong track."
- Edsger Dijkstra
A very refreshing perspective on the field of computer science comes to us from a Dutchman named Edsger W. Dijkstra, born in Rotterdam in 1930. A graduate of the Gymnasium Erasmianum, an elite traditional-education high school, and an alumnus of the University of Leiden, where he pursued a degree in theoretical physics, Dijkstra had always had a radical and very rigorous approach to mathematics and, eventually, computer science. His work on the shortest path problem (a small sketch of which follows below) and the dining philosophers problem shows exactly that. The most admirable thing, however, is that he did his work - and advocated for the discipline of computer science - when computer science, and certainly its theoretically demanding aspects, did not yet exist, or at least was not recognized within the scientific community at that point in time. He is "the creative scientist whose love of good problems and enduring solutions have made enormous contribution to the science and practice of computing," and from his point of view, we should take some advice, which Dijkstra himself put into "three golden rules[:]
  1. Never compete with colleagues.
  2. Try the most difficult thing you can do.
  3. Choose what is scientifically healthy and relevant. Don't compromise on scientific integrity."
and never lose sight of the goal (67).
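As promised above, here is a minimal Python sketch of my own of Dijkstra's famous shortest-path idea: always settle the closest unsettled node next and relax the distances of its neighbors (the priority queue used here is a later refinement of his original 1959 formulation):

    import heapq

    def shortest_distances(graph, source):
        # graph: {node: [(neighbor, edge_weight), ...]} with non-negative weights
        dist = {source: 0}
        queue = [(0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > dist.get(node, float("inf")):
                continue                     # skip stale queue entries
            for neighbor, weight in graph.get(node, []):
                candidate = d + weight
                if candidate < dist.get(neighbor, float("inf")):
                    dist[neighbor] = candidate
                    heapq.heappush(queue, (candidate, neighbor))
        return dist

    roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
    print(shortest_distances(roads, "A"))    # {'A': 0, 'B': 3, 'C': 1}

It is hard to think of a better example of his "enduring solutions": the same short routine, give or take the data structure, still sits inside every routing and navigation system today.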

08 November 2010

The Uncommon Logician of Common Sense

"If you want the computer to have general intelligence, the other structure has to be commonsense knowledge and reasoning."
- John McCarthy
The inventor John McCarthy, born in Boston in 1927, was always fascinated by logic and common sense. His interests led him to the California Institute of Technology in 1944, where he got his undergraduate degree in mathematics. After a lecture on self-replicating automata given by John von Neumann at the Hixon Symposium on Cerebral Mechanisms in Behavior at Caltech, McCarthy was left fascinated. The following year he transferred to Princeton University and began his PhD work in mathematics there. His thesis involved modeling "human intelligence on a machine" (23). In 1952, McCarthy, with some help from his fellow graduate students, decided to collect papers on the subject from all those who were interested in this research. Claude Shannon, the inventor of the mathematical theory of communication, which is also known as information theory, worked with McCarthy to formalize the project under the name The Dartmouth 1956 Summer Research Project on Artificial Intelligence. According to Shasha and Lazere, this project proved to be a groundbreaking event in computer science, "the ambitious goal for [which]...was to...'Proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it'" (25). The conference did not resolve many of the questions posed by the attendees, but a foundation for artificial intelligence in computer science was formed. Soon, list structures in logical reasoning, Lisp, situation calculus, and elaboration tolerance were researched and developed, all in an attempt to reach McCarthy's major goal - "to make a machine that would be as intelligent as a human" (30). Computer scientists today are still working on it, but McCarthy has certainly laid down quite a foundation. The goal may be set very high, but the task is doable, and "the difficulties uncovered by McCarthy and his colleagues [will] serve as 'keep off' warnings" for generations of programmers to come (36).

06 November 2010

Alan Kay Shares a Powerful Idea About Ideas

The Clear Romantic Visionary

"All understanding begins with our not accepting the world as it appears." 
- Alan C. Kay
For Alan Kay, it all started with a 1945 article in the Atlantic Monthly about Vannevar Bush's proto-computer called the differential analyzer. Kay, born in 1940 in Springfield, Massachusetts, was brilliant from the very beginning, having learned to read early and having read a few hundred books by the time he started school. This brilliance, and a lot of opportunity, got Kay to Brooklyn Technical High School, then Bethany College in West Virginia, and then the Air Force. By 1961, he was working as a programmer for the Air Force, where he had the chance to work with data in independent procedure bundles that only had to keep track of the data relevant to the bundle. "The idea that a program could use procedures without knowing how the data was represented struck Kay as a good one," and he later followed up on the concept with his work on objects and object orientation. In 1962, he left the Air Force and matriculated into the mathematics department at the University of Colorado, from which he graduated in 1966 with a double major in mathematics and molecular biology. Then, Kay finally decided to try computer science, so he went to the University of Utah and started working on his PhD in computer science there. At Utah, his search began for a way to implement powerful, intricate, and very involved systems from simple building blocks. Can a program support a bunch of instances of an object that all conform to the behavior described in a master set? Can these instances be different? How would the programmer differentiate between them? How would messages be passed between the instances and to the controller cells? "The ability to start with an idea and see it through to a correct and efficient program is one prerequisite for a great software designer," and Kay certainly proved that he was one with his formulation of object orientation, an answer to all the questions above as posed by Kay himself (46). While object-oriented programming, perhaps the most famous concept in computing, may embody Kay's legacy, Kay's work in education is his real contribution to the world. From the Vivarium to the Dynabook to the $100 computer, Kay has helped transform the classroom and has made a difference in the lives and education of many children in the United States.
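Kay's questions map almost one-to-one onto what we now take for granted in object-oriented languages. Here is a tiny Python sketch of my own (obviously not Kay's Smalltalk, just an illustration of the same idea):

    class Turtle:
        # The class is the "master set" of behavior every instance conforms to.
        def __init__(self, name):
            self.name = name          # each instance keeps its own state...
            self.position = 0

        def forward(self, steps):     # ...and responds to the same "messages"
            self.position += steps

        def report(self):
            return f"{self.name} is at {self.position}"

    # Two instances share behavior but differ in identity and state.
    a, b = Turtle("alpha"), Turtle("beta")
    a.forward(10)
    b.forward(3)
    print(a.report())    # alpha is at 10
    print(b.report())    # beta is at 3

The class answers the "master set" question, the per-instance state answers how instances can differ, and the method calls are the message passing Kay had in mind.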

02 November 2010

The Restless Inventor

"We didn't know what we wanted and how to do it. It just sort of grew. The first struggle was over what the language would look like. Then how to parse expressions - it was a big problem and what we did looks astonishingly clumsy now..." 
- John Backus on the Invention of Fortran
John Backus, born December 3, 1924 in Philadelphia, PA, was an inventor "motivated less by necessity than by sheer irritation at imprecision or inefficiency" (5). A brilliant man with many interests and an early lack of direction in life, he was the inventor of Fortran, the first high-level computer language in history. His other major career accomplishments include the development of Speedcoding, then Algol, and then the Backus-Naur form, an outline of rules for context-free grammars that is often used to describe the syntax of computer languages. Shasha and Lazere state that "Backus had invented one of the world's first and most popular programming languages and had developed a notation that would permit the definition of over a thousand more. Many people, even many great scientists, might have coasted after such achievements. Not Backus. He wasn't sure he liked what he had done" (17). It may seem counter-intuitive at first, but the book makes it clear that only a person with an extraordinary understanding of both computers and the people who work with them - and with a truly extraordinary drive - would keep going. From this attitude came the development of function-level programming (FP), which lets programmers describe more of what they want to get done without getting involved in how the hardware is going to do it. Still, FP had its quirks. Upon his retirement, Backus withdrew from the field, and I agree with the book: "if he had not solved the problem [of creating the perfect computer programming language], he has posed it beautifully," and now other programmers have an opportunity to solve this problem and take the world of computer science to a new level (20).
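Function-level programming is easiest to appreciate through Backus's own favorite example, the inner product, which FP defines purely by composing functions, with no loops and no named variables for the data. Here is a rough Python imitation of my own of that style (Python is not an FP language, so this only gestures at the idea):

    from functools import reduce

    def compose(*fs):
        # Right-to-left composition: compose(f, g)(x) == f(g(x))
        return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

    transpose  = lambda vectors: list(zip(*vectors))           # pair up the components
    times_all  = lambda pairs: [x * y for x, y in pairs]       # "apply-to-all" of multiply
    insert_add = lambda xs: reduce(lambda a, b: a + b, xs)     # "insert" (fold) of plus

    # Backus's FP definition reads: InnerProduct = (Insert +) o (ApplyToAll x) o Transpose
    inner_product = compose(insert_add, times_all, transpose)

    print(inner_product([[1, 2, 3], [4, 5, 6]]))   # 1*4 + 2*5 + 3*6 = 32

The whole program is the composition itself; the step-by-step data flow is never spelled out, which is exactly the "what, not how" that Backus was after.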

01 November 2010

A Comprehensive Overview of the History of Computing

Entering Silicon Valley

The Fundamental Question

Backus. McCarthy. Kay. Dijkstra. Rabin. Knuth. Tarjan. Lamport. Cook. Levin. Brooks. Smith. Hillis. Feigenbaum. Lenat. Do any of these names sound familiar? To most people, probably not - yet each of these names is shorthand for a painstaking search, an avant-garde idea, a progressive body of thought. Most started with a simple math problem. Their heads in the game and their minds focused, these men were very forthright in asking not whether something can be computed but how to compute it most efficiently. The computer was both the starting point and the core of the problem, which "required writing instructions for a real machine, finding an efficient solution, building a better computer for bigger versions of the problem, and, sometimes, [boldly] asking the computer to participate in the creative process" (ix). When they started, they started with nothing, yet "modern computing would be unrecognizable without their contributions" (x). This blog, with the help of Dennis Shasha's and Cathy Lazere's book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists, will strive to examine the lives of 15 world-famous computer scientists - their interests, their questions, their environment, their discoveries, and their legacies. Like the book, I will "try to explain the ideas and their importance without scientific jargon, so you need not have any special background, other than curiosity about how computing has evolved and how this special breed of scientists thinks" (x). Indeed, it will be a terrific journey through the history of computing. Enjoy!

Shasha, Dennis, and Cathy Lazere. Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. New York, NY: Copernicus, 1995. Print.