While I concede that, from an economic standpoint, we need more people to become computer scientists, I believe viewing computer science education exclusively through an economic lens misses the point. Literacy in computer science is much more than a skill to be hired for. Computer science is a way of thinking about problems and solutions, a way of expressing creativity, and a critical foundation for understanding the high-tech world we are quickly creating. To value this field based solely on its economic merit is to argue that the only reason to learn computer science is to become a computer scientist. But there is an important distinction to be made between becoming a computer scientist and learning computer science.
I do not believe everyone should get a degree in computer science and become a computer scientist. I do believe, however, that everyone should learn computer science, or at least its foundations. In this regard, we ought to stop perceiving computer science as a means to an end and start perceiving it as an end in itself. The following reasons illustrate why.
It Teaches You How to Think…
Many people, including Bill Gates and Steve Jobs, have claimed, “Computer science teaches you how to think.” But in what ways, exactly, does computer science teach you how to think?
There is a popular TED Talk by Sir Ken Robinson in which he explains that true genius, creativity, and innovation come from a type of thought process called divergent thinking. Divergent thinking is the ability to take an object, situation, or concept and map out many different uses or solutions. One example he uses is a paper clip. Most people may be able to conceive of a dozen or so unique uses for a paper clip, but great divergent thinkers may be able to conceive of hundreds.
In school and in our careers, most of us are trained to think convergently. In math, we are taught there is one right answer and, worse, only one way to find it. In language arts, we are told which themes a piece of literature portrays, and then required to analyze those themes. In science, we are told what experiments to conduct and given a step-by-step tutorial on how to conduct them. And at work, we are told how to do our jobs because “that is how it has always been done.” These approaches are efficient for transmitting the information we need in order to function effectively within society. An unfortunate side effect, however, is that they often diminish our ability to derive viable alternatives. Few subjects are inherently capable of combating this side effect, but computer science is one.
Computer science not only promotes divergent thinking, it requires it. In computer science there are countless ways to solve a given problem. What’s more, once a solution is derived, there are countless technologies available to implement it. Suppose I took a team of four computer scientists and told them, “I have data that I would like to store persistently on a computer. Build me a solution.” Odds are, I would get back four markedly different, yet equally viable, solutions.
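To make that concrete, here is a minimal sketch of just two of the many possible answers: persisting the same record to a flat JSON file and to an embedded SQLite database. The movie record and field names are invented for illustration.

```python
import json
import os
import sqlite3
import tempfile

record = {"title": "Aladdin", "year": 1992}  # hypothetical data to persist

# Solution 1: serialize the record to a JSON file on disk.
path = os.path.join(tempfile.gettempdir(), "movie.json")
with open(path, "w") as f:
    json.dump(record, f)
with open(path) as f:
    from_json = json.load(f)

# Solution 2: store the record as a row in an embedded SQLite database.
conn = sqlite3.connect(":memory:")  # in-memory here; a file path would persist it
conn.execute("CREATE TABLE movies (title TEXT, year INTEGER)")
conn.execute("INSERT INTO movies VALUES (?, ?)", (record["title"], record["year"]))
from_db = conn.execute("SELECT title, year FROM movies").fetchone()
```

Both approaches solve the same problem with different trade-offs: the flat file is simple and portable, while the database supports querying and structured access. Neither is “the” right answer, which is precisely the point.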
There are two types of resources in the world: modular and static. Modular resources are those that can be integrated into a variety of disparate activities. For instance, electricity is highly modular. It can be used to turn on a lightbulb, power a car stereo, or cool a refrigerator. On the other hand, static resources are those that are confined to very specific uses. For instance, carpet is really only useful for one thing. We often see static and modular resources in organizations too. Some companies may confine their accounting team to do one thing: maintain the financials. Others may view their accounting team modularly and decide to build a cross-functional team of accountants, salespeople, and web developers to allow them to create new and interesting solutions they might never have thought of in a static environment.
Much like divergent thinking, computer science is a paradigm that requires modular thinking. Software designed to handle a single, specific case is typically not useful. To illustrate this point, consider a computer program that could only store data about a movie if that movie happened to be “Aladdin”. If we needed to store data about a different movie, we would have to write a whole new program, almost identical to the first, that stored data about that movie instead. For every movie we wanted in our database, we would have to copy and paste most of the same code, changing only the particular movie that program stored. This is a static approach.
A better approach would be to write a program that understood the concept of a movie in more generic terms. In this way, any number of “movie” data points could be stored using a single program (as opposed to only allowing for an “Aladdin” data point to be stored). You might be tempted to say, “Obviously you should allow any movie to be stored, not just the movie ‘Aladdin’. I mean jeez, I didn’t know computer scientists lacked that much common sense.” But I am abstracting the underlying details here, and in other, non-trivial circumstances modularity can be much more ambiguous.
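A minimal sketch of the contrast, using invented movie data:

```python
# Static approach: the program only knows about one specific movie.
# Storing "Toy Story" would require writing a near-identical new function.
def store_aladdin(database):
    database.append({"title": "Aladdin", "year": 1992})

# Modular approach: the program understands "movie" as a generic concept,
# so a single function handles any number of movies.
def store_movie(database, title, year):
    database.append({"title": title, "year": year})

static_db, modular_db = [], []
store_aladdin(static_db)
store_movie(modular_db, "Aladdin", 1992)
store_movie(modular_db, "Toy Story", 1995)
```

The modular version is barely longer than the static one, yet it covers infinitely more cases; the design decision, not the amount of code, is what matters.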
Most would agree that critical thinking skills are essential. Being able to objectively analyze problems in order to formulate an effective solution is important in all walks of life – from completing a difficult task at work, to managing and communicating with people, to deciding whom to vote for. However, there is a higher level of problem solving that requires deriving a solution that is not just effective but efficient as well. For almost every problem, we do not have unlimited resources at our disposal to solve it. Therefore, we must come up with solutions that may not be optimal, but that are the best we can do with the resources we are given. This is especially true in computer science.
A fundamental problem in computer science is search. For most real-world problems worth solving, the search space is too large either to store in memory or to traverse exhaustively in a reasonable amount of time. Because of this, algorithms must be developed to search more intelligently, finding the solution (or the best available solution, if more than one exists) within limited time and memory. A great example of this is Google DeepMind’s AlphaGo program, which recently beat top Go player Lee Sedol 4-1. Go has an estimated 10^170 legal board positions (by contrast, chess has an estimated 10^47). With current technology, we would hardly make a dent in this problem if we simply used critical thinking to brute-force our way through it. It is algorithmic thinking that made AlphaGo’s victory possible.
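AlphaGo’s actual method (Monte Carlo tree search guided by neural networks) is far beyond a short sketch, but the core idea of algorithmic thinking – exploiting structure to avoid examining every possibility – can be shown with a classic toy example: binary search on a sorted list needs about log2(n) comparisons where brute force needs up to n.

```python
def linear_search(items, target):
    """Brute force: check every element, up to len(items) comparisons."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    """Exploit sorted order: halve the remaining search space each comparison."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

items = list(range(1_000_000))
```

Finding the last element of this list costs a million comparisons by brute force but about twenty by binary search. The same spirit of pruning, scaled up enormously, is what makes games like Go tractable at all.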
It Develops Quantitative and Creative Skills
Now that we understand how learning computer science teaches us to think, it may be fair to ask, “That’s great! But the skills you mentioned seem geared toward people in quantitative fields anyway. Why should everyone learn computer science if the skills it develops only apply to those people?” I would begin by arguing that we live in an increasingly data-driven world. Marketing strategies are being dictated less by creative geniuses like the ad men of Mad Men, and more by data and statistical analysis. Business strategy, accounting, weather forecasting, sports, manufacturing, and many more fields are being transformed into multi-disciplinary endeavors through the introduction of quantitative methods. Innovation itself is typically fueled by multiple disciplines merging, and data and quantitative analysis commonly make up one of those disciplines.
There is, however, a problem with this change. Many people are simply not well trained in these areas. I blame this in large part on the way our schools teach subjects. Our education system is the epitome of “… because that’s how we’ve always done it.” And it has always done it by teaching subjects independently of one another: one hour of the day devoted to writing, one to reading, another to math, another to the hard sciences, and one to the social sciences. The result of this paradigm is bifurcation. As adults we tend to say things like “I’m terrible at math” or “I love math”; “I’m super artistic” or “I’m not creative at all”; and so on. Subjects we learned as children polarize us as adults because of the self-fulfilling prophecy that propagates from our past performance in school. Students who understood math and science the way they were taught usually continue on to pursue careers that require quantitative skills. Students who excelled in the arts tend to pursue those passions, to some extent, throughout their lives.
What if it didn’t have to be that way? What if, instead of learning subjects independently in a way that leads to bifurcation, we learned them in a multi-disciplinary way that blurs the borders between them? How might that look?
Imagine a school week where instead of partitioning the days into discrete subjects, as described earlier, each day was spent on a single project that combined each of these disciplines. One such project might be to program a web page that maps Lewis and Clark’s expedition to the Pacific, complete with interactive fun facts and code documentation so others could read and understand the program. Artistic skills would be trained through drawing and designing the map; reading would be required to understand how to use the technology as well as learn about this important event in American history; writing muscles would be flexed in the coding and especially in the documentation of the program; and math skills would be developed through the graph theory needed to bring the map to life on a web page.
In this paradigm, students would be less inclined to differentiate individual subjects, and instead perceive each activity as a part of a holistic effort to solve a particular problem. The ability to do just that is becoming increasingly important in the world we are creating.
The world we are creating is cross-functional and, as such, new generations will find it more challenging to succeed with a single skill set – both economically and socially. They must feel comfortable in multiple disciplines if they are to thrive. Computer science is a great paradigm for developing skill sets in multiple disciplines because it is, by its very nature, multi-disciplinary. Computer science alone is a very dry and boring subject. It is only when we combine it with another field that it becomes interesting. I love to combine it with areas of business, particularly finance. I have a classmate who is passionate about digital art and uses his dual skill set to program artwork. Another classmate is a musician and uses his understanding of graphics to pair visual effects with songs. Some computer scientists love biology and use their skills to analyze the human genome. Others are fascinated by machines and have used computer science to build physical robots. Countless more partnerships form between computer science and other disciplines, each one of value beyond its economic impact.
I believe learning computer science can play a critical role in making us all more intellectually whole. By creating a union between disparate disciplines, computer science may be the key to helping artists find their inner mathematician, and mathematicians find their inner artist.
It is a Critical Foundation for the Future
One of the most difficult things to intuit is exponential growth. Our intuition is to understand the world linearly, but human progress does not advance linearly; it advances exponentially. By this, I mean that the rate of progress itself keeps increasing. The implication for technology is that by the end of this century, technology will be advancing so quickly that it will progress more in a matter of hours than it did in the whole of the 20th century.
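The gap between linear intuition and exponential reality can be made concrete with a toy calculation, using made-up growth numbers:

```python
def linear_growth(start, step, periods):
    """Grows by a constant amount each period: how our intuition works."""
    return start + step * periods

def exponential_growth(start, rate, periods):
    """Multiplies by a constant factor each period: how progress compounds."""
    return start * rate ** periods

# After 20 periods, adding 10 per period reaches only 201,
# while merely doubling each period exceeds a million.
lin = linear_growth(1, 10, 20)
exp = exponential_growth(1, 2, 20)
```

Our intuition handles the first number easily and badly underestimates the second, which is exactly why exponential technological change keeps catching us off guard.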
But we need not view progress on a centennial scale to understand how quickly technology advances. Consider instead what has happened in the last twenty years. From 1995 to 2005, we went from personal digital devices being a novelty to nearly everyone having a computer with high-speed internet and a cell phone. From 2005 to 2015, we gained social media, streaming media, and smartphones. In the next decade, it is expected that self-driving cars will take over our roads, Amazon’s Echo will acquire a physical, mobile body to become a true personal assistant, Hyperloops will make long-distance travel faster and cheaper than ever before, and the Internet of Things will connect every part of our lives through digital networks.
Amidst this progress, we will face difficult decisions that will impact each of us in direct and personal ways. Public policy and legal issues regarding self-driving cars, nanobots, and a number of other new technologies will come to the fore. These topics are difficult to have open, thought-provoking discussions about because we lack a common, foundational understanding of how such things work. The current case between Apple and the FBI exemplifies this problem. Policymakers asking, “Can we not find a middle ground between no encryption and strong encryption to be used for the sake of national security?” demonstrate our collective technological ignorance. Posing such a question is comparable to a young child – who has yet to learn about human reproduction – asking her mother why some pregnant women are more pregnant than others. The mother, understanding that the women are all equally pregnant but in different trimesters, may find her daughter’s question a humorous product of innocent ignorance. But while a young child’s ignorance about pregnancy may be inconsequential, a policymaker’s ignorance about encryption is not.
Of course, technological literacy has implications beyond policymaking. People must decide how technology progresses and what goals it aims to achieve. In a technologically illiterate society, these decisions are left to the few in the know, with little input or intervention from the general population. Competition and informed consumers are critical to ensuring that new products best serve our collective interest. We are lacking in the latter, and it is important to correct this, as technological progress will only continue to accelerate.
As I stated at the beginning of this article, I do not believe everyone ought to become an expert in the field of computer science. However, I do believe everyone ought to learn the foundations of computer science and how technology works. Computer science teaches us how to think divergently, modularly, and algorithmically – cognitive abilities that translate to all walks of life. It combines multiple disciplines, which allows us to develop both our quantitative and artistic abilities. And it is an important foundation for making informed, wise decisions about the goals we ought to achieve with technology, as well as the policy we put in place to regulate those efforts.