Artificial Intelligence

Over the last year, artificial intelligence (AI) has become nearly ubiquitous in the news. Just recently, Elon Musk called it a threat to human civilization. His warnings have been the direst, but many others believe that AI has the potential to replace billions of human jobs, and that we need to adapt now to prevent mass unemployment.

This represents a naïve view of capitalism, but one that is increasingly popular with politicians, pundits, and the people who listen to them. Jobs may certainly be cut, but it is more likely that new jobs, in the traditional sense, simply will not be created. Companies will reduce labor costs across the board, leaving more profits for business owners and their remaining knowledge workers. Prices for goods and services that involve automated labor will also come down relative to all other prices. The result will be more discretionary income, and where we choose to spend it will determine in which sectors new jobs are created. Certainly, there will be people left behind, at least temporarily, and society may need to step in and assist them. However, in the long term, so long as workers have the knowledge and skills needed to manage AI, automation, and other technologies, the economy will benefit from, not be harmed by, the AI age.

As more and more work becomes automated, there will certainly be less work to be done, in the aggregate, by humans. There is always the opportunity for new work to emerge – work that does not exist today and work that we have not yet conceived of as possible, necessary, or important. However, this work, too, may eventually be automated. Some may say that there will always be work for humans in managing the automation – repairing robots and writing the automation software, to begin with. I see no reason to think that this cannot be automated either.

As a result of this ubiquitous automation, there may someday be no jobs left for humans at all. People fear that we would be left with artificial, robotic economic overlords. I think this, too, is a naïve understanding of the economy. In fact, the AI age could also be a post-capitalism age. People would work less, and the work left to us would be judgment-based. How do we apportion the food that the robots are cultivating? Who should have the rights to exploit the minerals we can mine from the Earth (and asteroids!), since nearly everyone would have limitless abilities to produce with those metals and minerals? I doubt that we would want to automate the answers to these types of important questions. Even if we did, it would not be wise, because the ability to think critically would be diminished worldwide and not passed down to future generations. In some respects, everyone in the post-capitalism age would be one of Plato’s philosopher kings. We could also dedicate more of our time to art and creation, as well as their consumption.

AI isn’t just ubiquitous in the news anymore. It has become increasingly common in our homes and in our pockets. Chatbots, digital personal assistants, and home devices – Amazon Echo’s Alexa, Siri, and Google Now – are all examples of artificial intelligence. My phone is always trying to guess where I am and when I should leave for events. That’s AI in my pocket (I’ve actually been meaning to turn that off, since I don’t have a car).

As AI proliferates, so does the vocabulary we use to talk about it. Along with AI, people mention machine learning, deep learning, and cognitive computing. In general, AI seems to be an umbrella term that encompasses all of these techniques. In popular terms, AI refers to consumer applications where a computer emulates activities that we would typically conduct with another person. Think of talking with Alexa as a prime example. Getting down-to-the-minute weather predictions from an app, rather than a meteorologist, is another good example. In more technical terms, AI refers to all applications in which a computer does what used to be restricted to the domain of a biological brain: sensing and cataloguing information, processing and analyzing it, and using that synthesized information to recognize patterns, make predictions, and make decisions.

Ex Machina

Machine Learning

Consider a smart watch or wristband that records the time its user wakes up every morning for an entire month. After collecting that data, it calculates an average weekday wake-up time and sets an alarm automatically. On three of the next five weekday mornings the user snoozed the alarm for ten minutes, and on the other two mornings the user got up as soon as the alarm went off. Using this new information, the wearable revises the wake-up time to be slightly later, and thereafter continues to monitor and revise the wake-up time according to the user’s actual behavior.

This is an example of machine learning. Without any user input, the machine makes inferences, assesses their accuracy, and iterates accordingly. However, it’s quite rudimentary: the techniques are fairly basic, and the result is not something that the human mind could not have arrived at on its own.
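To make the mechanics concrete, here is a minimal sketch of that feedback loop in Python. Everything here – the function names, the ten-minute snooze length, the adjustment step – is invented for illustration, not taken from any vendor’s actual software:

```python
# A minimal sketch of the adaptive alarm: average a month of wake-ups,
# then nudge the alarm later in proportion to how often the user snoozes.
from statistics import mean

def initial_alarm(wake_times):
    """Average the observed weekday wake-up times (minutes after midnight)."""
    return mean(wake_times)

def revise_alarm(alarm, snoozes, snooze_length=10, step=0.5):
    """Shift the alarm later based on the observed snooze rate.

    snoozes: one boolean per weekday morning (True = user snoozed).
    step: fraction of the snooze length applied per unit of snooze rate.
    """
    snooze_rate = sum(snoozes) / len(snoozes)          # e.g., 3/5 = 0.6
    return alarm + snooze_rate * snooze_length * step  # drift later, gently

# A month of wake-ups around 6:50-7:00 am (in minutes after midnight)
history = [410, 415, 420, 412, 418] * 4
alarm = initial_alarm(history)                          # 415 -> 6:55 am
alarm = revise_alarm(alarm, [True, True, True, False, False])
print(alarm)                                            # 418 -> 6:58 am
```

Each week the wearable would call revise_alarm again with fresh observations, so the alarm keeps tracking the user’s actual behavior.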

A more complex application would infer where the user works based on normal daily travel patterns (unless you have turned it off, your smartphone is probably already transmitting this information), and then analyze roadway traffic and the activities of other users to automatically set the alarm so that the user arrives at work (or school, or the gym, etc.) at their preferred time. As the decision draws on more information, the analysis techniques become more complex and begin to resemble artificial intelligence.
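As a rough illustration of the first step, inferring “work” can be as simple as clustering weekday-morning location fixes and taking the most common one. The coordinates below are made up:

```python
# Infer a likely work location as the modal weekday 9 am GPS fix.
from collections import Counter

weekday_9am_fixes = [
    (40.7128, -74.0060), (40.7128, -74.0060), (40.7130, -74.0059),
    (40.7128, -74.0060), (40.6892, -74.0445),  # one off-site day
]

# Round to 3 decimal places (~100 m) so nearby fixes fall into one cluster
clusters = Counter((round(lat, 3), round(lon, 3)) for lat, lon in weekday_9am_fixes)
work_location, days_seen = clusters.most_common(1)[0]
print(work_location, days_seen)  # the modal 9 am spot is probably "work"
```

A real system would layer traffic predictions and commute-time estimates on top of this inference before setting the alarm.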

More important to understand than the capabilities of machine learning is that its approach to information analysis is vastly different from traditional analytical decision-making. For instance, a financial institution can feed a computer vast quantities of information on borrowers and their loan performance history. A machine learning program can then process all of this information and determine which variables best predict loan performance. Traditionally, a bank would apply financial and economic theory to create a credit model and then test it, altering it to find the best fit. The machine learning approach relies on a completely different paradigm. Rather than approaching the problem from a basis of assumptions, using machine learning implies ignorance, or at the very least an openness to unanticipated patterns and relationships. Machine learning tests all possible relationships and patterns and makes the best predictions, even if they go against our intuitions. Industries and practitioners that are not accustomed to this approach, or are unwilling to appreciate its merits, may soon find themselves outpaced and outperformed by more machine-savvy competitors.
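Here is a hedged sketch of that difference using synthetic loan data: instead of encoding a credit theory up front, we hand every variable to a model and let it rank which ones actually predict default. The variable names and the data-generating process are invented for illustration:

```python
# Let the model, not a theory, decide which borrower variables matter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
features = {
    "income":         rng.normal(60_000, 15_000, n),
    "debt_ratio":     rng.uniform(0, 1, n),
    "years_employed": rng.integers(0, 30, n),
    "zip_density":    rng.uniform(0, 1, n),   # a variable theory might ignore
}
X = np.column_stack(list(features.values()))
# In this synthetic world, defaults are driven mostly by the debt ratio
default = (features["debt_ratio"] + rng.normal(0, 0.2, n) > 0.9).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, default)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:>14}: {importance:.2f}")    # debt_ratio should dominate
```

The point is that the model was never told which variable mattered; it surfaced the relationship by testing the patterns in the data.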

Deep Learning

Deep learning is an even more sophisticated form of machine learning. Deep learning employs a data analysis technique called neural networks (or neural nets) – layers of interconnected nodes joined by weighted connections – to identify relationships in data. The technique is called a neural net because its structure resembles that of the neurons in the brain.
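To give a feel for what a neural net actually computes, here is a from-scratch sketch: a single hidden layer trained by gradient descent to learn XOR, a toy function that a single layer of weights cannot represent on its own. This is illustrative code, not a production framework:

```python
# A tiny neural network (2 inputs -> 4 hidden units -> 1 output) on XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass (chain rule)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())                  # should approach [0, 1, 1, 0]
```

Stack many more layers of these weighted connections and you have “deep” learning; training them efficiently is the hard part, which is what the video below unpacks.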

Here is a YouTube video that does a fairly good job of explaining the technique in a short amount of time:

https://www.youtube.com/watch?v=i6ECFrV_BVA

Simple!

Deep learning is powerful enough to accomplish advanced pattern recognition – pattern recognition that can be deployed in situations ranging from understanding what is happening on city streets and high-speed highways (self-driving vehicles) to learning what different types of animals look like and then drawing them. I can imagine a deep learning application that is fed many thousands of oncological images and trains itself to identify cancer. As doctors confirm or reject the program’s conclusions, it would store this information and refine its own predictions. Eventually, the program could become more accurate than doctors and radiologists.
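That confirm-or-reject loop is essentially what is called online (incremental) learning. A hypothetical sketch, with random vectors standing in for extracted image features:

```python
# Doctor-in-the-loop refinement: predict, get the doctor's label, update.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss")   # "log" in older scikit-learn versions
model.partial_fit(rng.normal(size=(10, 64)),       # seed with 10 labeled scans
                  rng.integers(0, 2, 10), classes=[0, 1])

def review(scan_features, doctor_label):
    """One review cycle: the model predicts, then learns from the doctor."""
    prediction = model.predict(scan_features.reshape(1, -1))[0]
    model.partial_fit(scan_features.reshape(1, -1), [doctor_label])
    return prediction == doctor_label

# Every reviewed scan nudges the model toward the radiologists' judgments
agreements = [review(rng.normal(size=64), rng.integers(0, 2))
              for _ in range(100)]
print(sum(agreements), "of 100 predictions matched the doctor")
```

With real image features (themselves produced by a deep network) and thousands of reviews, the agreement rate is what you would watch climb over time.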

This extreme accuracy is what has people such as Elon Musk concerned about artificial intelligence. Will we need doctors if algorithms are better at their work than the doctors themselves?

Cognitive Computing

Cognitive computing is a term I have been hearing less and less; artificial intelligence seems to have become the preferred buzzword. However, I think cognitive computing retains a unique definition and is useful for understanding many technologies. Generally, cognitive computing refers to computing processes designed to emulate how humans process information and think. Watson is the most famous cognitive computer, and its name and promoted abilities all seem to allude to a human mind.

One of Watson’s abilities is natural language processing. Rather than having to be fed data in a neat spreadsheet or form, Watson can consume unstructured data, make heads or tails of it, and then process it. In business school, a common assignment is creating a pro forma financial statement from a professor’s written description of a company’s financial condition. It’s fairly rote and mechanical: students translate the explanation of the finances into a spreadsheet, which then does the mathematical processing. Cognitive computing skips that translation step. It can understand the natural-language explanation of the company’s finances and directly make the computations needed for the pro forma.
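Watson’s actual pipeline is far more sophisticated, but a toy version of that skipped translation step might extract the figures straight from the prose and compute the projection directly. The memo and the patterns below are invented for illustration:

```python
# Pull structured pro forma inputs out of a natural-language memo.
import re

memo = """Revenue was $1.2 million last year and is expected to grow 10%.
Cost of goods sold runs at 40% of revenue, and fixed costs are $300,000."""

def dollars(pattern):
    m = re.search(pattern, memo)
    value = float(m.group(1).replace(",", ""))
    return value * 1_000_000 if "million" in m.group(0) else value

revenue  = dollars(r"Revenue was \$([\d.,]+)( million)?")
growth   = float(re.search(r"grow (\d+)%", memo).group(1)) / 100
cogs_pct = float(re.search(r"runs at (\d+)% of revenue", memo).group(1)) / 100
fixed    = dollars(r"fixed costs are \$([\d,]+)")

projected_revenue = revenue * (1 + growth)                 # $1,320,000
operating_income  = projected_revenue * (1 - cogs_pct) - fixed
print(f"Projected revenue: ${projected_revenue:,.0f}")
print(f"Operating income:  ${operating_income:,.0f}")      # $492,000
```

Regular expressions obviously break the moment the wording changes; the promise of cognitive computing is doing this robustly across any phrasing.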

In fact, it seems that Goldman Sachs and other investment banks are doing just that. They’ve been announcing more and more investments in AI along with reductions in the sizes of their M&A teams over the last few years. Goldman Sachs may have gone the furthest. Their CEO has declared that Goldman is really just a “tech” company, and the former Chief Information Officer is now the CFO of the company.

Machine-powered gaming is also a direct application of cognitive computing, because it pits computer cognition directly against human cognition. AI watchers were stunned early this year when a Google-designed machine was able to defeat a Go master. Go is an ancient Chinese game that is strategically very complex; for those of us raised on Western games, it is more difficult and complex than chess.

Quantum Computing

One of the challenges with artificial intelligence is that conventional super-computers do not possess enough processing power to crunch through all the nodes in deep neural networks fast enough. Physicists and computer engineers are working on a solution known as quantum computing. Traditional computers store information in bits, which can either represent a 1 or a 0. However, using the quantum physics concept of superposition, a quantum bit, or qubit, can exist in both states at once. If engineers can create stable computers that harness qubits, computing power will exponentially increase.

Here is a good explanation of the concept and recent developments in the field.
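The superposition idea can at least be simulated for a single qubit with ordinary code: the state is a vector of two complex amplitudes, and measurement outcomes follow the squared magnitudes of those amplitudes. A small NumPy sketch:

```python
# One qubit as a 2-element complex state vector, measured 1,000 times.
import numpy as np

rng = np.random.default_rng(3)
ket0 = np.array([1, 0], dtype=complex)          # the |0> basis state

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
qubit = H @ ket0                                 # amplitudes ~(0.707, 0.707)

probabilities = np.abs(qubit) ** 2               # Born rule: |amplitude|^2
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities)                             # [0.5 0.5]
print(np.bincount(samples))                      # roughly 500 of each
```

The catch is that simulating n qubits this way requires tracking 2^n amplitudes, which quickly overwhelms any classical machine – and that exponential blow-up is precisely the headroom quantum hardware promises to exploit.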

Quantum computing would rapidly improve our ability to build deep neural networks and accelerate the development of artificial intelligence. However, quantum computers may become so powerful that they can easily crack even the strongest encryption and security systems. Parallel to the development of quantum computing, society needs to invest in new cyber-security techniques that are complementary to quantum computing, not made obsolete by it.


I encourage everyone, no matter what job they have, what they enjoy doing, or how they interact with other people in the world, to ask: how could some of the tasks that I do be automated? Try to imagine what it would take to automate each task and how the analytical system would be structured. Then think about what value you can add as an individual so that you remain necessary even after a human is no longer needed to perform the task. Also consider how society needs to prepare and train its members so that the most people benefit from the advantages of AI and the fewest people are left behind. That is likely the true message that Elon Musk is urging our policy makers to hear, and I hope that they hear it.