Blog Archives

… Creative Economy

This is NOT an article about how AI, automation, and robotics are coming for non-knowledge jobs. That is happening, but this is an article about how AI is coming for traditional knowledge economy jobs too, and how it will change our economy and society, and I think for the better!

A few days ago, I was watching NCIS with my mom (it’s always on some channel). As per usual, a clue came in, and with a few tips and taps on a computer they had traced it back to its source, cross-referenced it with a database, and sent the results to the field agents’ phones. In all, the scene lasted about 30 seconds. My mom said, “How can they do that so fast and by only typing? It takes me 20 minutes just to remember my password.”


NCIS is dramatized television. There are very few, if any, people or organizations with that level of computing sophistication and coding skill. However, it’s close enough to how we think computers work to be believable. More remarkably, we’ve thought we were on the cusp of this level of computational sophistication for nearly 20 years. I remember watching 24 with my dad in the early 2000s, and very similar tip-tap-success was happening in that show. Yet we all know, from the sheer fact of our daily lives, that working with digital information is cumbersome, time consuming, and does not always end in success.

Societally, we’ve convinced ourselves that we are living at the leading edge, if not the pinnacle, of the Digital Revolution. The advent of AI is just around the corner, and our 40+ years of digitization are poised to pay off in more leisure and in easier, more accurate computing for all of us. On the contrary, I contend that we are merely at the beginning of the Digital Revolution, and there are still many years of work ahead of us before we can enjoy the tip-tap-success we see on television.

Data remain very compartmentalized. Throughout the digital age, companies, governments, and other entities created databases, data protocols, and computing and data languages ad hoc. Even within large organizations, different databases exist to house purportedly the same data, and sometimes these databases contradict each other. Furthermore, data are often user generated, so discrepancies propagate over time. Remember when they rolled out the electronic medical record (EMR) at your office and you could not find the field for pulse until someone told you to look for “heart rate”? And is the accounting system in dollars or in thousands of dollars? These are the discrepancies that a real-life NCIS confronts when it performs data analysis, and it takes far more time than we seem to think, often for less-than-clear results.
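To make this concrete, here is a minimal Python sketch of the reconciliation chores this kind of balkanization creates; the column names and figures are invented for illustration.

```python
import pandas as pd

# Two systems that purportedly hold the same vital sign (invented data).
emr_a = pd.DataFrame({"patient_id": [1, 2], "pulse": [72, 85]})
emr_b = pd.DataFrame({"patient_id": [1, 2], "heart_rate": [72, 88]})

# Step 1: map differently named fields onto a common schema.
emr_b = emr_b.rename(columns={"heart_rate": "pulse"})

# Step 2: flag records where the "same" data disagree.
merged = emr_a.merge(emr_b, on="patient_id", suffixes=("_sys_a", "_sys_b"))
print(merged[merged["pulse_sys_a"] != merged["pulse_sys_b"]])

# Step 3: units drift too -- one ledger in dollars, another in thousands.
revenue_dollars = pd.Series([1_250_000, 980_000])
revenue_thousands = pd.Series([1_250, 980])
assert (revenue_thousands * 1_000).tolist() == revenue_dollars.tolist()
```

None of this is hard, but multiply it across every field, every system, and every department, and it is where most of the analysis time actually goes.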

A few weeks ago, I met my friend at a food hall in Midtown. I couldn’t help but look around and try to imagine what everyone did for work. Most patrons were young workers in business casual. Being Midtown, I imagined a lot of bank and finance workers, with a smattering of consultants, business people, and people in media and publishing. I know how they spend their days. I used to be a finance consultant. They spend all day pulling together data from disparate sources and collating them into something that their superiors can use to make decisions. “Where’s the data?” “Who has it?” “Is it any good?” These were my daily lines. Most work time for these “Midtowners” is spent replicating data, models, and results. Much less time is spent deciding what they make of results and numbers. I had entire projects where I figured out how the data came together and simply documented the process. Despite all of our advanced statistics and calculus classes, most people in these “Midtown” jobs are just performing basic arithmetic, if that.

Food Hall

 

However, there is reason to believe that the Digital Revolution will soon be accelerating. Emerging innovations like blockchain and the internet of things (IoT) are streamlining the collection, storage, and sharing of data. The rate at which we generate data is accelerating, so having clear protocols for the sharing of data is key if we want to continue to move up the digital curve. If we continue to generate astonishing amounts of data but do nothing about their balkanization, then making connections between data – the tip-tap-success we see on NCIS and 24 – will become more and more difficult, not easier, as we often assume is the default for digital processes.

Over time, as fiefdoms of data come crashing down and the Digital Revolution truly does bring us closer to tip-tap-success, all of these Midtowners in clerical and finance roles will find themselves with a lot of free time on their hands (so will the consulting firms). Banks will finally be able to cut loose the throngs of highly paid workers who spend their days knee deep in Excel, jockeying numbers for the few actual managers in firms who make decisions. Managers will be able to easily retrieve <tip> the data they need, perform some manipulations as they see fit <tap>, and then make decisions based on their results <success>.

This Digital Revolution is a necessary prerequisite for the full advent of AI. Data are the fuel for AI. Machine learning algorithms require vast quantities of data, preferably data that update in real time, so the algorithms can truly learn and improve upon themselves. As it stands now, the world’s data are too balkanized for machine learning algorithms to pull them in and turn them into the true putty that will lead to cognition-level algorithms. When that changes, however, it’s not only the Midtowners who need to worry about their jobs. Managers, the ones who actually make decisions of import, will begin to see their judgement challenged by algorithms. When there is little uncertainty about what has transpired in the past and what the forecast prognosticates for the future, there is little room for what we now think of as managerial judgement in decision making.

When I discuss this future with business people, they see it as a hard pill to swallow. This is a natural response, but I’m apt to point out that there are excellent companies already working on AI for managerial decision making. As consumers, we are most familiar with Alexa or Google Home as voice-enabled personal digital assistants. However, Salesforce has Einstein, which helps sales and marketing teams with routine tasks. They’re already working on more advanced business applications for Einstein, and before long you’ll be able to ask Einstein, “Should we acquire a company or build a new capability in-house?” We are taught analytical frameworks to solve these questions in business school, so once we have the requisite data packaged into something that a machine can consume, why couldn’t, and why shouldn’t, the machine answer the question for us (or even alert us to what questions we ought to be asking)?

[Salesforce CEO] Benioff even told analysts on a quarterly earnings call that he uses Einstein at weekly executive meetings to forecast results and settle arguments: “I will literally turn to Einstein in the meeting and say, ‘OK, Einstein, you’ve heard all of this, now what do you think?’ And Einstein will give me the over and under on the quarter and show me where we’re strong and where we’re weak, and sometimes it will point out a specific executive, which it has done in the last three quarters, and say that this executive is somebody who needs specific attention.”

 – Wired

I am not saying that the clerical workers of today are overpaid data jockeys not worth their weight in avocado toast. Nor am I criticizing their managers for hiring them and needing their assistance. I recall a particularly large project from my old consulting firm. It required an army of fresh-out-of-college consultants to comb through loan files and flag missing documents and other discrepancies. The work was tedious, but it required attention, occasional analytics, and downright intelligence. The young consultants did not find it particularly rewarding, only repetitive, and the bank certainly did not want to be paying millions of dollars for error identification. Nevertheless, we still live at the dawn of the Digital Revolution, and this work was a necessary evil for everyone involved in the project. With AI far more popularized now than it was only ten years ago, nearly everyone can see its promise for automating audit work like that. Yet it is still not a reality.

When the Digital Revolution does usher in true machine-powered cognition, I foresee banks, investment houses, insurance companies, and trading businesses, just to start, operating drastically differently from what we are used to today. Midtown will be cleared out – both the food halls and the corner suites. A few managers will rely on AI for most decision making, and the remaining workers will be more creative in nature, delving into new and emerging business models, or possibly still toiling in the age-old task of sales (with Einstein’s help, of course).

I hardly see this as apocalyptic for our knowledge economy. Yes, Midtown will be desolate, but Brooklyn will be bustling. Suit peddlers will be out of business, but hipster boutiques will be teeming. The advent of AI will be the advent of what I call the Creative Economy. Today, the creative economy is the corner of our economy focused on arts and leisure, design, media and entertainment, performing arts, fashion, and a smattering of other cottage industries.

Although people will lose jobs (or fewer new jobs will be created in the knowledge sector), our economy will be operating more efficiently. This relieves pressure on prices and leaves employed people with more disposable income. As an economy we can then deploy this disposable income into new interests, hobbies, passions, and arts. With more of the world’s most intelligent people free to devote themselves to their passions and leisure, there will be an explosion in creativity and creativity-as-commerce. Rather than focusing on high paying jobs devoid of meaning (if anyone who spends their whole day collating data says they “love their job,” they are lying), far more of our collective intellect can be dedicated to creative pursuits. We can create more content of an intellectual nature and consume more downright leisure.

I see four creative sectors staking claims for themselves and growing rapidly alongside AI:

  • Pure Leisure & Arts
  • Digital Arts
  • Creative Enablement
  • Physical-Digital Interaction

Pure Leisure & Arts

We already consider leisure & arts to be virtuous pursuits, although ones that relegate all but the luckiest among us to the role of perennial starving artist. These arts include writing, painting and drawing, film-making and acting, music, dance, fashion, gastronomy, architecture, and other forms of literature, performing arts, and visual arts. With more time left to pursue the consumption of leisure or the practice of these arts, the traditional arts will proliferate.

Digital Arts

Digital arts will be one of the fastest growing new creative pursuits. With more immersion in the form of augmented reality and virtual reality (AR/VR), there will be immense demand for graphic design, 3D design, animation, and VR environment design.

Creative Enablement

 


All of this art begs for software in which it can be designed, rendered, mixed, shared, and experienced. Today we have a knowledge economy, and Microsoft, along with the likes of SAP and Oracle, dominates knowledge software, so these are some of the largest companies in the world. In the future, the creative economy’s software makers will be among the largest companies in the world. Design software from companies like Autodesk and Adobe will dominate our daily lives, and those companies will be vaulted into the Dow 30. There are even companies merging many technologies, from teleconferencing and virtual reality to design and architecture. They are creating software that will allow remote teams to interact in a virtual reality environment and collaborate on design and creation in real time. Imagine a team of architects, all over the world, being able to virtually fly around the buildings they are designing and make changes together based on each other’s comments.

Physical-Digital Interaction

We will continue to live in a physical world (I am not predicting The Matrix), and manufacturing, engineering, medicine, and other physical sciences and fields will continue to be of the utmost importance. While these fields are less creative in nature, companies that bridge the physical-digital divide and allow AI and automation to assist in them will be extremely valuable. Importantly, they will continue to fuel the creative economy by freeing workers from tasks that can be performed by computer and machine, allowing them to contribute more fully to the creative economy.


3D Printer in action

The shift from a knowledge economy to a creative economy will have to be supported by the educational system. Training for trade and business will diminish. Instead, there will be more learning how to learn. Liberal arts will flourish, alongside an emphasis on mathematics and statistics, engineering, biology and medicine, and the hard sciences. Coding, which is already moving into the mainstream of education, will gain even more importance, and the humanities and the arts will once again be respected and valued fields of study. Education will also be prolonged and emphasized throughout one’s life, not just at its beginning, and more of the economy will be devoted to it. The creative economy will also reinforce the education sector by more effectively immersing learners in their education and creating new and innovative ways to learn. If we can align our education system with the promises of the future and coordinate our data protocols for our collective well-being, the future will be bright, colorful, and fun, filled with enjoyable work and pleasurable leisurely pursuits.

 

… Artificial Intelligence

Over the last year, artificial intelligence (AI) has become nearly ubiquitous in the news. Just recently, Elon Musk called it a threat to human civilization. His warnings have been the direst, but many other people think that AI has the potential to replace billions of human jobs and that we need to adapt now to prevent mass unemployment.

This represents a naïve view of capitalism, but one that is increasingly popular with politicians, pundits, and people who listen to them. Jobs may certainly be cut, but it is more likely that new jobs, in the traditional sense, simply will not be created. Companies will reduce labor costs across the board, leaving more profits for business owners and their remaining knowledge workers. Prices for goods and services that involve automated labor will also come down, relative to all other prices. The result will be more discretionary income, and where we choose to spend it will determine in what sectors new jobs will be created. Certainly, there will be people left behind, at least temporarily. Society may need to step in and assist those people. However, in the long-term, so long as workers have the necessary knowledge skills to manage AI, automation, and other technologies, the economy will benefit, and not be harmed, by the AI-age.

As more and more work becomes automated, there will certainly be less work to be done, in the aggregate, by humans. There is always the opportunity for new work to emerge – work that does not exist today and work that we have not yet conceived of as being possible, necessary, or important. However, this work, too, may eventually be automated. Some may say that there will always be work for humans in managing the automation – repairing robots and writing code for the automation software, to begin with. I see no reason to think that this cannot be automated either.

As a result of this ubiquitous automation, there may be no jobs left for humans at all, sometime in the future. People fear that we would be left with artificial/robotic economic overlords. I also think that this is a naïve understanding of the economy. In fact, I think that the AI-age could also be a post-capitalism age. People would work less and the work left to us would be judgement based. How do we apportion the food that the robots are cultivating? Who should have the rights to exploit minerals that we can mine from the Earth (and asteroids!), since nearly everyone would have limitless abilities to produce with those metals and minerals? I doubt that we would want to automate the answers to these types of important questions. Even if we did want to, it would not be wise, because the ability to think critically would then be diminished worldwide and not be passed down to future generations. In some respects, everyone in the post-capitalism age would be one of Plato’s philosopher kings. We could also dedicate more work-time to art and creation, as well as its consumption.

AI isn’t just ubiquitous in the news anymore. It has become increasingly common in our homes and in our pockets. Chat bots, digital personal assistants, and home devices like Amazon Echo’s Alexa, Siri, and Google Now are all examples of artificial intelligence. My phone is always trying to guess where I am and when I should leave for events. That’s AI in my pocket (I’ve actually been meaning to turn that off, since I don’t have a car).

As AI proliferates, so too does the vocabulary we use to talk about it. Along with AI, people mention machine learning, deep learning, and cognitive computing. In general, it seems to me that AI is an umbrella term that encompasses all of these techniques. In popular terms, AI refers to consumer applications where a computer emulates activities that we would typically conduct with another person. Think of talking with Alexa as a prime example. Getting down-to-the-minute weather predictions from an app, rather than a meteorologist, is another good example. In more technical terms, AI refers to all applications in which a computer does what used to be restricted to the domain of a biological brain: sensing and cataloguing information, processing and analyzing it, and using that synthesized information to recognize patterns, make predictions, and make decisions.

Ex Machina

Machine Learning

Consider a smart watch or wrist band that records the time its user wakes up every morning for an entire month. After collecting that data for a month, it calculates an average weekday wake-up time and sets an alarm automatically. On three of the following five weekday mornings the user snoozed the alarm for ten minutes, and on the other two mornings the user got up as soon as the alarm went off. Using this new information, the wearable revises the wake-up time to be slightly later, and thereafter it continues to monitor and revise the wake-up time according to the user’s actual behavior.

This is an example of machine learning. Without any user input, the machine makes inferences, assesses their veracity, and iterates accordingly. However, it’s quite rudimentary. The techniques used are fairly basic, and the result was not something that the human mind could not have arrived at on its own.
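For the code-minded, here is a minimal Python sketch of that feedback loop; the wake-up times, snooze behavior, and learning rate are all invented for illustration.

```python
from statistics import mean

# A month of logged weekday wake-up times, in minutes after midnight (invented).
observed_wakeups = [418, 421, 415, 420, 417] * 4    # roughly 7:00 am

alarm = mean(observed_wakeups)                      # initial alarm = the average

# Feedback from the next five mornings: minutes snoozed (0 = got up immediately).
snooze_minutes = [10, 0, 10, 0, 10]                 # snoozed 3 of 5 mornings

# Nudge the alarm later in proportion to the observed snoozing.
learning_rate = 0.5
for extra in snooze_minutes:
    alarm += learning_rate * extra

print(f"Revised alarm: {int(alarm) // 60:02d}:{int(alarm) % 60:02d}")
```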

A more complex application would involve inferring where the user works based on normal daily travel patterns (unless you have turned it off, your smartphone is probably already transmitting this information), and then analyzing traffic on the roadways and the activities of other users to automatically set the user’s alarm so that they arrive at work (or school, or the gym, etc.) at their preferred time. By relying on more information for decision making, the analysis techniques become more complex and begin to resemble artificial intelligence.

More important to understand than the capabilities of machine learning is that its approach to information analysis is vastly different from traditional analytical decision making. For instance, a financial institution can feed a computer vast quantities of information on borrowers and their loan performance history. A machine learning program could then process all of this information and determine which variables best correlate with loan performance. Traditionally, a bank would apply financial and economic theory to create credit models and then test the models, altering them to find the best fit. The machine learning approach relies on a completely different paradigm. Rather than approaching the problem with a basis of assumptions, using machine learning implies ignorance, or at the very least an openness to unanticipated patterns and relationships. Machine learning tests all possible relationships and patterns and makes the best predictions, even if they go against our intuitions. Industries and practitioners that are not accustomed to this approach or unwilling to appreciate its merits may soon find themselves outpaced and outperformed by more machine-savvy competitors.
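Here is a minimal sketch of that paradigm on synthetic data with scikit-learn (the features, the data-generating process, and the model choice are my own invention, not any bank’s actual method): the analyst supplies borrower attributes and outcomes, and the model is left to discover which attributes predict default.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic loan book: credit score, debt-to-income ratio, years at address.
n = 5_000
X = np.column_stack([
    rng.normal(650, 50, n),
    rng.uniform(0.1, 0.6, n),
    rng.integers(0, 30, n).astype(float),
])

# Hidden "true" default process -- unknown to the analyst and to the model.
logits = -0.01 * (X[:, 0] - 650) + 6.0 * (X[:, 1] - 0.35)
defaulted = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, defaulted, random_state=0)

# No economic theory supplied: the model weights whatever predicts default.
model = LogisticRegression(max_iter=5_000).fit(X_train, y_train)
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
print("Learned weights (score, DTI, tenure):", model.coef_.round(3))
```

In the traditional approach, the analyst would instead write down the model form from credit theory first and only then fit it to the data.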

Deep Learning

Deep learning is an even more sophisticated form of machine learning. It employs a data analysis technique called neural networks (or neural nets) to identify relationships between data. The technique is referred to as a neural net because its structure resembles the structure of neurons in the brain.

Here is a YouTube video that does a fairly good job of explaining the technique in a short amount of time:

https://www.youtube.com/watch?v=i6ECFrV_BVA

Simple!
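For readers who prefer code to video, here is a minimal from-scratch sketch of the idea: a two-layer neural network trained by backpropagation on the toy XOR problem. Real deep learning systems have many more layers and millions of weights, but the mechanics are the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR, which a single linear layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 "neurons": input -> hidden -> output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: signals flow through the layers.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: push the prediction error back through the network.
    d_out = (out - y) * out * (1 - out)
    d_hidden = d_out @ W2.T * hidden * (1 - hidden)

    # Gradient-descent updates to the weights and biases.
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] after training
```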

Deep learning is powerful enough to accomplish advanced pattern recognition – pattern recognition that can be deployed in situations ranging from understanding what is happening on city streets and high-speed highways (self-driving vehicles) to learning what different types of animals look like and then making drawings of them. I can imagine a deep learning application that is fed many thousands of oncological images and trains itself to identify cancer. As doctors confirm or reject the program’s conclusions, it would store this feedback and refine its own predictions. Eventually, the program would become more accurate than doctors and radiologists.

This extreme accuracy is what has people such as Elon Musk concerned about artificial intelligence. Will we need doctors if algorithms are better at their work than the doctors themselves?

Cognitive Computing

Cognitive computing is a term I have been hearing less and less; artificial intelligence seems to have become the preferred buzzword. However, I think cognitive computing retains a unique definition and is useful for understanding many technologies. Generally, cognitive computing refers to computing processes that are designed to emulate how humans process information and think. Watson is the most famous cognitive computer, and its name and its promoted abilities all seem to allude to a human mind.

One of Watson’s abilities is natural language processing. Rather than having to be fed data in a neat spreadsheet or form, Watson can consume unstructured data, make heads or tails of them, and then process them. In business school, a common assignment is creating a pro forma financial statement from a professor’s written explanation of a company’s financial condition. It’s fairly rote and mechanical: students have to translate the explanation of the finances into a spreadsheet, which then does the mathematical processing. Cognitive computing skips that translation step. It can understand the natural language explanation of the company’s finances and directly make the necessary computations for the pro forma.
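As a deliberately toy illustration of the idea, here is a Python sketch that pulls figures out of a plain-English memo with regular expressions and builds a tiny pro forma directly from them. Watson and similar systems use far richer natural language models than this, and the memo and numbers are invented, but the point is the same: straight from prose to computation.

```python
import re

# An invented natural-language description of a company's finances.
memo = ("Last year the company booked revenue of $120 million with a gross "
        "margin of 40 percent, and management expects revenue to grow "
        "10 percent per year for the next three years.")

revenue = float(re.search(r"revenue of \$(\d+(?:\.\d+)?) million", memo).group(1))
margin = float(re.search(r"gross margin of (\d+(?:\.\d+)?) percent", memo).group(1)) / 100
growth = float(re.search(r"grow (\d+(?:\.\d+)?) percent", memo).group(1)) / 100
horizon = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}[
    re.search(r"next (\w+) years", memo).group(1)]

# Straight from prose to projections, with no spreadsheet in between.
for year in range(1, horizon + 1):
    revenue *= 1 + growth
    print(f"Year {year}: revenue ${revenue:,.1f}M, gross profit ${revenue * margin:,.1f}M")
```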

In fact, it seems that Goldman Sachs and other investment banks are doing just that. They’ve been announcing more and more investments in AI along with reductions in the sizes of their M&A teams over the last few years. Goldman Sachs may have gone the furthest. Their CEO has declared that Goldman is really just a “tech” company, and the former Chief Information Officer is now the CFO of the company.

Machine-powered gaming is also a natural application of cognitive computing, because it pits computer cognition directly against human cognition. AI watchers were stunned early this year when a Google-designed machine was able to defeat a Go master. Go is an ancient Chinese game that is strategically very complex. For those of us of the Indo-European persuasion, Go is more difficult and complex than chess.

Quantum Computing

One of the challenges with artificial intelligence is that conventional super-computers do not possess enough processing power to crunch through all the nodes in deep neural networks fast enough. Physicists and computer engineers are working on a solution known as quantum computing. Traditional computers store information in bits, each of which represents either a 1 or a 0. Using the quantum physics concept of superposition, however, a quantum bit, or qubit, can exist in both states at once, and a system of n qubits can represent a superposition of 2^n states. If engineers can build stable computers that harness qubits, computing power for certain problems will increase exponentially.
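A minimal numpy sketch of the idea (a classical simulation of a single qubit, not a real quantum computer) shows both the superposition and why the state space grows so quickly:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is a pair of complex amplitudes;
# the squared magnitudes give the probabilities of measuring 0 or 1.
zero = np.array([1, 0], dtype=complex)               # the |0> state

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = hadamard @ zero
print(np.abs(superposed) ** 2)                        # [0.5 0.5]

# Describing n qubits classically takes 2**n amplitudes -- the exponential
# growth that makes quantum hardware attractive (and hard to simulate).
for n in (1, 10, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```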

Here is a good explanation of the concept and recent developments in the field.

Quantum computing would rapidly improve our ability to build deep neural networks and accelerate the development of artificial intelligence. However, quantum computers will be so powerful that they may be able to easily crack even the most powerful encryption and security systems in use today. Parallel to the development of quantum computing, society needs to invest in new cyber-security techniques that are complementary to quantum computing, not made obsolete by it.

 

I encourage everyone, no matter what job they have, what they enjoy doing, or how they interact with other people in the world, to consider: how could some of the tasks that I do be automated? Try to imagine what it would take to automate the task and how the analytical system would be structured. Then think about what value you can add as an individual, so that you remain necessary even after a human is no longer performing the task. Also consider how society needs to prepare and train its members so that the most people benefit from the advantages of AI and the fewest people are left behind. That is likely the true message that Elon Musk is urging our policy makers to hear, and I hope that they hear it.
