Category Archives: technology

… Creative Economy

This is NOT an article about how AI, automation, and robotics are coming for non-knowledge jobs. That is happening, but this article is about how AI is coming for traditional knowledge-economy jobs too, how that will change our economy and society, and why I think the change will be for the better!

A few days ago, I was watching NCIS with my mom (it’s always on some channel). As per usual, a clue came in, and with a few tips and taps on a computer they had traced it back to its source, cross-referenced it with a database, and sent the results to the field agents’ phones. In all, the scene lasted about 30 seconds. My mom said, “How can they do that so fast and by only typing? It takes me 20 minutes just to remember my password.”


NCIS is dramatized television. There are very few, if any, people or organizations with that level of computing sophistication and coding skill. However, it’s close enough to how we think computers work to be believable. More remarkably, we’ve thought we were at the cusp of this level of computational sophistication for nearly 20 years. I remember watching 24 with my dad in the early 2000s, and very similar tip-tap-success was going on in that show back then. Yet we all know, by the sheer fact of our daily lives, that working with digital information is cumbersome, time-consuming, and does not always end in success.

Societally, we’ve convinced ourselves that we are living at the leading edge, if not the pinnacle, of the Digital Revolution. The advent of AI is just around the corner, and our 40+ years of digitization are poised to pay off in more leisure and easier, more accurate computing for all of us. On the contrary, I contend that we are merely at the beginning of the Digital Revolution, and there are still many years of work ahead of us before we can enjoy the tip-tap-success that we see on television.

Data remain very compartmentalized. Throughout the digital age, companies, governments, and other entities created databases, data protocols, and computing and data languages ad hoc. Even within large organizations, different databases exist to house purportedly the same data, and sometimes these databases contradict each other. Furthermore, data are often user generated, so discrepancies propagate over time. Remember when they rolled out the electronic medical record (EMR) at your office and you could not find the field for pulse until someone told you to look for “heart rate”? And is the accounting system in dollars or thousands of dollars? These are the discrepancies that a real-life NCIS confronts when it performs data analysis, and resolving them takes far more time than we seem to think, often for less-than-clear results.
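To make the problem concrete, here is a minimal sketch (in Python) of the kind of reconciliation work that eats up so much of this time. The field names, systems, and figures are all hypothetical; the pattern – mapping aliases to one canonical name and putting figures into the same units before anything can be compared – is the unglamorous reality behind the on-screen tip-tap-success.

```python
# Hypothetical illustration: two systems that store "the same" data under
# different field names and units. None of these names come from a real system.

FIELD_ALIASES = {
    "pulse": "heart_rate",       # EMR A calls it "pulse"
    "heart rate": "heart_rate",  # EMR B calls it "heart rate"
}

def normalize_record(record: dict, amounts_in_thousands: bool = False) -> dict:
    """Map aliased field names to one canonical name and express money in dollars."""
    clean = {}
    for field, value in record.items():
        key = field.strip().lower()
        clean[FIELD_ALIASES.get(key, key)] = value
    # One accounting system reports in dollars, the other in thousands of dollars.
    if amounts_in_thousands and "revenue" in clean:
        clean["revenue"] *= 1_000
    return clean

system_a = {"Pulse": 72, "revenue": 1_250_000}   # dollars
system_b = {"Heart Rate": 75, "revenue": 1_250}  # thousands of dollars

print(normalize_record(system_a))
print(normalize_record(system_b, amounts_in_thousands=True))
# Even after normalizing, the sources still disagree (72 vs. 75 bpm) - the kind
# of discrepancy someone still has to chase down by hand today.
```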

A few weeks ago, I met my friend at a food hall in Midtown. I couldn’t help but look around and try to imagine what everyone did for work. Most patrons were young workers in business casual. Being Midtown, I imagined a lot of bank and finance workers, with a smattering of consultants, business people, and people in media and publishing. I know how they spend their days. I used to be a finance consultant. They spend all day pulling together data from disparate sources and collating them into something that their superiors can use to make decisions. “Where’s the data?” “Who has it?” “Is it any good?” These were my daily lines. Most work time for these “Midtowners” is spent replicating data, models, and results. Much less time is spent deciding what they make of results and numbers. I had entire projects where I figured out how the data came together and simply documented the process. Despite all of our advanced statistics and calculus classes, most people in these “Midtown” jobs are just performing basic arithmetic, if that.

Food Hall

 

However, there is reason to believe that the Digital Revolution will soon be accelerating. Emerging innovations like blockchain and the internet of things (IoT) are streamlining the collection, storage, and sharing of data. The rate at which we generate data is accelerating, so having clear protocols for the sharing of data is key if we want to continue to move up the digital curve. If we continue to generate astonishing amounts of data but do nothing about their balkanization, then making connections between data – the tip-tap-success we see on NCIS and 24 – will become more and more difficult, not easier, as we tend to assume by default for anything digital.

Over time, as fiefdoms of data come crashing down and the Digital Revolution truly does bring us closer to tip-tap-success, all of these Midtowners in clerical and finance roles will find themselves with a lot of free time on their hands (so will the consulting firms). Banks will finally be able to cut loose the throngs of highly paid workers who spend their days knee deep in Excel, jockeying numbers for the few actual managers in firms who make decisions. Managers will be able to easily retrieve <tip> the data they need, perform some manipulations as they see fit <tap>, and then make decisions based on their results <success>.

This Digital Revolution is a necessary prerequisite for the full advent of AI. Data are the fuel for AI. Machine learning algorithms require vast quantities of data, and preferably data that update in real time, so the algorithms can truly learn and improve upon themselves. As it stands now, all of the world’s data are too balkanized for machine learning algorithms to pull them in and turn them into the true putty that will lead to cognition-level algorithms. Once that changes, however, it’s not only the Midtowners that need to worry about their jobs. Managers – the ones who actually make decisions of import – will begin to see their judgment challenged by algorithms. When there is little uncertainty in what has transpired in the past and what the forecast prognosticates for the future, there is little room for what we now think of as managerial judgment in decision making.

When I discuss this future with business people, they see it as a hard pill to swallow. This is a natural response, but I’m apt to point out that there are excellent companies already working on AI for managerial decision making. As consumers, we are most familiar with Alexa or Google Home as voice-enabled personal digital assistants. However, Salesforce has Einstein, which helps sales and marketing teams with routine tasks. They’re already working on more advanced business applications for Einstein, and before long you’ll be able to ask Einstein, “Should we acquire a company or build a new capability in-house?” We are taught analytical frameworks to solve these questions in business school, so once we have the requisite data packaged into something that a machine can consume, why couldn’t, and why shouldn’t, the machine answer the question for us (or even alert us to what questions we ought to be asking)?

[Salesforce CEO] Benioff even told analysts on a quarterly earnings call that he uses Einstein at weekly executive meetings to forecast results and settle arguments: “I will literally turn to Einstein in the meeting and say, ‘OK, Einstein, you’ve heard all of this, now what do you think?’ And Einstein will give me the over and under on the quarter and show me where we’re strong and where we’re weak, and sometimes it will point out a specific executive, which it has done in the last three quarters, and say that this executive is somebody who needs specific attention.”

 – Wired

I am not saying that the clerical workers of today are overpaid data jockeys not worth their weight in avocado toast. Nor am I criticizing their managers for hiring them and needing their assistance. I recall a particularly large project from my old consulting firm. It required an army of fresh-out-of-college consultants to comb through loan files and flag missing documents and other discrepancies. The work was tedious, but it required attention, occasional analytics, and downright intelligence. The young consultants did not find it particularly rewarding, only repetitive, and the bank certainly did not want to be paying millions of dollars for error identification. Nevertheless, we still live at the dawn of the Digital Revolution, and this work was a necessary evil for everyone involved in the project. With AI far more popularized now than it was only ten years ago, nearly everyone can see the promise of AI in automating audit work like that. Yet it is still not a reality.

Salesforce Einstein

When the Digital Revolution does usher in true machine-powered cognition, I foresee banks, investment houses, insurance companies, and trading businesses, just to start, operating drastically differently from what we are used to today. Midtown will be cleared out – both the food halls and the corner suites. A few managers will rely on AI for most decision making, and the remaining workers will be more creative in nature, delving into new and emerging business models, or possibly still toiling in the age-old task of sales (with Einstein’s help, of course).

I hardly see this as apocalyptic for our knowledge economy. Yes, Midtown will be desolate, but Brooklyn will be bustling. Suit peddlers will be out of business, but hipster boutiques will be teeming. The advent of AI will be the advent of what I call the Creative Economy. Today, the creative economy is the corner of our economy focused on arts and leisure, design, media and entertainment, performing arts, fashion, and a smattering of other cottage industries.

Although people will lose jobs (or fewer new jobs will be created in the knowledge sector), our economy will be operating more efficiently. This relieves pressure on prices and leaves employed people with more disposable income. As an economy we can then deploy this disposable income into new interests, hobbies, passions, and arts. With more of the world’s most intelligent people free to devote themselves to their passions and leisure, there will be an explosion in creativity and creativity-as-commerce. Rather than focusing on high paying jobs devoid of meaning (if anyone who spends their whole day collating data says they “love their job,” they are lying), far more of our collective intellect can be dedicated to creative pursuits. We can create more content of an intellectual nature and consume more downright leisure.

I see four creative sectors staking claims for themselves and growing rapidly alongside AI:

  • Pure Leisure & Arts
  • Digital Arts
  • Creative Enablement
  • Physical-Digital Interaction

Pure Leisure & Arts

We already consider leisure and the arts to be virtuous pursuits, although ones that relegate all but the luckiest among us to the status of perennial starving artist. These arts include writing, painting and drawing, film-making and acting, music, dance, fashion, gastronomy, architecture, other forms of literature, performing arts, and visual arts. With more time left to pursue the consumption of leisure or the practice of these arts, the traditional arts will proliferate.

Digital Arts

Digital arts will be one of the fastest growing new creative pursuits. With more immersion in the form of augmented reality and virtual reality (AR/VR), there will be immense demand for graphic design, 3D design, animation, and VR environment design.

Creative Enablement

 

Autodesk Fusion 360 logo

All of this art begs for software in which it can be designed, rendered, mixed, shared, and experienced. Today we have a knowledge economy, and Microsoft, along with the likes of SAP and Oracle, dominates knowledge software, so they are among the largest companies in the world. In the future, the creative economy software makers will be among the largest companies in the world. Design software by companies like Autodesk and Adobe will dominate our daily lives, and those companies will be vaulted into the Dow 30. There are even companies merging many technologies, from teleconferencing and virtual reality to design and architecture. They are creating software that will allow remote teams to interact in a virtual reality environment and collaborate on design and creation in real time. Imagine a team of architects, spread all over the world, being able to virtually fly around the buildings they are designing and make changes together based on each other’s comments.

Physical-Digital Interaction

We will continue to live in a physical world (I am not predicting The Matrix), and manufacturing, engineering, medicine, and other physical sciences and fields will continue to be of the utmost importance. While less creative in nature, companies that bridge the physical-digital divide and allow AI and automation to assist in these fields will be extremely valuable. Importantly, they will continue to fuel the creative economy by freeing workers from tasks that can be performed by computer and machine, allowing them to more freely contribute to the creative economy.


3D Printer in action

The shift from a knowledge economy to a creative economy will have to be supported by the educational system. Training for trade and business will diminish. Instead, there will be more learning how to learn. Liberal arts will flourish, alongside an emphasis on mathematics and statistics, engineering, biology and medicine, and the hard sciences. Coding, which is already moving into the mainstream of education, will gain even more importance, and the humanities and the arts will once again be respected and valued fields of study. Education will also be prolonged and emphasized throughout one’s life, not just at its beginning, and there will be more economic emphasis on education. The creative economy will also reinforce the education sector by immersing learners more effectively in their education and by creating new and innovative ways to learn. If we can align our education system with the promises of the future and coordinate our data protocols for our collective well-being, the future will be bright, colorful, and fun, filled with enjoyable work and pleasurable leisurely pursuits.

 


… Immersive Visualization

This article is part of a series, The Seven Innovations That Will Change the World.

We are currently experiencing an interesting divergence. On one hand, our interactions with the digital world are intermediated less and less by physical screens – the televisions, computer monitors, smartphones, and tablets that we traditionally associate with the digital. These days, more and more, we can simply talk to our devices in order to interact with them. Amazon Echo and its disembodied persona, Alexa, is a good example, but I’m equally astounded by my new noise-cancelling headphones (my mom bullied me into buying them after a recent bad experience on a flight had me praying for an exorcist to be on the plane). I can control a lot of my phone’s functionality by speaking to the headphones or even just touching them. Sonos, the wireless speaker company, even has a name for this concept: sonic internet or sonic culture.

On the other hand, our consumption of media and entertainment, particularly television and film, is increasing, and these media are necessarily consumed through glass. American adults watched, on average, 4 hours and 46 minutes of television per day (up 21 minutes from six months earlier) and 25 minutes of video on phones, tablets, and computers (Wired, Nielsen). These figures do not include movies, streaming services like Netflix, or other non-conventional sources of media (like watching news or entertainment videos over social media sites).

What’s curious about this divergence is that even though our digital lives are being disintermediated from the screen, the media and entertainment that we consume are becoming more and more digital! Streaming services are becoming more popular while many households are cutting the cord and not bothering to pay for their age-old analog television service anymore. The quality and quantity of programming have increased, and we have more choice over how we acquire this content. Many people are finding that they do not have to be beholden to traditional television service providers in order to see great programming.

The digitization of entertainment goes beyond streaming services, Chromecast dongles, and Rokus. Gaming, which we typically associate with adolescent boys in their basements, is becoming a more pervasive form of entertainment. Even more so, e-sports have made gaming a spectator experience, with the associated apps and streaming services to watch them online. Game streaming generated $4.6 billion in revenue in 2017 (Motley Fool). Amazon’s service, Twitch, dominates the market with 665 million viewers – more than HBO, Netflix, and ESPN combined. In 2014, Twitch was the fourth largest source of internet traffic, behind Netflix, Google, and Apple (Wall Street Journal). However, Google, Twitter, Microsoft, and Facebook are all nipping at Amazon’s heels and offering their own services to gamers and e-sports viewers. Facebook sells the virtual reality (VR) headset Oculus and hopes that 360° e-sports streaming will give it an edge over the competition and promote sales of Oculus (Wired).


E-Sports

Gaming itself has come a long way from the era of the console wars between Nintendo, Sony, and later Microsoft. Gamers can now play on computers, smartphones, and tablets. The most popular game in the world right now, Fortnite Battle Royale, with more than 125 million players since it was released in 2017, is available on PlayStation 4, Xbox One, Windows, Macs, Nintendo Switch, as well as iOS and Android. However, more and more games, usually on computers or smartphones, are immersing players themselves in the gameplay. Pokémon Go is a prime example, where the setting of gameplay is the users’ surroundings. Minecraft, another game of massive popularity, while not an augmented reality (AR) game like Pokémon Go, also has a mind-body connective therapeutic quality that few other games have been able to replicate.

As artificial intelligence, voice recognition, and haptic and gesture technologies advance, technology product managers will bundle these innovations into more and more useful applications or “skills” for Alexa. The glass in our homes and in our pockets will become less and less cluttered by the minutiae of our digital lives. Not so long ago, Facebook released a digital assistant known as “M,” but folded it shortly afterwards. It seems that they are gearing up for a second act called Aloha (I had previously recommended the name “Geoffrey”) and a home video calling device called Portal to accompany Aloha (Techcrunch). Alexa is a window into our digital retail lives and often gives us access to music as well. Google Assistant allows us to control various appliances in our homes. Now our social lives will be voice activated as well. Einstein, from Salesforce, is a little-mentioned but powerful AI assistant for businesses. Sonos is working on devices that will integrate all of these voice-enabled digital assistants into one seamless user experience. With our digital lives conveniently becoming more and more sonic, as Sonos would say, there will be less encumbering us from consuming entertainment on our leftover screens, whether it’s television shows and movies, live news, sports, or e-sports, games that we are playing, or even games that we are directly immersed in.

Rather than remaining passive consumers of entertainment, we are going to increasingly interact with our media. Pokémon Go was pivotal in launching this new genre of immersive entertainment to the masses and making AR go mainstream, but there are many other examples of immersion, interaction, and play appealing to people. One of the most important developments in the success of the NFL was the growth of fantasy football. By letting fans play the game in a new way and interact with the league, fantasy football increased engagement and viewership. Webkinz was an early example of children’s toys being mixed with the digital, by giving stuffed animals interactive lives online. Lego has dived headfirst into this trend and is developing powerful AR apps that allow users to bring their Legoscapes, creations, and characters to life on a smartphone or tablet.

This trend, or more broadly, these innovations, I encompass under the term immersive visualization. Immersive visualization uses visual technologies to allow viewers to more closely see, interact with, and participate in the viewing experience. AR is only one technology bundled into this innovation. The defining characteristic of immersive visualization is dynamic interaction. VR is also an immersive technology, as are heads-up displays (HUDs) and holograms, and I would even argue that alternate reality games (ARGs) and gamification are immersive (although they do not necessarily have to be digital).

Beyond the purely entertaining, there is a growing body of research showing that turning any task or activity into a game makes it more captivating and better for learning and knowledge acquisition and retention. This trend is called gamification, and the resulting games or quests are known as serious games. Companies like Lego are using gamification to drive engagement and sell more product. However, the education industry is developing quest-based learning based on the principles of serious games to improve learning outcomes. City managers and planners are seeing how they can incorporate whimsical games into the mundane details of cities to improve important factors such as safety and traffic flow. The next time you see some interesting riddles or music-making instruments protruding from your local bus stop, consider that they may have been put there to make your wait feel shorter, so that you are more likely to ride the bus more often.

While AR is the superimposition of the digital over the physical world, virtual reality (VR) represents an immersion into a completely digital world. Via a headset, users can only see what is projected into the headset. These are often 3D environments, and the headset’s relative position and orientation determine what parts of the environment are seen or not seen. By blocking out the actual physical world, VR has a powerful psychological effect on people. People begin to think that they are actually inside the VR environment and begin to act accordingly. Just like in the real world, people are apprehensive about jumping out of VR airplanes and skydiving because their minds begin to think that they are actually in an airplane and thus jumping implies mortal peril. VR has been known to magnify emotional responses far more than traditional cinema. Imagine being able to walk through and around the pivotal scene of a movie rather than watching it through a screen.
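For the curious, here is a toy sketch of the geometry behind that point about position and orientation. It is not how any real VR runtime is implemented (actual headsets do stereo rendering, lens correction, and motion prediction); it only shows the core idea that the head’s pose decides which parts of the environment get drawn. All names and numbers are illustrative.

```python
# Toy illustration: a point in the virtual environment is "seen" only if it
# falls within the field of view of the direction the head is facing.
import math

def forward_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Unit vector for the direction the head is facing (yaw = left/right, pitch = up/down)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def is_visible(head_pos, yaw_deg, pitch_deg, point, fov_deg=100.0) -> bool:
    """True if `point` falls inside the headset's cone of view."""
    fwd = forward_vector(yaw_deg, pitch_deg)
    to_point = tuple(p - h for p, h in zip(point, head_pos))
    dist = math.sqrt(sum(c * c for c in to_point))
    if dist == 0:
        return True  # the point sits at the eye itself
    cos_angle = sum(f * t for f, t in zip(fwd, to_point)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg / 2

# Looking straight down the +x axis: an object ahead is rendered, one behind is not.
print(is_visible((0, 0, 0), 0, 0, (5, 1, 0)))   # True
print(is_visible((0, 0, 0), 0, 0, (-5, 0, 0)))  # False
```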

This psychological response makes VR another powerful tool for education, learning, and development, as well as other fields, such as therapy. The entertainment and media industry, travel industry, health care, and art and design industries are all developing applications for VR. Retailers are also clamoring for AR/VR applications for their businesses. There are now AR apps to visualize what furniture would look like in your home before you purchase it, how you would look in new apparel, or what cosmetics will look like when applied to your face. When shopping for a new home you can now skip the open houses and just take VR tours of the houses. The American Academy of Optometry is devoting an entire special edition of its Optometry & Vision Science journal to “Assistive Technology in Vision Impairment,” with a special emphasis on immersive visualization technologies. Face swaps and puppy dog faces were only the beginning of the AR revolution!

Barbie has recently seen a resurgence in popularity by pairing the doll with a video blog that is generated through motion-capture acting (Vice News). Netflix is beginning to dip its toes into immersion by developing interactive television shows where viewers make choices that affect the plot and ending of the shows. Netflix is also releasing a show based on Minecraft and has plans to integrate more programming adapted from gaming into its platform (Bloomberg).

Immersion may also be a lifesaver for commercial real estate developers that have over-invested in retail space. Before Sears declared bankruptcy, the mall vacancy rate was at 8.6% (Reis, WSJ). VR arcades and other immersive experience centers are popping up around the United States, and new companies such as Two Bit Circus and Meow Wolf are designing experience centers that fit neatly within the floorplan of vacant department stores. They are anchoring more and more shopping centers that have lost department stores or other attractive retailers. Given the recent popularity of escape rooms and pop-up experiences designed for Instagrammers, these new immersive experience centers certainly have legs. Greenlight Insights, a VR research firm, reports that immersive entertainment will be an $8 billion business by 2023 (Wired).

One of my favorite memories with my friends Luke and Catherine is of a stay at a house on the Tamar River in northern Tasmania. One night we walked out to the riverbank and used Google Sky to explore the southern heavens. As natives of the Northern Hemisphere, we watched the app superimpose stars we had never heard of over the night sky and draw constellations we had never seen before. I had been gifted so many Aussie and Kiwi trinkets bearing the Southern Cross and seen so many friends with the constellation tattooed somewhere on their bodies, but I had never seen those four stars with my own eyes until Google Sky showed me where to gaze.


This looming trend towards the immersive and visual is digitally driven, so computer engineers and software developers will continue to be in greater and greater demand. However, these technologies are visual and artistic. Someone at Google has to draw those constellations. Those Pokémon do not render themselves. If you want to see what buildings and neighborhoods used to look like by just walking around with your phone, someone is going to need to meticulously create the 3D models that materialize on your screen. Graphic designers, animators, artists, cartographers, photographers, architects, optical experts, mathematicians, and their ilk will be the indispensable partners of the coders. Facebook has developed a 3D and 360-degree camera that resembles a device out of Star Wars. The device must have taken thousands of hours of work from cinematographers, digital photography engineers, designers, and software engineers. I believe that we are on the cusp of the largest hiring spree of these “creative types” that for so long we have relegated to freelance work and starving artist status.

From the Bureau of Labor Statistics:

Employment of arts and design occupations is projected to grow 4 percent from 2016 to 2026, slower than the average for all occupations, adding about 33,700 new jobs. More workers will be needed to meet the growing demand for animation and visual effects in video games, movies, television, and on smartphones, as well as to help create visually appealing and effective layouts of websites and other media platforms. Other arts and design workers are employed in industries that are projected to decline, however, including publishing, manufacturing, and floral shops.

The median annual wage for arts and design occupations was $45,250 in May 2017, which was higher than the median annual wage for all occupations of $37,690.

The BLS is greatly underestimating the importance that these occupations will play in the future of our economy. Immersion is more enthralling. Immersion is more effective. Immersion, when implemented properly, can be more profitable. Consumers, as they become more acquainted with immersive experiences, will demand more and more of them. Media, entertainment, and more of our visual world will continue to march towards the immersive and interactive, just as digital is becoming more sonic.

Seven Innovations That Will Change The World

At business school I took a course called Technology Strategy. In the course, our professor, Dr. Al Segars, introduced us to a list he published: Seven Technologies Remaking the World (Sloan Management Review).

  1. Pervasive Computing
  2. Wireless Mesh Networks
  3. Biotechnology
  4. 3D Printing
  5. Cloud/Big Data
  6. Nanotechnology
  7. Next Generation Robotics

We went through the list in detail during class, and while he had compelling arguments for the formulation of his list, I felt that it was too narrowly focused on physical technologies and missed some of the most important trends in humanity, like climate change. After all, internal combustion engines and refrigerators, although not new technologies, are changing the world in a big way, and not necessarily for the better.

After some thought, I’ve come up with a competing list of seven innovations that will change the world. There are a few differences between my list and my professor’s. First, I refer to innovations rather than technologies. I define technologies a bit more narrowly: if not tangible, they at least have some physical manifestation. Innovations, on the other hand, can be conceptual in nature. Second, these are innovations that will change the world, rather than those that are currently having an immediate impact.

My list is far from perfect. Since these are innovations that will change the world, there is a fair degree of speculation in the formulation of my list. The list also betrays my biases as a business student. Several of the ideas lean towards business innovations, and my lack of engineering and life sciences knowledge steers the list away from industrial technologies and health care. Nevertheless, I do believe that the list contains a number of innovations with the potential to profoundly change the world, and for the better:

  1. Quantum Computing
  2. Internet of Things
  3. Immersive Visualization
  4. 3D Printing
  5. Blockchain
  6. Servicization
  7. Alternative Energy

Each of these innovations has the potential to usher in meaningful change for the world, but the most impactful change occurs at the nexus of these innovations. When Steve Jobs first introduced the iPhone, he emphasized that it was three products in one: an iPod, a phone, and a web browser. Flash storage, cellular data technology, touchscreens, microchips, and digital photography were just a few of the technologies in the iPhone. Each technology was impressive on its own, but when the designers at Apple conceived of a product and use case that blended all of these technologies, they truly changed the world. Just ten years later it is difficult to imagine our lives without smartphones.

Over the next several weeks I will be posting several articles on a few of these innovations. The innovations, numbered one to seven, are not listed in order of importance. Instead, I’ve arranged them in a sequence in which the possible interactions between different innovations become more apparent as we progress from one innovation to the next.

Quantum Computing

While I am unlikely to post a standalone article on quantum computing because I struggle with understanding the science behind it, the innovation will have profound effects on our world. Quantum computing will make computers faster and, importantly, allow them to perform vastly more computations in a reasonable amount of time than they can now. Artificial intelligence, which falls under Dr. Segars’s fifth world-changing technology (Cloud/Big Data), requires many computations to be rapidly undertaken. Certain applications of this technology have abutted the frontier of what computers can currently handle. Quantum computing promises to let artificial intelligence break through this ceiling and unleash all of its world-changing promise.


IBM’s quantum computer

Wired put together a pretty good primer on quantum computing: https://www.wired.com/story/wired-guide-to-quantum-computing/

Internet of Things

The internet of things (IoT) is a term used to describe the proliferation of internet-connected devices: wearables, home control devices, sensors of all types, industrial monitoring devices, and more. I see these devices as two-way doors between the digital and physical world. Entering the information superhighway are all of the data that the devices collect. In many instances, these are valuable data that were never available before. Because the doors swing both ways, these data can interact with decision algorithms, which then send instructions back to the devices, controlling environments remotely and automatically. Health care, industry, agriculture, urban environments, our homes, vehicles, and many more applications can all be automated and improved by IoT monitoring and control.
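As a concrete (and entirely hypothetical) sketch of that two-way door, the loop below reads a made-up soil-moisture sensor, runs a trivial decision rule on the reading, and sends an instruction back to a made-up irrigation valve. A real deployment would use actual device protocols and platforms (MQTT brokers, cloud rules engines, and so on); this only illustrates the data-up, instructions-down pattern.

```python
# Hypothetical stand-ins for a real sensor and actuator; the names and the
# threshold are illustrative, not from any real IoT platform.
import random
import time

def read_soil_moisture() -> float:
    """Stand-in for a field sensor; returns percent moisture."""
    return random.uniform(10, 60)

def set_irrigation_valve(open_valve: bool) -> None:
    """Stand-in for the command sent back down to the device."""
    print("valve ->", "OPEN" if open_valve else "CLOSED")

MOISTURE_THRESHOLD = 30.0  # below this, the decision logic waters the field

for _ in range(3):                       # three polling cycles for illustration
    reading = read_soil_moisture()       # data flowing up from the device
    print(f"moisture reading: {reading:.1f}%")
    set_irrigation_valve(reading < MOISTURE_THRESHOLD)  # instruction flowing back down
    time.sleep(0.1)
```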

Immersive Visualization

I am publishing an article on Immersive Visualization on October 29, 2018.

3D Printing

Manufacturing is not only capital intensive because of the need for machinery, but also because of the need for working capital. Injection molds can run into the millions of dollars, and many manufacturing processes remove material until the final product emerges, which inherently results in a lot of waste. 3D printing promises to eliminate these inefficiencies by depositing material only where it belongs. As the price of precision 3D printers comes down, manufacturing can also become more decentralized, revolutionizing supply chains and logistics, not just manufacturing itself. However, when you weave the promise of immersive visualization into 3D printing, a lot more opportunities open up. 3D printing could really unleash a wave of at-home design and prototyping. The printers are becoming more accessible, but so too are the software and immersive tools that more and more people can use to design and make. With immersive visualization, people can bring their Legos to life. With 3D printing, people can generate completely new Lego bricks and designs for their Legoscapes!
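For a sense of scale on the waste argument, here is a back-of-the-envelope comparison of subtractive and additive material use. The numbers are made up purely for illustration: a 200 cm³ part machined from a 1,000 cm³ billet, versus the same part printed with roughly 10% extra material for supports.

```python
# Illustrative only: all volumes and the support overhead are assumptions,
# not measurements from any real process.
part_volume = 200.0        # cm^3 of material in the finished part
billet_volume = 1000.0     # cm^3 of stock the subtractive process starts from
support_overhead = 0.10    # assumed fraction of extra material for printed supports

subtractive_waste = billet_volume - part_volume   # material cut away and discarded
additive_waste = part_volume * support_overhead   # support material thrown away

print(f"subtractive waste: {subtractive_waste:.0f} cm^3 "
      f"({subtractive_waste / billet_volume:.0%} of the stock)")
print(f"additive waste:    {additive_waste:.0f} cm^3 "
      f"({additive_waste / (part_volume + additive_waste):.0%} of material used)")
```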


3D printing Star Trek emblems. Scotty, beam me up!

Beyond wealthy children playing with LEGO, 3D printing holds promise for low- and middle-income countries. 3D printers can help remote and impoverished communities obtain parts and materials that were hard to come by before, and the printers can also become sources of income as mini-manufacturing sites.

Blockchain

The internet, especially in 2017, is awash with articles expounding the promise of blockchain. I will not attempt to explain what blockchain is nor write an article announcing its world-changing potential. Other people have already written such articles. Here’s a short one:

https://hbr.org/2017/03/the-promise-of-blockchain-is-a-world-without-middlemen

My talented classmate has also written a series of articles on the innovative software technology:

https://www.linkedin.com/pulse/powered-blockchain-jim-rosen/

https://www.linkedin.com/pulse/business-blockchain-jim-rosen/

https://www.linkedin.com/pulse/business-blockchain-part-ii-jim-rosen/

Servicization

I am publishing an article on Servicization on November 5, 2018. However, it seems that some of my professors have already jumped the gun and published a paper on servicization.

Alternative Energy

In some ways, alternative energy is last on the list because it is the final, necessary innovation for the world to change. Climate change is existentially threatening for billions of present and likely future humans. Without alternative energy, there will be no world to change for all of these people. Alternative energy is also at the end of the list because the other technologies – particularly quantum computing, blockchain, and servicization – all hold promise for accelerating and improving the development of alternative energy and the management of the energy we already produce.
