The Early Pixar Innovation Team Comes Together:

John Lasseter joins the CG Team at Lucasfilm

Jay Rao and Jim Watkinson


In the past two blogs, we introduced the first two co-founders of Pixar, Ed Catmull and Alvy Ray Smith. Both were accomplished computer scientists and computer animators. They had built a small team of computer graphics experts at NYIT and were well on their way to creating several of the foundational technologies of computer animation: rendering, RGB paint, soft edges, video editing, hidden-surface removal, and more. Nevertheless, both Catmull and Smith felt there was one big void in their team: storytelling and directing skills.


After the huge success of the first Star Wars movie in 1977, George Lucas and his experts at Lucasfilm realized that they needed better production methods and new tools and technology for the sequel, The Empire Strikes Back, slated for release in 1980. To understand the challenges they faced, consider that Lucasfilm’s special effects group, ILM (Industrial Light and Magic), had taken eight months to create just 30 seconds of the opening sequence of the first Star Wars movie. Star Wars had won an Academy Award for its visual effects, and audiences would expect something similar or better in the sequel. For this, Lucasfilm would need people with extensive computer knowledge, along with new tools and techniques. That search led Lucas to Ed Catmull, Alvy Ray Smith, and their team at NYIT.


When asked to join Lucasfilm, Catmull and Smith were happy with the chance to get closer to moviemakers. Even though Lucas had asked them only to create hardware and software for the production of live-action films, not really to be part of movie making, the positives were obvious compared to NYIT. While developing technology for Lucas’ live-action films, Catmull and Smith kept alive their dream of one day making a full-length CG animated movie, still believing that someday someone at Disney animation would get as excited about computers as they were. As they developed more tools and technology, they kept knocking on Disney’s doors, and they kept getting rebuffed. Luckily, in 1983, they were able to grab one of the best animators and storytellers that Disney had let go: John Lasseter.


Growing up in Southern California, John Lasseter was always surrounded by the arts; his mother was a high school art teacher. As a kid, he was obsessed with Chuck Jones’ cartoons (Bugs Bunny, Road Runner, Daffy Duck) and would run home after school to watch them on TV. He often drew cartoons during church services. As a high school freshman, John read ‘The Art of Animation’ by Bob Thomas, a history of Disney animation that got him thinking about becoming an animator someday.[1] He finally made up his mind, though, after he saw Disney’s 1963 film The Sword in the Stone.[2] Encouraged by his mother, Lasseter wrote to Disney, and the studio invited him on a tour.


Despite his passion for animation, Lasseter followed in his parents’ and siblings’ footsteps and enrolled at Pepperdine University. However, encouraged again by his mother, he soon dropped out to follow his dream of becoming an animator. In 1975, Disney had launched an animation course at the California Institute of the Arts (CalArts), taught by some of Disney’s legendary animators: Eric Larson, Frank Thomas, and Ollie Johnston. These three were among the famous “Nine Old Men” who had worked directly with Walt Disney. Lasseter was the second student accepted into the course. His classmates included future famous film directors Tim Burton (Batman, Planet of the Apes, Charlie & the Chocolate Factory) and Brad Bird (The Incredibles, Ratatouille).


At CalArts, Lasseter and his friends couldn’t have asked for better teachers: a group that had taken animation from infancy and created a new art form and an industry.[3] These incredible teachers had worked on early classics like Snow White and Cinderella, and the students heard first-hand stories of Walt Disney’s thinking and methods. While at CalArts, Lasseter produced two animated shorts, Lady and the Lamp (1979) and Nightmare (1980), and both won the Student Academy Award for Animation.


During summer breaks from CalArts, Lasseter worked at Disneyland, first as a sweeper in Tomorrowland and then as a boat ride operator on the Jungle Cruise. This experience of having a captive audience and a script of corny jokes in hand was a great training ground for understanding comedy, comic timing, and the art of delivering puns and jokes. In 1979, after graduation and on the strength of Lady and the Lamp, Lasseter was offered a job as an animator with Walt Disney Feature Animation. This was no ordinary feat. At the time, Disney would review nearly 10,000 portfolios to choose just 150 apprentices, and an even smaller number, about 45, would get permanent positions. Bird and Burton were also among the chosen few who joined Disney with Lasseter.


Lasseter’s time at CalArts and his early Disney years coincided with the Star Wars trilogy. Steven Spielberg, George Lucas, Martin Scorsese, and Francis Ford Coppola were changing the nature of movies and movie making, and their films seemed to appeal to all ages. In fact, Walt Disney himself had done the same thing in his time, a subtle detail that had been forgotten at Disney. Lasseter felt that cartoons and animation were again ripe for a similar revolution. Around the same time, Lasseter came across some videos of computer graphics and computer animation. He was mesmerized, not by what it was at the time but by its future potential. In 1982, Disney released the live-action movie Tron, with some computerized special effects. Again, he was blown away. Lasseter knew that Walt Disney himself had always wanted to get more dimension into animation, and he was convinced that computers could do it![4]


Lasseter had hoped that the animation group at Disney would embrace computer technology, but his boss told him explicitly to forget it. He kept running into resistance whenever he suggested or tried new things. Several times he was told to do as he was told, and that his opinions would not matter until he had at least 20 years of animation experience.


Tron was made by a different division of Disney, not animation. So, with the help of Tom Willhite, a live-action executive, Lasseter put together a 30-second test that combined hand-drawn, two-dimensional Disney-style character animation with three-dimensional computer-generated backgrounds. Totally excited by the test, Lasseter wanted to make a full-blown movie using the technique. Again with Willhite’s help, Lasseter obtained the rights to a story by Thomas Disch called “The Brave Little Toaster.”


Lasseter pitched the film to his supervisors: animation administrator Ed Hansen and the head of Disney studios, Ron W. Miller. They cancelled the project, citing the lack of perceived cost benefits in mixing traditional and computer animation. Immediately after the meeting, Hansen summoned Lasseter into his office and told him that he was fired.


Lasseter was devastated. He never told anybody that he was fired. All he ever wanted in life was to work for Disney.


Lasseter later found out that most executives in animation had made up their minds even before he pitched the idea. In his over-enthusiasm to get the project moving, Lasseter had gone around some of his direct superiors and unknowingly stepped on many toes. His experience at Disney was not unique. The enthusiastic new generation at Disney (Lasseter, Bird, Burton) kept offering suggestions and ideas, and all of them either left the studio or were fired.[5] According to Lasseter, the people running the studio at that time had been second-tier animators during Walt Disney’s day, ending up in charge through attrition rather than talent. The executives were threatened by the young talent coming in from CalArts, trained by some of the original Nine Old Men.


While trying to put together the pitch for “The Brave Little Toaster,” Lasseter had been looking for people who could do computer animation. That search had led him to some of the world’s best computer scientists: Ed Catmull and Alvy Ray Smith and their team at Lucasfilm. After being fired from Disney, as luck would have it, Lasseter ran into Catmull at a computer graphics conference. Lasseter couldn’t admit to Catmull that he had lost his job at Disney, but upon learning that the “Toaster” project had been scrapped, Catmull invited him up to Lucasfilm to help on a project.[6]


Lasseter’s genuine animation experience and great storytelling ability would be the perfect match for the future that Catmull and Smith were trying to build.[7] They knew that George Lucas would never approve hiring an animator and storyteller for a department intended to develop computer tools, so they found a creative solution: they hid Lasseter’s real role behind the innocuous title of Interface Designer.


Lasseter began work part-time in fall 1983 designing the characters for what would become the computer division’s first short film, Andre and Wally B.[8]


Dear reader, below are a few things to consider and reflect on from this blog:


If you recall from the previous blog, Alvy Ray Smith was also let go, from Xerox PARC, for being a renegade. How can we better manage renegades? How can the enterprise create a little sandbox for these renegades to explore and play in? How do technology and art interact? How can artists and technologists interact?


[1] Luxo Sr. – An Interview with John Lasseter, by Harry McCracken, Animato, 1990, accessed June 29, 2015

[2] Lunch with the FT: John Lasseter, by Matthew Garrahan, Financial Times, January 16, 2009

[3] Pixar’s Magic Man, by Brent Schlender, Fortune, May 17, 2006

[4] Pixar’s Magic Man, by Brent Schlender, Fortune, May 17, 2006

[5] Lunch with the FT: John Lasseter, by Matthew Garrahan, Financial Times, January 16, 2009

[6] Pixar’s Magic Man, by Brent Schlender, Fortune, May 17, 2006

[7] To Infinity & Beyond, The Story of Pixar Animation Studios, by Karen Paik, 2007, Chronicle Books

[8] Droidmaker: George Lucas and the Digital Revolution, by Michael Rubin, Triad Publishing, 2005

The Early Pixar Innovation Team Comes Together: Alvy Ray Smith Joins Ed Catmull at NYIT

Jay Rao and Jim Watkinson

Many times the seeds of innovation are planted long before the tree takes root and later blossoms into fruit.

Before there was a Pixar, or a Toy Story, before the world knew much about George Lucas, or saw his first Star Wars film, or was amazed by his special effects makers at Industrial Light and Magic, there was a trade school on Long Island, NY, the New York Institute of Technology (NYIT), whose president had a passion to create great animated films, and the money to pursue this goal. As we learned in our last blog article, that man, Alex Schure, hired Ed Catmull in the fall of 1974 to direct his new computer graphics lab, and Ed was soon joined by his friend and fellow University of Utah graduate Malcolm Blanchard. As the winter of 1974–75 progressed, the small team would benefit from a very short-sighted decision by a major corporation, one that would lead two other pioneers in computer graphics software to join them.

Turning back the clock.

As he lay in a hospital bed in a full body cast contemplating the ceiling tiles (the result of a broken leg from a skiing accident), Alvy Ray Smith had plenty of time to think. After receiving his PhD in computer science in 1970, he had spent the last few years as an associate professor of computer science at New York University. Though he enjoyed teaching, Smith felt a sense of dissatisfaction. There were very few people in his specialized niche of the computer field, making it hard even to find someone to talk to. At a deeper level, Smith, whose first love was art, found himself missing the personal satisfaction that comes from artistic creation. He was also disturbed that, at this early stage in the development of computer technology, the leading work in the field was being driven by military requirements, leaving him feeling that his teaching was contributing to things he could never support. He couldn’t continue this way.

With no particular plan in mind, Smith decided in late 1973 to leave his teaching position and set out for California, confident that something would develop. From his prior experience as a graduate student at Stanford, he was drawn to the San Francisco area as a hotbed for people aiming to develop the next wave of technology. After several months, he found himself with a writing project that required research at Stanford’s library. This was a long drive from where he was staying, but it was close to his friend Dick Shoup in Palo Alto. Shoup agreed to let Smith stay at his place, and soon their conversations turned to Shoup’s favorite subject: the SuperPaint project he was working on at the nearby Xerox Palo Alto Research Center, known as PARC.

Although Shoup had tried unsuccessfully to interest Smith in SuperPaint earlier in the year, now, upon seeing a system where he could draw color images with a stylus on a tablet, or manipulate existing video and still images, all while seeing them on the computer screen in full form (a first at the time for raster images), Smith was amazed and spellbound. He knew he was looking at the future, and he came back a few days later to spend a whole day working with the system. Later he would remember this moment and remark, “Art and computers, I was in heaven.”[1] He had found his next thing.

Shoup needed help testing his new system, and Smith’s artistic understanding and computer background were a perfect fit. PARC management could not see the need for hiring a computer artist, so Shoup’s colleague Alan Kay (another incredible innovator who shaped the future of computing) arranged for Smith to be paid each week through purchase orders, like a vendor. As a result, in August 1974, Smith had a new, if unofficial, profession he would later describe as artist-in-residence at PARC.

The capabilities behind SuperPaint came from a computer board Shoup had made by hand to expand the memory of a minicomputer and modify its operation. This memory board was called a frame buffer, and through Shoup’s efforts, Xerox PARC was the first to be able to create color computer imagery and modify existing video or TV images.

To promote the capabilities of SuperPaint, from the fall of 1974 into the early winter of 1975, Smith and fellow computer art devotee David DiFrancesco pushed Shoup’s computer paint system, with Smith using its color palette, custom shapes, and virtual brushes to make demonstration videos that portrayed motion while mixing shapes and colors. Each image gave him new ideas, and he began to modify the software, creating controls that let the user blend colors as an artist would when actually working with real paints on a canvas.[2]

Their work came to a sudden halt in January 1975, when Smith was told that a corporate decision had been made to drop work on color images because they didn’t fit the Xerox corporate plan for the office of the future. Management had decided to focus only on black-and-white images for its business customers and was ending the project for the SuperPaint software and its frame buffer hardware.

For Smith and DiFrancesco, the news wasn’t just a loss of work: PARC had the only frame buffer. Unless they could find someone else who had one, their experiments were over. Smith immediately got on the phone and learned that the University of Utah would soon be receiving the first commercially made frame buffer. The two were off to Utah.

When they arrived, it was clear that art and computers weren’t exactly a fit at the University of Utah computer department, where most research was funded by a military agency. There would be no role for them there. But speaking with PhD candidate Martin Newell (maker of the famous early CG image, Newell’s teapot), the two seekers heard about the wealthy school president who had recently ordered one of every type of CG equipment made by famous professors David Evans and Ivan Sutherland, including a frame buffer, for his new computer graphics lab.[3] Newell mentioned he would be visiting this new lab at the New York Institute of Technology in a few days and promised to call Smith and DiFrancesco with more information upon his return. Several days later, Newell called Smith to describe how impressed he was with Schure’s research and animation plans, and to suggest that Smith and DiFrancesco could be needed there. Moving quickly, the duo were off again on their quest, this time across the country to this potential land of CG bliss. To open the door for them, Newell phoned the NYIT Computer Graphics Lab and told Ed Catmull about the two knowledgeable computer artists he was sending his way.

The two computer seekers were in for a surprise when they arrived at the CG Lab. Knowing the incredible amount of advanced equipment on order, Smith and DiFrancesco expected to see a large staff busily at work, but when they walked in they found only Ed Catmull and his colleague Malcolm Blanchard. After describing Schure’s plans for animated film and exchanging views about the many uncertain problems to be overcome, Catmull knew that the lab needed the brainpower Smith and DiFrancesco could deliver. After they all met with Schure, he agreed to add the newcomers to the team.

With Catmull, Smith, Blanchard, and DiFrancesco together, a core group was formed that would work together for most of the next 15 years, advancing the art and science of computer graphics and becoming founders of Pixar.

Over the next few years, the Computer Graphics Lab team would grow and produce many key innovations for digital images and animation.[4] A brief list would include Tween, software developed by Ed Catmull that automatically generated the frames between key frames of action, significantly cutting down the labor needed to produce animation. Alvy Ray Smith would create the alpha channel, a way to combine separate image elements into one image. The list of firsts achieved at NYIT is so long that, years later, when Ed Catmull tried to recollect the achievements of this period, he had to stop at four pages.

Although Schure was boundless in his support of the computer graphics group, many important components of film-making were missing at NYIT, such as story development and directing. Catmull and Smith recognized this problem and knew they would have to build a relationship with a film studio if they were ever to break into the business of animated films. With this in mind, the two visited several Hollywood studios each year to showcase their latest capabilities and convince the studios of the value CG could contribute to their movies. Doing a quick calculation of computer costs versus their estimate of the computing power needed to make a complete animated film, Catmull and Smith recognized it would be 15 years before animated film production costs came down to affordable levels. As a result, they focused their conversations with studio people on how their new CG tools and techniques could improve the production and portrayal of film stories.

The Walt Disney studio, king of animated films, was always their prime target. Over the next few years, Catmull and Smith would regularly try to convince people there that their team’s advances in CG could be used to create a system for computerizing the coloring of animation frames (inking and painting) and for seamlessly combining multiple image layers to produce a three-dimensional look. This would bring Disney animation back to the rich, colorful texture seen in the studio’s famous earlier films, while also allowing for more complex scenes and engaging stories.

The capability such a system offered would be a leap forward in the audience experience of animated movie-making, but it was new, untried, and coming from people with no experience in film-making. They were also pitching the idea to a company very different from the one of Walt Disney’s day. The openness to adopting new technology that had been a key to Disney’s success while its co-founder was alive had ended with his passing. The Walt Disney Company of the 1970s had become a giant enterprise on the strength of the huge success of its theme parks, from which it derived most of its revenue and the lion’s share of its profits.

Animated films had always been a hit-or-miss adventure, even in Walt’s day. Now, having finally achieved strong and consistent revenue and profits from the parks, company leadership, which had no experience with animated film-making, saw no need to invest in new and risky film-making ideas. The stagnation this produced at the Disney animation studio had reached the point where its films, while turning modest profits, all followed the same formula, repeating similar storylines and even reusing characters and scenes from prior films.

Down at the studio level, there were deeper concerns about computers. With all the change these machines were beginning to cause in other industries, there was a strong fear among the animators that they were intended to replace people. Drawn to their work by a love for the process of putting heart and soul into their film characters, they scoffed at the idea that a machine could bring a character to life as they did, and they would have nothing to do with computers.

This same feeling about the evils of computers was shared at all the studios in Hollywood, and Catmull and Smith could find no one willing to take a chance on employing computer systems in a way that would push the art of their cinema product. The idea of art itself was a key ingredient in this rejection. Everyone involved in film-making saw their product as a form of high art, but art is something made by people, people with deep feelings to be portrayed in characters and a story that sweeps the audience along on a shared journey. Machines, such as computers, have no feelings or experiences to share; they could only deaden the art of film. Clearly, there were many reasons why computers and film were a mismatch at this time for most Hollywood movie-makers.

Finding no doors open at Disney animation, Catmull and Smith would use their visits to speak with the only people at Disney who were interested in computers and technology: the department of Scientific Systems. Here, computers were used to control park safety and to operate the rides and automated shows. Working with this department didn’t exactly match their goal of gaining access to film-makers, but perhaps working with one part of the Disney enterprise would help one day open the door into animation. Even among the operations people who already worked with computers, though, Catmull and Smith were warned that things at Disney moved only very slowly.[5] These comments would prove prophetic: it would take another seven years, and a complete change in Disney leadership, before Disney and the future Pixar team would strike a deal for a computer system to improve animation production.

Dear reader, below are a few things to consider and reflect on from this blog:

  1. The incumbents will reject and marginalize new technologies and disruptive business models. Here we have described how Xerox, Disney, and Hollywood rejected a technology that would later transform not just business and film imagery, but every medium of imagery.
  2. Innovators are fired by incumbents for being heretics. As we see here, Alvy Ray Smith’s project for color computer graphics was ended at Xerox PARC because it did not fit the orthodoxy of the corporate vision for the office of the future.
  3. Who are the heretics inside your firm? How are they treated? How can we identify them? How do we create a “sandbox” for them to play in?

[1] Moving Innovation: A History of Computer Animation, by Tom Sito, MIT Press 2013

[2] Moving Innovation: A History of Computer Animation, by Tom Sito, MIT Press 2013

[3] Droidmaker: George Lucas and the Digital Revolution, by Michael Rubin, Triad Pub Co, 2005

[4] Moving Innovation: A History of Computer Animation, by Tom Sito, MIT Press 2013, p 130-132

[5] Walt’s People: Vol. 11, by Didier Ghez, Xlibris Corp., 2011

The Early Years for Pixar Co-founder Ed Catmull [1]

Jay Rao and Jim Watkinson

Walt Disney and Albert Einstein were Ed Catmull’s two boyhood idols. At a very early age, Catmull had read Einstein’s biography, and growing up in the 1950s he was fascinated by how Einstein’s concepts had forced physicists to change their perspective of the universe. While Catmull was inspired by both men, Disney had the much greater effect. In that period, Walt Disney and his creations were regularly in living rooms. Disney was constantly inventing: applying existing technologies, modifying them, and creating totally new forms to perfect sound, color, cameras, and screens. He routinely incorporated breakthrough technologies and talked about them on his show to highlight the relationship between technology and art. One show in particular seemed to have a great impact on the young Catmull: a Disney animator’s pencil, starting from nothing, moved around and brought Donald to life as he wooed Daisy. Most people probably couldn’t grasp how technically sophisticated Disney’s movies were, or how groundbreaking the synergy between technology and art was. But to young Catmull, it seemed to make sense.

At school, Ed Catmull loved his art class. He would often get lost, totally engrossed in the act of putting an object to paper, just as the Disney animator had demonstrated on TV. Catmull dreamed of being a Disney animator. Even at a very young age, however, Catmull recognized his limitations as a pencil-and-paper artist. Further, he had no idea how one would even become an animator; he knew of no schools for animators. So when he finished high school, he decided to pursue physics instead.

Catmull graduated from the University of Utah with a double major in physics and computer science. At the university, he naturally gravitated to the emerging field of computer graphics. He realized that he could achieve his dream not with a pencil but with a computer: making compelling images on a computer, images beautiful enough, perhaps, to be used in movies. While in graduate school, at the age of 26, Catmull quietly set a goal: to make the first computer-animated feature film.

Ed Catmull’s experience in the computer science department at the University of Utah was transformative. Professors Ivan Sutherland and David Evans were already legends in the field of computer graphics, and they had also done pioneering work in virtual reality, printer languages, and real-time hardware. The department was a magnet for bright students. Catmull’s classmates included Alan Kay (inventor of the Smalltalk language, object-oriented programming pioneer, and of windowed-GUI fame), John Warnock (founder of Adobe Systems), and Jim Clark (founder of Silicon Graphics and Netscape).

Sutherland and Evans had created an amazing “innovation sandbox.” They brought together students with diverse interests, gave them space and access to computers, and, with a little guidance, let each one pursue his passion. For the first time, Catmull was exposed to a creative environment where individual creativity and collective creativity thrived together. At one end was individual excellence driven by passion; at the other was a group that excelled because of its diversity of thought and multiplicity of views. The result was an energizing, collaborative, supportive community so inspiring that he would later seek to replicate it at Pixar.

Some questions to ponder:

How is your “innovation sandbox” doing? How diverse is it? Does it foster “creative collisions” of diverse ideas and approaches? Does it provide time and space for both “individual and collective creativity” to thrive together?

A link to my new video: “Five Steps to Build an Innovation Sandbox.” ow.ly/G1Mfw

Wishing you all a wonderful holiday season and the very best in 2015!

[1] Heavily sourced from Ed Catmull’s book: Creativity, Inc. 2014, Random House

By Jay Rao and Jim Watkinson

We live in an age dominated by machines, sensors, software, and automation, so it is easy to lose sight of the fact that all great innovations have a common thread running through them: people. Regardless of discoveries and technology, nothing new and important has ever been created without great people with a passion to find new solutions.

The importance of this has become increasingly clear to us as we work to complete our upcoming book examining Pixar’s early innovation years. So it is fitting that we continue our blog previews of the book’s concepts by focusing on several of the key people who influenced, or were directly involved in, starting and building Pixar during its early, difficult struggle to become a maker of great animated films.

One very famous entertainment pioneer had a strong influence on several early Pixar people, and it was the work and inspiration of this man and his company that moved them toward animation and computer graphics. As a consequence, the whole story of how Pixar’s founders came to the field of animation begins many years earlier. To understand the source of that inspiration, and how he sparked the thinking and motivation of those who followed, we must wind our story back, back to a time when there was no recorded entertainment.

The Beginning of Cinema

The birth of cinema and the film industry as we know it began with a wealthy man, a photographer, and the legend of a wager.

While at his horse ranch, former California governor and future founder of Stanford University Leland Stanford often admired the beauty of his horses in motion as they galloped. To his eye, they appeared at certain points to be flying across the ground, no hooves touching the earth. So convinced was he that in 1872 he made a large bet with a friend on the point and engaged the well-known photographer Eadweard Muybridge to find a way to photograph and capture proof of his belief.

Muybridge conducted his first effective test in January 1873, and his photos did indeed show that horses have all four feet in the air between strides. But the initial images were blurry because of the horses’ speed and the technical limitations of the camera. To compensate for the blurring, Muybridge had an artist create hand-painted versions of the photos and then photographed these images for public showings. Viewers, however, recognized that the images were painted and cynically dismissed the work as proof of Stanford’s premise.

Some years passed, and in 1878, with additional money from Stanford, Muybridge took up the project again. This time, the photographer used a faster electro-mechanical shutter he had developed capable of freezing the horse in mid-flight for a sharp picture. He combined this new shutter with a system of twelve cameras capable of taking photos in quick succession. The new process worked, producing a sequence of shots clearly showing a horse throughout its running stride, and now all could see that horses did fly between strides.

Much acclaim followed, and Muybridge next adapted a child’s toy, the zoetrope, to put the images in motion. He applied photos to a glass disk that, when spun, showed the images in quick sequence, giving the illusion that the horse was moving right before the audience’s eyes. To correct the image distortion that occurred in the process, Muybridge had an artist draw the pictures, and it was these hand-drawn images, in a sense an early form of animation, that were used in the show. Muybridge called his device the zoopraxiscope.

In February of 1888, Muybridge was in Orange, New Jersey, to give a lecture about his techniques. In attendance was the well-known inventor Thomas Edison. Seeing an opportunity at hand, Muybridge proposed that the two partner to join his motion picture device with Edison's phonograph, resulting in a machine that would play motion pictures and sound together. After some examination, Edison decided his own staff could produce a much better device, and Muybridge's zoopraxiscope concept did not advance any further.

With his interest sparked, Edison's engineers and photo experts worked to design an effective motion picture device, and over the next two and a half years they invented a new machine for recording motion pictures called the Kinetoscope, which became the first successful motion picture camera. Many other innovations would come along to power the growth of cinema, and the list of those who contributed to the development of the film industry is long. But despite his lack of success, Muybridge is still called the Father of Cinema, and some would say that his use of hand-drawn images in his first short film marks animation as the true starting point for what would become the Hollywood film industry.

Through the first 30+ years of film, animation was relegated to a minor role in cinema, serving up short, humorous cartoons to warm the audience up before the feature film was shown. But that began to change when a man whose company had created one of the most popular cartoon characters leapt into the unknown, spending a previously unheard-of sum of money to produce what would become the first successful animated feature film.

The Inspiration of Walt Disney

I do what I do because of Walt Disney – his films and his
theme park and his characters and his joy in entertaining.
– John Lasseter

People may find it hard to believe that the man who transformed short, humorous cartoons into highly popular and profitable feature films was not a very good artist, but it is true. Walt Disney was, however, gifted with the ability to see stories, scenes and characters in his mind, and then act them out with such convincing drama that people who could draw were able to transform his ideas into characters and films filled with human emotion, and loved by millions of people of all generations.

Nearly broke after his first animation business failed in Kansas City, Disney decided to aim high, and in 1923, at age 21, he moved to California, where his older brother Roy and an uncle lived. He hoped to get a job directing live-action films, but after two months talking with film studios, all Disney had gained was frustration. Now struggling without money, but still possessing some sample animations he had prepared in his old business, he sent a note and a copy of the film to a New York cartoon distributor, proposing he could turn this story about a girl named Alice into a series. To his surprise, he quickly received a note back accepting his proposal at the then-astounding amount of $1,500 per short film. With the letter in hand as evidence, Walt visited his brother Roy, who was in the hospital recovering from the effects of tuberculosis. Walt told him a story of how he could create great cartoons and make a lot of money if Roy joined him in the business. The next day, without his doctor's permission, Roy checked himself out of the hospital, and soon the two brothers had a place to set up their new business, the Disney Brothers Studio . . . in their uncle's garage.

When ideas for new episodes of the Alice series dried up in 1927, the Disneys created a new rabbit character that proved very successful. They were expecting big things from their rabbit, but the Disney brothers learned a hard lesson when, through some fine print in their contract, their distributor took over production of the rabbit series, cut the Disneys out, and hired away most of their staff. Left with no character series to sell and no other source of revenue, the Disneys' future looked grim. In desperation, Walt came up with an idea for a new mouse character. Working over the following weeks with one of his few remaining employees, Ub Iwerks, he turned the mouse into what we now know as Mickey Mouse.

Though famous now, the first two Mickey Mouse cartoons were rejected by all film distributors because there were already many animal-character cartoons. Walt needed to find a way to make his mouse stand out. Sound had only recently been added to live-action films, and sensing an opportunity, Walt looked for a way to add sound to his next Mickey cartoon. To set it apart from competing characters, he hired a full orchestra and found a way to time the music to the beat of the cartoon and the actions of its character (a first in cartoons). Distributors found the new Mickey very entertaining but were still reluctant to buy, and with bills to pay, Roy Disney had to sell Walt's car to raise cash. Having no other prospects at hand, Walt broke with accepted practice and had Mickey shown in a single theater in New York City, hoping that a strong audience reaction and news coverage would build credibility and draw in a distributor. The ploy worked. The little mouse received great applause and acclaim in the local press, and within the year he was a national hit, making the Disneys famous.

Despite the worldwide fame of Mickey Mouse, the cartoon business was very competitive, and pricing for these short films was falling while production costs were rising. As a result, by 1933, Disney was losing money on his Mickey Mouse films. Recognizing they needed a new film product that could generate more revenue than short cartoons, Walt decided to make another big jump, this time to create the first successful feature-length animated film, Snow White. The film took more than three years to make, and its cost ballooned to over $1.5 million, then a record for any film, forcing Disney to borrow huge sums of money. Before it was released, some industry experts predicted that no one would sit through a ninety-minute cartoon, but Disney proved them all wrong when thousands of people waited in long lines to see Snow White, generating $10 million in ticket sales and more in merchandise.

Walt Disney would go on to create many more firsts in innovative entertainment, most of them dismissed at first by the experts as impossibly crazy and sure to fail. A full list would fill most of this page, but some of his best-known ideas include the first popular educational nature films, Disney's True-Life Adventures, which his distributor refused to sell, certain that no one would pay to see short films about nature and animals; the Walt Disney TV show (which continued for many years under many names); Disneyland, the first theme park, which amusement park experts told him would fail; Walt Disney World, the world's first destination resort; and life-like, mechanically controlled animal and humanoid figures called audio-animatronics.

Each of these ideas moved the Disney Company not with a single step but with a giant leap into areas where it had no prior experience – in fact, into areas where no one had ever gone. Unlike the characters in his films, which have remained perpetually alive, Walt's time here on earth ended in 1966, and with his passing, the end also came for the long string of giant-leap innovations by his company.

By the time of his passing, the theme park business had grown larger than all of the company's film operations and now generated the lion's share of its profits. As a consequence, the always risky, hit-or-bust business of films received far less attention from company leaders and began to stagnate. The Disney Studio's next generation of films all had new titles, of course, but they were often just repeats of old themes and characters. This was particularly true for the animation division, from which all of the Disneyland attraction ideas had sprung.

Company leaders could not see that the creation of new films and characters was the source of new attractions in the park, and that this in turn drew visitors back year after year and drove park, hotel and merchandise revenues and profits. Thus Disney films, winners of 48 Academy Awards during Walt's time, began a two-decade decline. So despite the many successes of his company, and a large pool of very creative people, once Walt was gone, so too was the Disney magic for films and characters loved by children, teens and adults alike. Without an opportunity to advance and make their own mark on the world, good people left Disney, and other good people would never come.

One of the people who never came was a young graduate student named Edward Catmull. While studying at the University of Utah, Catmull visited Disney hoping to recruit the company's involvement in computer graphics research; instead, he was offered an internship applying his computer knowledge to the design of a new ride at Disney World. He declined the offer because it had nothing to do with his main interest – making computer animation.

Next in our series: The Early Years for Pixar Co-founder Ed Catmull

The Often Long Journey to Radical Innovation

 With Concepts Drawn from our Upcoming Book:

Innovator’s Grit: Pixar’s Perilous Innovation Journey


 Jay Rao and Jim Watkinson

If you have read any business periodicals over the last two decades, you might easily conclude that the path to successful innovation is normally a very short journey. But the world we see around us has been largely shaped by ideas that took many years to develop, and often even longer to gain wide market acceptance.

As with all maxims, there are of course exceptions, and we can certainly point to many cases where success was reached very quickly. Take, for example, the mobile photo-sharing application Instagram. It was begun in Oct. 2010 by twenty-something entrepreneurs Kevin Systrom and Mike Krieger, who had met as students at Stanford seven years earlier and had taken the same work-study program for entrepreneurs. While working at Google and through personal contacts, Systrom had already gotten to know a wide range of entrepreneurs, angels and venture capital friends before starting Instagram. Once launched, the ability of the service to connect people pushed it out to the millions of younger internet users who were eager to reach out to the world, creating a strong viral effect, rapid product adoption, and quickly a large base of users. Within weeks, Instagram was carrying the images and messages of millions of people. In April 2012, Facebook bought Instagram for a billion dollars.[1]

This and other similar stories have spurred an entire generation of wannabe entrepreneurs armed with the vocabulary of get-rich-quick speak, incessantly looking for their insta-million, or even better, insta-billion opportunities. They network furiously, make rocket pitches, do hackathons, and have exit strategies before winning a single paying customer. This need for fast riches has also infused the thinking and decision-making at medium and large enterprises, along with their shareholders, resulting in a shortening and narrowing of their view of acceptable innovation ideas.

Thankfully, the epidemic of innovation near-sightedness has not killed off all entrepreneurial courage and toughness, even in the notoriously short-term internet space. Rovio, the maker of the famous game Angry Birds, was founded in 2003, and for eight years it labored through 51 different game releases without a significant hit. Finally, on its 52nd attempt, Rovio scored with Angry Birds, which went on to reach 1 billion downloads. Instagram makes Rovio look like an old and tired way of winning the innovation race. But is it really a sprinter's race, or is innovation more often the longest of life's marathons?

The reality is that out beyond the world of software and virally driven adoption, innovators must travel a long and difficult road before reaching even the beginning stage of success. Let’s consider some noteworthy examples.

John Harrison and the Longitude Problem

Exactly 300 years ago, in 1714, the British Government established a board to solve a difficult problem. The Board of Longitude sought a practical method for determining a ship's exact location at sea. Ships often got lost during their long transoceanic voyages, sometimes tragically. While determining a ship's latitude was relatively easy, there were no easy tools for finding its longitude. So the Board offered £20,000 (£2.45 million in 2014 terms) for a practical method to determine a ship's longitude to within 30 nautical miles.

Several clockmakers and scientists had tried to solve this problem with little luck. In 1730, John Harrison, an English carpenter and clockmaker, decided to compete for the prize, with financial help from another clockmaker who believed in his skills. Harrison started building an accurate sea clock. The first sea trial took place after five years of work, and the clock proved not very accurate. However, impressed with the general direction, the Board granted Harrison £5,000 to continue development. Harrison abandoned his second attempt after another five years of work, when a serious design flaw was discovered. The third attempt lasted yet another 17 years, at which point Harrison concluded that a large clock would not work and that the solution should instead be a much smaller watch. After another six years of work, the 68-year-old Harrison's first Sea Watch was tested in 1761. The watch was accurate to within 1 nautical mile. The Board deemed the test a fluke and demanded another trial. Following another successful demonstration, the Board still balked. Finally, the King had to intervene, and the 80-year-old Harrison was paid the remainder of the award money in 1773, just three years before his death. The persistence, testing and failures were not in vain. Harrison contributed several valuable inventions to clock and watch making, and his Sea Watches changed naval exploration forever. Fast forward to today, and we can see that Harrison's work on accurate timekeeping for travel and location now serves as a key to the Global Positioning System (GPS) that guides planes, boats and people driving their cars.
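The link between clock accuracy and longitude can be made concrete with some back-of-the-envelope arithmetic (our own illustration, not a figure from the Board's records). The Earth rotates a full circle in a day, and at the equator one degree of longitude spans about 60 nautical miles:

```latex
\[
\frac{360^{\circ}}{24\ \text{h}} = 15^{\circ}/\text{h},
\qquad
1^{\circ}\ \text{of longitude} \approx 60\ \text{nmi (at the equator)}.
\]
\[
30\ \text{nmi} \approx 0.5^{\circ}
\approx 0.5^{\circ} \times \frac{1\ \text{h}}{15^{\circ}}
= 2\ \text{minutes of accumulated clock error.}
\]
```

In other words, the prize's 30-nautical-mile tolerance allowed only about two minutes of total clock error over an entire voyage – roughly three seconds per day on a six-week Atlantic crossing, an extraordinary demand for a mechanical timepiece on a rolling, pitching ship.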

Soichiro Honda and Honda Motor Company

In 1922, fifteen-year-old Soichiro Honda left school to work at a motorcycle and car repair shop. While the repair shop gave him tremendous knowledge about the workings of motorcycles and cars, Honda was more interested in manufacturing. In 1936, with a friend, he set up Tokai Seiki, a piston-ring manufacturing firm. After several tries, Tokai Seiki was finally selected by Toyota as a supplier. However, the firm soon lost the contract when only 3 piston rings out of 50 met the required standards. Honda attended engineering school but did not graduate. He travelled extensively in Japan to understand Toyota's production processes, and by 1941 he was reliably supplying Toyota. Then came the war. In 1942, Toyota took 40% control of the firm, and Honda was downgraded from president to senior manager.[2] Towards the end of the war, one Tokai Seiki plant was reduced to rubble by air raids, and in Jan. 1945 the other was destroyed by an earthquake. Honda picked himself up from these setbacks and tried to manufacture a rotary weaving machine for the textile industry; this failed for lack of capital. He then tried to make frosted glass with floral patterns, and then roofing sheets of woven bamboo set in mortar. Uncharacteristically, he never pursued any of these in earnest; his heart didn't seem to be in them. In late 1946, he was visiting an old friend from Tokai Seiki, who happened to show him a generator engine designed for a wireless radio. Honda was immediately inspired to use it for something very different – to power a bicycle. The Honda motorcycle company was born!

George Mitchell and Mitchell Energy

By 1980, George Mitchell was already a Texas natural gas baron, but his wells were drying up. So he turned to a decades-old technique called hydraulic fracturing – fracking. In fracking, water, sand and chemicals are pumped into a well at high pressure to crack the stone of the deep underground shale layer and dislodge the trapped gas.[3] Though demonstrated in the 1940s, the method had been abandoned as commercially unviable. Mitchell did not invent fracking, but starting in 1981, his firm, Mitchell Energy, drilled well after well in the Barnett Shale area of Texas. For the next 15 years the firm struggled to demonstrate, through trial and error, that fracking could be an economically viable and reliable source of natural gas. Finally, in 1997, one technique involving water, sand, and inexpensive foams and gels worked spectacularly well. In 2002, at the age of 82, Mitchell sold his company for $3.2 billion. By 2012, shale gas accounted for nearly 35% of U.S. natural gas production. While still environmentally controversial, Mitchell's fracking technique was hailed by one industry historian as the biggest and most important innovation of this century.[4]

Elizabeth Holmes and Theranos

In 2003, 19-year-old Elizabeth Holmes dropped out of Stanford and started a revolutionary blood diagnostics firm called Theranos. In the last decade, Theranos has raised $400 million, now has 500 employees, and is valued at $9 billion. The $73 billion blood testing industry performs nearly 10 billion tests a year, which inform nearly 70% of all medical decisions. Historically, most of these tests have been performed in hospitals and large, free-standing labs, where the work is time-consuming. Emergency labs are faster but can only perform about 40 different tests. Theranos's process performs nearly 70 different tests in the same time and with just 25 to 50 microliters of blood – nearly 100 times less than what most blood tests require. Finally, Theranos charges 50-75% less than independent labs and about 10% of what hospital labs charge. The firm posts all prices online and aims never to charge more than half the published Medicare rate. Today, Theranos operates in only a few Walgreens drug stores in California. Not bad for an 11-year-old firm. But Elizabeth Holmes' goal is to have a testing facility within five miles of every American.[5]
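The "100 times less" figure checks out under a simple assumption of ours (not a Theranos-published number): a conventional venous blood draw collects on the order of 3 to 5 milliliters.

```latex
\[
5\ \text{mL} = 5{,}000\ \mu\text{L},
\qquad
\frac{5{,}000\ \mu\text{L}}{50\ \mu\text{L}} = 100.
\]
```

Even at the low end of both ranges, a 3 mL draw against a 25 µL sample is still more than a 100-fold reduction.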

Certainly, history shows us that in most industries the path to success is often a long journey. Some ideas, because of the nature of their science, application, and acceptance, experience long cycle times where there are no shortcuts. On average, it takes about a billion dollars and 10 years to bring a new drug to market. Coca-Cola's internal estimates are that it takes about 10 years to build a $1 billion new beverage franchise. When looking out this far, the road is always fraught with uncertainties, ambiguities, setbacks, failures, losses and heartaches. The best innovators persist on this journey with sheer resilience, patience and an indomitable spirit that we call INNOVATOR'S GRIT.

Lately, the concept of personal grit has been getting a lot of media attention, especially through the work of Prof. Angela Duckworth at the University of Pennsylvania. Her research shows that people who achieve outstanding success tend to have a high level of personal passion and unwavering dedication toward the accomplishment of their mission, whatever the obstacles, and however long it takes. In fact, she has found personal grit to be a strong predictor of long-term success in a variety of situations, from the school performance of kids with difficult backgrounds, to cadets surviving the demands of West Point, to the performance of students at Ivy League schools and in National Spelling Bee competitions.

There is still much that we need to learn about innovation and the unique form of grit that powers those who succeed at it. But rather than falsely suggesting that it can be approached as a simple, repeatable formula, we will try to help would-be innovators understand and be prepared for what they will face by bringing to life the true story of the more than twenty-year journey of Pixar Animation Studios and its people as they struggled to survive through a long series of transformations. This journey begins in 1974, when one of the founders, Ed Catmull, took a position running a computer lab for graphics research at a school on Long Island. Then in 1979, Catmull, co-founder Alvy Ray Smith, and their team moved to Lucasfilm, where they began work on computerized film production hardware and special visual effects for films. Because of changes at Lucasfilm, the team was spun out as Pixar in 1985, and the new business would focus on making a unique graphics computer and related software. The team included a small group of animators who produced short animated films to demonstrate the hardware and software. By 1989 it was clear that the hardware and software business would never produce significant income, so they turned their animation and software skills to making TV advertising. Finally, after knocking on Disney's door for 16 years, in 1991 Disney called back, suggesting they partner to make the first fully computer-generated animated film, Toy Story, which itself took four years to complete. The voyage to this first real success was a continuous struggle for survival, and every step was charged with remarkable Innovator's Grit.

We were drawn to this story of Pixar because of the length of time they struggled before reaching success, the many technical innovations they had to develop along the way, the dramatic transformation they helped to create in film-making, and the many remarkable people involved. Although several books have been written about Pixar, all focus solely on history. For the first time, our research, interviews, and writings for the book will show the story of Pixar and its people through an innovator’s eyes. In coming months we will preview material from the book, along with other stories providing important insights for all executives and enterprises into personal, innovator’s, and enterprise grit.

Some of the topics we’ll cover from the book will include:

The birthplace of Pixar – The New York Institute of Technology

The challenges of Pixar’s early film days as part of Industrial Light & Magic (ILM) & Lucasfilm

The innovation history of Walt Disney and George Lucas

Early computer innovations in film-making: Tron, Star Wars, Star Trek II, Jurassic Park.

Pixar’s first project with Disney – The Computer Animation Production System (CAPS)

How culture within the film industry impacted early Pixar and innovation adoption

How Disney moved slowly to accept computer animation

The Pixar Team: Ed Catmull, Alvy Ray Smith, John Lasseter, Steve Jobs

The Disney Corp. Folk: Roy Disney, Jeffrey Katzenberg, Michael Eisner

Pixar’s famous projects: Star Trek – Genesis, The Adventures of André & Wally B., Luxo Jr., Red’s Dream, Tin Toy.

As well as general discussion and true-life innovation stories on:

Evolution and Development of Technology – Hardware and Software

Industry Life Cycles and Innovation Life Cycles

The Loss of Competitiveness among Incumbents

The Rise of the Disruptors

The Coming Together of Technology and Art

The impact of examining the culture of your enterprise on innovation

[1] Behind Instagram’s Success, Networking the Old Way, New York Times, April 13, 2012

[2] http://world.honda.com/history/ , accessed June 25, 2014

[3] Exxon’s Big Get on Shale Gas, by Brian O’Keefe, Fortune, April 16, 2012

[4] He fracked until it paid off, by Jon Gertner, The Lives they Lived, NYTimes Magazine, 21 Dec. 2013

[5] This CEO is out for blood, Fortune, June 12, 2014

Key Words: Innovation; Innovator’s DNA – Networking, Observing, Questioning, Associating, Experimenting; Organic Growth, Culture of Innovation – Purpose, Mastery, Autonomy; Knowledge Work Productivity; Manual Work Productivity

Recently, Lydia Dishman, an innovation and entrepreneurship contributor to Fast Company, asked me to comment on a trend in the workplace – tracking of employee collaboration and productivity using wearable technology devices. You can read my comments in her Fast Company article titled: “Can Performance Be Quantified? Wearable Tech In The Office.” In this blog, I will elaborate on several of the comments I made for the article.

Problem: All the developed countries today are predominantly service/knowledge-based economies. Upwards of 70% of employees work in these sectors. While this has been true for more than 20 years now, productivity in the service sector has unfortunately never reached the levels of productivity in the manufacturing and agricultural sectors. Quantifying, capturing, tracking and improving productivity in the knowledge sector has been even more difficult; hence the interest in this topic. Please note: I make a very clear distinction between low-wage service jobs and relatively higher-wage knowledge work.

Solution: Wearable technology that tracks employees. For example, Hitachi’s Business Microscope is a device that employees wear around their necks at work. It measures and analyzes the employees’ interactions and activities. When employees come within a specified distance of each other, their devices recognize each other and record face time, body and behavior rhythm data to a server. Executives can then analyze which groups tend to interact and cooperate. So, where are we heading with these sophisticated “dog tags?”
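To make the idea concrete, here is a minimal sketch of the kind of analysis such proximity data enables. The log format, names and numbers below are entirely made up for illustration – this is not Hitachi's actual data format or API – but the aggregation logic is representative: sum up face time per pair of badges, then rank people by total recorded interaction.

```python
from collections import defaultdict

# Hypothetical interaction log: (employee_a, employee_b, minutes_of_face_time),
# one record each time two badges come within the specified distance.
events = [
    ("ana", "ben", 12),
    ("ana", "ben", 8),
    ("ana", "carl", 5),
    ("ben", "carl", 30),
]

def face_time_totals(events):
    """Aggregate total face time per unordered pair of employees."""
    totals = defaultdict(int)
    for a, b, minutes in events:
        pair = tuple(sorted((a, b)))  # treat (a, b) and (b, a) as the same pair
        totals[pair] += minutes
    return dict(totals)

def most_connected(events):
    """Rank employees by total recorded face time across all their pairs."""
    per_person = defaultdict(int)
    for pair, minutes in face_time_totals(events).items():
        for person in pair:
            per_person[person] += minutes
    return sorted(per_person.items(), key=lambda kv: -kv[1])

print(face_time_totals(events))  # {('ana', 'ben'): 20, ('ana', 'carl'): 5, ('ben', 'carl'): 30}
print(most_connected(events))    # [('ben', 50), ('carl', 35), ('ana', 25)]
```

A ranking like this is exactly what lets a firm label someone a "connector" or a "loner" – which is also why the measurement deserves the scrutiny discussed below.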

Trend: Over the last 5-7 years, based on several data collection techniques, enterprises have been labeling employees as knowledge “spreaders” or “bottlenecks,” as “loners” or “connectors,” as “influencers” or “followers.” Why are firms doing this?

Challenge: Innovation that spurs organic growth has been the most difficult challenge facing large firms over the last 15+ years. Specifically, firms realize that they need a cadre of seasoned innovators and internal entrepreneurs (intrapreneurs) to spur innovation and organic growth. Unfortunately, except for a few, the majority of firms are struggling in their innovation efforts, as well as in fostering a culture of innovation where these innovators and entrepreneurs can thrive and flourish.

Innovator’s DNA: What makes innovators different? How do they routinely come up with great ideas? How do they think and act? What is their mindset? What are their behaviors? Research shows that great innovators and successful serial entrepreneurs demonstrate five key skills – Networking, Observing, Questioning, Associating and Experimenting.[1] First, they are great at networking – meeting people from diverse backgrounds and with diverse skills. They immerse themselves in situations that expose them to a variety of perspectives. This in turn helps them sharpen their observation, questioning and association skills. When thrown into uncomfortable and unknown situations, most of our senses enter a state of heightened awareness. Hence, intense networking helps innovators and entrepreneurs become good at observing and listening – and, especially, doing so without prejudice. Immersion in and interaction with a diversity of situations propels them to constantly question the status quo within their own areas of expertise or specialty. They are constantly trying to improve and change things for the better. This questioning leads them to associate, copy and relate ideas and experiences across functions, industries and arenas, leading to possible new ideas and solutions. Finally, innovators are great at experimenting – exploring and testing their new ideas and solutions. They don’t just talk about their ideas; they take the initiative to test whether those ideas are in fact opportunities.

Innovation and organic growth within large firms is about routinely identifying great opportunities, shaping and developing them, and then capturing them. For large firms, these great opportunities lie at the intersection of disciplines, functions and/or geographies. As seen in the Innovator’s DNA discussion above, great ideas and creativity happen by associating and merging disparate streams of knowledge. However, association and new opportunities emerge only when there is a lot of networking among the different disciplines and functions of a large enterprise. Networking leads to better observation and listening, and that in turn drives curiosity and questioning of the status quo. Creativity can be highly individualistic; organic growth, however, which is the result of innovation, still depends on a lot of collaboration within large enterprises. So you can see why firms are desperately trying to force networking and collaboration among employees – and trying to measure it.

Innovation is knowledge work. Unfortunately, knowledge work cannot be treated or captured the way we have captured manual work. The traditional way of measuring manual productivity is more than 100 years old; it goes back to Frederick Taylor’s scientific method for manual work. That method was about defining the task, defining standards, measuring against standards, focusing on quantity, and minimizing worker costs for a task through command-and-control structures. However, we live today in Peter Drucker’s knowledge world. Drucker’s knowledge worker, in contrast to Taylor’s manual worker, is much more focused on understanding the task, continuously learning, teaching others and innovating. Ideally, such employees focus on the quality of their work, are treated as assets rather than costs, and work in environments with great autonomy.[2]

Further, there are more differences between manual work and knowledge work. Manual work is visible, whereas knowledge work is invisible. Manual work is highly specialized, quite stable, has structure – a definite process and outcome – and is about running known tasks with the right processes and fewer decisions. Knowledge work, on the other hand, is holistic, always changing, has no defined boundaries of process and outcome, and is about uncovering the unknown by asking the right questions and making a lot of decisions.[3]

Hence, it will be quite difficult to capture knowledge-work productivity using manual-work productivity tools and methodologies. We need to invent new ways of capturing knowledge-worker productivity. Innovative firms have found ways to harness the knowledge worker: 3M has been doing this for nearly five decades, W.L. Gore for the last 40 years, and Google more recently. They energize and engage their knowledge workers with a sense of purpose and enable them to master creativity and innovation in a climate with a great deal of autonomy.

Some questions to ponder: Will these high-tech wearable tracking devices help firms become more creative and innovative? Do they foster networking, observing, questioning, associating and experimenting? Do they transmit a sense of purpose, provide autonomy and enhance mastery?



[1] The Innovator’s DNA, HBR, Dec. 2009

[2] Source: Reinvent Your Enterprise, by Jack Bergstrand

[3] ibid

Key Words: Strategic Change, Innovation, Risk, Uncertainty, Ambiguity, Prediction Logic, Creation Logic, Planning vs. Testing, Project Management, Agile SCRUM vs. Waterfall, Lean Startup, healthcare.gov, JC Penney, Lululemon, Georgia Tech, Coursera.

How Executives Get Fired

On Oct. 1, 2013, the much anticipated healthcare.gov went live. And almost immediately, it crashed. An unanticipated surge in web traffic was blamed for most of the problems. Even those who were able to get through faced a multitude of issues and errors – confusing instructions, missing drop-down tools, unexpected hang-ups and puzzling design. Those who gave up and called the customer service reps didn’t fare any better: the reps couldn’t access the online marketplace either.

On Nov. 29, 2013, JC Penney (JCP), an original member of the S&P 500 since 1957, was dropped from the index after a sharp decline in market value. While JCP still had more than 1,000 stores and 2012 revenues of $17B, the 100+ year-old U.S. mid-range department store had fallen on hard times.

In Oct. 2004, Myron Ullman, a former executive at Macy’s and LVMH, was named CEO. Ullman brought in brands like Liz Claiborne and introduced mini-shops within the department store. By Feb. 2007, JCP’s shares had doubled to nearly $80, a 10-year high. Then the economic downturn hit JCP hard, and by March 2009 the stock was trading at $14. In Jan. 2011, William Ackman, a hedge-fund manager who had built up a sizable position in JCP stock, was appointed to the board. Amid Ackman’s push for new leadership, former Apple retail star Ron Johnson was named JCP’s new CEO in June that year, replacing Ullman. Johnson arrived at JCP in Nov. 2011.

In Dec. 2011, JCP acquired 16% of Martha Stewart Living Omnimedia stock and planned to put “mini-Martha Stewart shops” in many of its stores by 2013. In Jan. 2012, Johnson introduced a strategy built around in-store boutiques and a pricing plan that eliminated the popular JCP coupons. In their place would be “Every Day,” “Monthly Value,” and “Best Price” pricing. Prices would also end in whole numbers rather than in 9 or 7. In May 2012, JCP announced a 20% drop in sales and a $163M loss for Q1. Then, suddenly, in June, the head of marketing, Michael Francis, who had arrived from Target only eight months earlier, announced “he was leaving.” He was blamed for marketing messages that were not resonating with customers.

In Aug. 2012, JCP started rolling out the “Shops” strategy in stores, and simultaneously began an overhaul of the home department in 500 stores. In Nov. 2012, JCP reported a Q3 loss of $123M as sales fell by another 27%. Nevertheless, CEO Johnson said the firm would not diverge from the strategy he had laid out. By the end of the first year of Johnson’s turnaround strategy, JCP had amassed nearly a billion dollars in losses and a 25% drop in revenues. In April 2013, Johnson was fired and Ullman rejoined the firm as CEO.

In Feb. 2013, an online course offered by Georgia Tech and hosted by the leading online-learning firm Coursera promised to teach 40,000 students how to create their own massive open online course. The platform asked participants to sign up using Google Docs. When the crush of students tried to sign up, the system crashed. According to Google, Google Docs only allows 50 people to edit a document simultaneously: a small detail that the planners seemed to have overlooked.[i]

In March 2013, the high-flying Canadian yoga-apparel maker and retailer Lululemon had to recall more than $60 million worth of women's yoga pants for being too see-through. Within a month came an announcement that the “product chief [was] to exit.” The following month, the CEO announced that she was “stepping down.”

As we all know, expressions like “to exit,” “stepping down,” and “spending more time with family” are just euphemisms for getting fired. Why do CEOs and executives get fired? The #1 reason is Mismanaging Change.[ii]

Analytical vs. Emergent Strategies for Growth, Innovation and Change

In a press conference after Johnson was let go from JC Penney, Bill Ackman, who had pushed for Johnson’s hiring, said that Johnson deserved criticism for unleashing a series of pricing and merchandising changes without first testing consumer views. Other Penney insiders criticized Johnson for eliminating the company’s sales and coupons without a broad market test, a move that led to a sales slump. The key words to focus on in these two statements are “testing consumer views” and “broad market test.”

In March 2013, six months before the healthcare.gov website went live, McKinsey was asked to do a risk analysis and develop mitigating strategies. McKinsey submitted a 14-slide presentation to the White House by early April. I have reproduced two key slides from that deck below. The first slide is about “complexity”; the second is about how to manage “complex projects.”

A website like healthcare.gov is a massive and complex undertaking, with too many variables and too many unknowns. When you are dealing with unknowns, you are dealing with uncertainty and/or ambiguity. Risk, on the other hand, is about the “known” world: known variables with data from the past. You can calculate and estimate risk using analytical tools. When you know the variables and have data from the past, you can analyze, predict, plan and then act. Broadly, we can go into the future in two ways: (1) Analysis before Action and (2) Analysis after Action.

The traditional approach to minimizing and managing risk in innovation and change-management projects is to do a lot of analysis before taking action. BHAGs (Big Hairy Audacious Goals) are announced with much fanfare. The future is then approached by performing environmental scanning (SWOT, STEP, value chain analysis), followed by a project plan to execute the strategy. Trend lines are projected based on IRRs, WACC or projected cost-benefits; KPIs and milestones are set and budgets are allocated. When project performance does not meet projections, money and energy are spent to get the project back onto the predicted trend line. Unfortunately, heads roll when the predicted future fails to materialize after a couple of tries.

Below: McKinsey’s exhibit demonstrating the magnitude & complexity of the healthcare.gov website project.

[McKinsey slide 1]

This approach to change and project management makes a number of assumptions: (1) all process and outcome variables are known and can be accounted for ex-ante, (2) existing data from past projects can be used to predict the process and outcome of this project, (3) some variation from projections can be accommodated along the way using managerial judgment, and (4) failure is not an option.

This way of going into the future is called “predictive logic,” and the method is called “analytical strategy.” I simply call it the BIG BANG approach to change. Most large firms, governments and institutions still predominantly prefer this mode of going into the future. I call the firms, organizations and individuals who principally use this strategy “PLANNERS.”

On the other hand, all innovative and complex change-management projects have a number of unknowns. Specifically, there are two types: known unknowns and unknown unknowns. Uncertainty is about known unknowns: you know which variables may impact the process and outcome of the project, but there is no data from the past with which to assign probabilities. Ambiguity is a second-order uncertainty: one cannot even surmise what variables may be lurking in the background; they appear only once the project is underway. Unfortunately, analytical strategies do not account for these unknowns ex-ante. So, when there are a number of unknown variables, most analysis, and hence most a priori prediction of outcomes, becomes a futile exercise.

In the presence of unknowns, the way to manage projects is drastically different. Seasoned entrepreneurs, innovators and VCs test their ideas for potential opportunities predominantly through Analysis after Action. They Think Big, but Start Small. They start several small projects to test their hypotheses. They prototype rapidly and try to establish proof of concept through quick feedback from the market: voice of customer, voice of technology, voice of supply and voice of demand. They try to fail fast, fail cheap and fail smart. In doing so, they learn quickly by uncovering hitherto unknown variables and/or creating data where there is none. With this new knowledge they refine their hypotheses and business models. They iterate through this process of prototyping, failing, uncovering unknowns and establishing a viable business model. They pour in more resources only after a positive proof of concept has been established, and the successful business model is replicated and scaled slowly. I call this approach to going into the future START SMALL, as against the BIG BANG approach described previously. I call the firms that employ this technique “TESTERS.”

In 2009, Rita McGrath termed this method of going into the future “discovery-driven growth.” Decades earlier, in 1978, Henry Mintzberg termed it “emergent strategy,” as against “intended or deliberate strategy” (analytical). In the mid-1990s, software developers started using Agile Scrum (iterative, emergent techniques) instead of the traditional Waterfall methodology (sequential, analytical techniques), building on the work of Takeuchi and Nonaka in the mid-1980s. Most of today’s “Lean Startup” concepts, principles and frameworks (Steve Blank 2012, Eric Ries 2011 and Ash Maurya 2012) profess this very same emergent strategy. At Babson, our entrepreneurship and innovation faculty have been teaching this stuff for decades.

To summarize, PLANNERS usually follow the traditional BIG BANG approach, characterized by the sequence: Analyze > Predict > Plan > Act > Full-Scale Launch. TESTERS, on the other hand, follow the START SMALL approach, characterized by the sequence: Design > Build > Test > Learn > Redesign > Launch > Scale Slowly.
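For readers who think in code, the contrast between the two sequences can be sketched as two small loops. This is purely an illustrative sketch: the function names, the feedback signal, and the iteration limit are all hypothetical placeholders for the activities described above, not part of any source framework.

```python
def market_feedback(iteration):
    """Hypothetical market signal: here, proof of concept arrives on try 3."""
    return iteration >= 2

def big_bang(idea):
    """PLANNERS: Analyze > Predict > Plan > Act > Full-Scale Launch."""
    plan = f"plan from up-front analysis of {idea}"  # all analysis before action
    return f"full-scale launch: {plan}"              # one large, committed step

def start_small(idea, max_iterations=5):
    """TESTERS: Design > Build > Test > Learn > Redesign > Launch > Scale Slowly."""
    design = idea
    for i in range(max_iterations):
        prototype = f"prototype {i}: {design}"   # build small and cheap
        if market_feedback(i):                   # analysis after action
            return f"scale slowly: {prototype}"  # invest only after proof of concept
        design = f"{design} (revised)"           # learn, redesign, iterate
    return "stop: no viable business model"      # fail fast, cheap and smart

print(start_small("new pricing model"))
```

The key structural difference is visible in the shapes of the two functions: `big_bang` commits everything in a single pass, while `start_small` can exit early, revise cheaply, or abandon the idea before major resources are spent.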

Below: McKinsey’s slide contrasting the Start Small “emergent strategy” technique predominantly used by “Testers” (on the left) with the Big Bang “analytical strategy” technique usually used by “Planners” (on the right).

[McKinsey slide 2]

I am not suggesting that one approach is good and the other is bad. The right question to ask is: when do you use analytical strategies and when do you use emergent strategies? The Big Bang approach to change or project management works very well for version 2 or 3 of a product. For incremental innovations and for the known world of known technology, known products, known customers and known business models, when we have a lot of data and prior experience, the Big Bang approach still works very well. Unfortunately, in the unknown world of unproven technologies, unidentified customers and untested business models, and for radical innovation or major organizational change, the Big Bang approach fails miserably. The Start Small approach works much better.

Unfortunately, healthcare.gov chose the Big Bang approach. At least five months prior to launch, McKinsey’s warnings were quite clear: “…there was scant time to test the system before launch;…there wasn’t enough testing and revision;…create a Version 1.0 before full launch….”

I have elaborated on the Start Small concept in a previous blog as well.


Have a great holiday season!


[i] Crash sinks course on online teaching, WSJ, Feb. 4, 2013

[ii] Why CEOs get fired, by Mark Murphy, Leadership Excellence, Vol. 22, No. 9, 2005

