Reflections on the passing of Walter Becker

In 1977 I was a precocious seven-year-old with an impressive record collection (built on the 13-albums-for-one-cent deal from Columbia House) and a crystal AM radio. Strange things happen when your brain matures before your body. You feel like an adult before your height or peer-group membership entitle you to adult status. You can attract the hostility of those peers and in response become cool and contemptuous, traits you’ll later need to outgrow. But the upside is being able to pursue adult interests with the freshness and energy of a kid. Whenever I got tired of baseball, tag, war, wrestling, and so on, I would retreat to my parents’ dining room and plant myself under the table with the crystal radio and its single earpiece. It was there that I first heard Steely Dan.


The song was “Rikki Don’t Lose That Number.” I could hear the lyrics clearly, but the meaning was obscure. Who was Rikki? What was important about the number? The singer didn’t really sound like a singer, either. He seemed to be sneering more than singing. Moreover, he sounded completely and utterly knowing. Where had he acquired this perfect knowledge? How had he acquired this attitude? I was intensely curious. Alas, I had burned through the entire 13-album deal and lacked the cash to fulfill my record-buying obligations. (To this day, I wonder if deadbeat seven-year-olds ended up bankrupting Columbia House.) My curiosity about this Dan guy would have to wait.

I didn’t really start listening to Steely Dan in earnest until I was a junior in high school. That year my mother put my brother and me on a bus to Port Authority and the filth and chaos of pre-Giuliani New York. On the bus a derelict man with an eye patch and a motorized larynx told us to “Shut the hell up,” which we did. When we arrived at the station we were greeted by my cousin’s husband, a slight guy with zero athletic presence but plenty of New York attitude. We passed through a narrow hallway where a busker was playing a marimba. An aggressive panhandler asked us for money. “Fuck off,” snarled our guide. That was our brief tutorial in the attitude. Back at my cousin’s apartment, we ate Jamaican takeout and listened to Aja. The associations of urban chaos, third-world cuisine, and the music all stuck, and in our minds the Dan became the official soundtrack of the New York visit. “That New York smartass music,” my brother called it.

There were other times and other associations. My jazz piano teacher, a sideman for Sammy Davis Jr., never mentioned the Dan, but as I learned about them he began to make a lot more sense to me. The den where I took lessons smelled like pipe smoke and was littered with books by Abbie Hoffman and William S. Burroughs. These references became part of my mental furniture, of a piece with the band’s worldview. Then there was the retail job I had during my college summers in a print shop. “Peg” and “Josie” had made it onto the corporate muzak playlist, and I heard each in the store two or three times a day. They were the only songs on the CD I never got tired of. I also enjoyed the irony of hearing such slick and sharp-witted songs in the airheaded environment of a suburban mall. Steely Dan consistently pitched over everyone’s heads, and instead of saying they didn’t care what anyone thought of them, they truly and utterly didn’t.

In late September 2001 I was living in Brooklyn and still shell-shocked from 9/11, a day I spent running around and hiding in Lower Manhattan. When I finally felt well enough to leave my apartment, I wandered into a bookstore in Brooklyn Heights and picked up Brian Sweet’s Reelin’ in the Years. For the first time I actually started learning about the band, and the more I read, the more intrigued I became. I loved the way they had used studio musicians, eschewed touring, and achieved fame on their own terms. I liked their cryptic, oblique tone in interviews and their coldness towards media figures who expected them to curry favor. I got deeper and deeper into the band’s catalog, especially the amazing run from Can’t Buy A Thrill (1972) to Gaucho (1980).

Many people do not like Steely Dan. A lot of women find them creepy and misogynistic—a charge that may have some merit. Others find the music too bland or smooth. When I saw George Carlin perform a few years before he died, he made a special point of mocking “the kind of people who listen to Steely Dan.” I could have been offended, but instead was just surprised — surprised that he’d missed the irony and humor that actually were on a par with his own. Somehow he had mistaken the muzak for the music. Such is the price of pitching over people’s heads, I suppose.

When I heard this week that Walter Becker had passed away I felt empty and sad. There are a limited number of hyper-literate, hyper-ironic Northeastern smartasses in this world, and when one dies, a part of me dies with him. I was at least glad I had seen the band live in 2015. Donald Fagen is still with us, thankfully. He has pledged to do all he can to keep the band’s music alive as long as he lives. It’s a much smaller gesture on my part, but as a fan and a musician myself, I will do the same.

Digital value capture: the case of Uber

 

[Image caption: Idle inventory doing its thing]

By digital I mean the business landscape created by cheap and ubiquitous computing. By value capture I mean meeting demand, or creating and meeting a previously unidentified demand. There is endless hype about digital, mobile, big data, consumer engagement, and related topics. There are precious few common-sense explanations of how digital creates opportunities for value capture.

Opportunities for digital innovation are probably infinite – limited only by human inertia and lack of imagination. However, those infinite opportunities can be described in terms of more general patterns. By studying those patterns, we can fuel our imagination and accelerate the pace of discovery and innovation.

There is precedent for this type of analysis in a pre-digital context. In the 1950s, the Soviet engineer Genrich Altshuller created a system for classifying not just innovations, but types of innovation. It was called TRIZ, a Russian acronym for a phrase meaning “theory of the resolution of invention-related tasks.” Altshuller, who worked for a patent office, analyzed and described inventions in terms of the more general patterns of innovation they employed.

How does TRIZ work in practice? Consider the example of a hydrofoil boat. TRIZ analysis reveals that the boat uses the principle “To compensate for the weight of an object, merge it with other objects that provide lift.” An engineer studying this principle might use it to invent something new – for example, a pair of marine binoculars with a plastic foam casing, so they will not sink if dropped in water.
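
To make the catalog idea concrete, here is a minimal Python sketch of how a TRIZ-style principle lookup might be structured. The entries and the function are hypothetical illustrations, not Altshuller’s actual numbered principles:

```python
# A toy TRIZ-style catalog: each abstract principle is paired with known
# inventions that instantiate it. Entries are illustrative, not Altshuller's.
CATALOG = {
    "counterweight": {
        "principle": "To compensate for the weight of an object, merge it "
                     "with other objects that provide lift.",
        "instances": ["hydrofoil boat", "foam-cased marine binoculars"],
    },
    "segmentation": {
        "principle": "Divide an object into independent parts.",
        "instances": ["modular furniture", "microservice architectures"],
    },
}

def principles_for(keyword):
    """Find principles whose known instances mention a keyword, so an engineer
    can borrow a pattern from one domain and reapply it in another."""
    return [
        (name, entry["principle"])
        for name, entry in CATALOG.items()
        if any(keyword in instance for instance in entry["instances"])
    ]

print(principles_for("marine"))  # -> [('counterweight', 'To compensate ...')]
```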

To my knowledge, no one has created a formal digital version of TRIZ. But TRIZ-style analysis can help us understand how digital innovation happens and hence accelerate its pace. To demonstrate how, let’s look at a single, well-known and well-understood company: Uber. Uber and its competitors (such as Lyft) provide an alternative to taxi service by matching passengers with private individuals who provide rides in their own personal vehicles.

Uber may seem like a clichéd and obvious example of digital innovation, and a simple one at that. It’s just ride sharing, right? I don’t agree. On the contrary, Uber meets the economic – and psychological – needs of multiple stakeholders in a sophisticated and subtle way. Closer study reveals at least six patterns of value capture in its business model:

Take advantage of mobile transceivers and cheap connectivity. Smartphones are not simply a new content distribution platform. They are transceivers that can send and receive diverse types of data in both active and passive modes. Uber takes advantage of smartphones as transceivers, using them to match ride-seekers and ride-providers in time and space in the real world. This point may seem rather obvious to digital natives or Silicon Valley VCs. But most of the non-tech companies I’ve seen building “mobile apps” view mobile as just another way of delivering content, or letting people perform tasks “on the go.” Few are taking full advantage of the possibilities provided by transceivers.
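
Uber’s actual dispatch system is proprietary and far more sophisticated, but the core primitive is easy to sketch: pair a ride-seeker with the nearest available transceiver-equipped vehicle. A minimal, purely illustrative version in Python:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_driver(rider, available_drivers):
    """Pick the closest idle driver to the rider; both are (id, lat, lon) tuples."""
    return min(
        available_drivers,
        key=lambda d: haversine_km(rider[1], rider[2], d[1], d[2]),
    )

drivers = [("d1", 40.7359, -73.9911), ("d2", 40.7527, -73.9772)]
rider = ("r1", 40.7411, -73.9897)
print(nearest_driver(rider, drivers))  # -> ('d1', 40.7359, -73.9911)
```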

Monetize idle inventory. The special insight of Uber and its competitors was that individual drivers’ cars are a form of inventory – inventory with a very low rate of utilization. In a world where there was no way to communicate the position, usage state, or availability of vehicles, this insight was effectively hidden from view. But transceivers, in the form of mobile phones, completely change this situation. Now a vehicle owner can easily find nearby passengers who want a ride at any hour of the day or night. This subtle change makes it possible to monetize millions of vehicles that would otherwise just be sitting idle. The best part, from Uber’s perspective, is that it doesn’t have to own any of this inventory. It monetizes simply by managing.
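
The arithmetic behind the insight is stark. A back-of-envelope sketch, assuming (purely for illustration) a privately owned car driven one hour per day:

```python
# Back-of-envelope utilization of a privately owned car.
# The usage figure is an assumption for illustration only.
hours_in_use_per_day = 1.0
utilization = hours_in_use_per_day / 24
print(f"{utilization:.1%} utilized")  # -> 4.2% utilized; the other ~96% is idle
```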

Monetize latent skill. Driving is not a specialized skill. In the United States, over 85% of driving-age adults are licensed to drive, and 90% of households own a car. Like private car inventory, this pool of driving skill is chronically underutilized. Uber therefore has a vast pool of potential drivers to draw from. This broad supply keeps labor costs low, and (to the chagrin of taxi drivers in the US and elsewhere) weakens drivers’ collective bargaining power. The result is lower costs and a significant price advantage over taxis.

Simplify transactions. Uber greatly simplifies the process of paying and getting paid. The price of a ride is set remotely and cannot be overridden by the driver. The passenger cannot stiff the driver, because payment is automatic. Neither party has to worry about the passenger having cash, or about the driver’s card-processing system going down.

Reduce transaction costs. Uber does not require dispatchers. It does not require a third-party certifying board to issue taxi medallions. It is less likely to become a politicized racket like the taxi oligopolies of many cities and towns – yet another reason it is despised by incumbents. All of these factors remove middlemen and reduce costs.

Monetize resulting data assets. By tracking every transaction and every ride, Uber is accumulating a massive amount of data that it can use to expand its business adjacently into food delivery (as it has already done with UberEATS), or into yet-unforeseen completely new areas. This data asset also makes Uber a compelling partner for any business that would benefit from a superior understanding of travel times, route optimization, or other details of highly localized logistics.
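
As an illustration of how that exhaust data becomes an asset: every completed trip yields an observation, and simple aggregation turns the observations into a queryable estimate. The zone names and figures below are invented:

```python
from collections import defaultdict
from statistics import median

# rides: (origin_zone, dest_zone, minutes) triples accumulated from every trip
rides = [("soho", "jfk", 52), ("soho", "jfk", 47), ("soho", "midtown", 18)]

travel_times = defaultdict(list)
for origin, dest, minutes in rides:
    travel_times[(origin, dest)].append(minutes)

# A queryable data asset: median observed travel time per zone pair
estimates = {pair: median(ts) for pair, ts in travel_times.items()}
print(estimates[("soho", "jfk")])  # -> 49.5
```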

There’s your “simple” example: six patterns of value capture right there in plain sight, but rarely discussed clearly in the business press, or in all the buzzy literature about digital innovation. We indeed need a digital TRIZ.

 

The transceiver and the field of digital competition

The future of business is a future of transceivers. A transceiver is a device that both transmits and receives. The term itself dates from the 1920s. It may seem odd to speak about the future using 90-year-old terminology. But the word “wireless” is even older – and it seems that the potential of wireless and transceivers is only now being fully realized.


The concept of a transceiver is meant to provoke fresh thinking about digital environments and digital competition. We are approaching a state where computing and connectivity will be so cheap and so ubiquitous that every person and every significant object will be continuously networked together. Our current terminology is inadequate to describe this state. The term “digital” itself, although I find it indispensable, is overly associated with marketing. “Internet of Things” (IoT) is too big and too vague, and doesn’t suggest specifics. (IoT reminds me of the guru’s assertion that “All is one” – ok, all very well, but what do we do now?)

Thinking of objects as transceivers, by contrast, brings this strange new world to life. In the world we are entering, people and things will be outfitted with sensors, processors, and connectivity at all times. Both will effectively become full-time transceivers, each with active and passive modes of transmitting, and active and passive modes of receiving. This state will entail an unprecedented ability to measure, analyze, predict, or redirect physical action.

It is this ongoing interaction with the physical world that separates “digital,” this age of the transceiver, from what I have elsewhere called clerical IT. From the perspective of this digital future, the “information age” will look like a primitive period of electronic bookkeeping. Gartner makes a similar point in its book Digital to the Core: digital business is “the creation of new business designs by blurring the digital and physical worlds.”

This blurring has three major categories of implications: transactional, analytical, and trust-related. The transactional implications are that we will be able to optimize and redirect action in the physical world more efficiently and effectively than ever before. The analytical implications are that every request and every response will generate a trail of data, which we will be able to study from the standpoint of the past (historical analysis), the present (real-time intervention), or the future (predictive analytics). Lastly, the trust-related implications are that everything that occurs will be traceable and auditable after the fact. We tend to focus solely on the dystopian aspects of traceability that threaten privacy. What we miss is that traceability defines and creates trust relationships.[1]
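
One minimal way to picture the trust-related implication is a tamper-evident event log, in which each entry’s hash covers its predecessor, so the past cannot be silently rewritten. A sketch, with invented record fields:

```python
import hashlib
import json

def append(log, event):
    """Append an event whose hash covers the previous entry; altering any
    past record breaks every hash after it, making tampering detectable."""
    prev = log[-1]["hash"] if log else "genesis"
    digest = hashlib.sha256(
        (json.dumps(event, sort_keys=True) + prev).encode()
    ).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

trail = []
append(trail, {"driver": "d1", "rider": "r7", "route": ["A", "B"]})
append(trail, {"driver": "d1", "rider": "r8", "route": ["B", "C"]})
print(trail[1]["prev"] == trail[0]["hash"])  # -> True: the records are chained
```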

These are the dimensions in which the transceiver redefines the competitive field. Companies caught up in the last phase of Internet innovation – “Let’s put it online” or even “Let’s put it on a mobile device” – are stuck in the assumptions of clerical IT. They are missing the broader implications of digital, as are those who believe digital is just another marketing channel.

[1] An Uber driver, for example, would be foolish to use Uber to pick up and rob passengers, since there would be a record of the driver, the passenger, their interaction, and the route taken. Surely some Uber drivers have already made this foolish mistake – but then, no deterrent has ever stopped every fool in every circumstance.

The dot-bomb and the origins of digital consulting


In late 1998 I was working on an e-commerce project just off Union Square in New York City. It was an effort typical of the time. We were building an online store with hopes of taking it live before Christmas. The air was thick with possibility; the project, almost free of planning. Christmas approached and it became clear we would release absolutely nothing. It did not matter. The company doubled down. The Christmas party featured excellent catering and free shiatsu massages. Budgets were renewed and we soared into 1999.

In the middle months of the project, perhaps January or February, a new species of consultant began showing up in the open-plan cubicles and at meetings. They were from “Scient” – an odd, intimidating coinage suggesting both aesthetic flair and methodical precision. The Scient team was unlike the typical web-commerce oddballs who had opportunistically materialized in the mid-’90s to give advice about a field that did not yet exist. They were more straight-arrow Big 5 types. They wore jeans and had minor hipster affectations, none of them particularly convincing. They had an ad agency vibe, like upper-middle-class kids who had chosen edgy but not-quite-unsafe careers.

On a snowy evening a few weeks after the Scient folks started appearing, I crossed the square, entered a rehabilitated industrial building, and climbed the stairs to peer into their New York City headquarters. It was standard-issue dot-com: a loft-like space with Herman Miller chairs and commercial poster art on the walls.

Back at the client, the budget continued to climb while the scope continued to shrink. In midwinter, desperate to release some sort of result to the public, we deployed what we had – which was, as one colleague summarized it, “a $20 million e-card system.” A series of guru-like frauds came and went, each promising to save the project, and then slinking out a couple of months later. I did what any sensible young man would do, and headed off to work for a bank.

I saw a couple of Scient consultants working at the bank, but within a year, they and their brand had disappeared. I wondered if I had hallucinated the whole thing. Within 18 months, 9/11 and the Enron collapse occurred. No one cared about dot-coms anymore. Everyone was talking about terrorism, war, and cybersecurity. The surviving dot-communists with any appreciable skills took corporate jobs, especially at places like Comcast in Philadelphia, where the cable giant built an entire floor of its new building in the familiar loft style, with wood floors, foosball tables, and other informal trappings to keep the structure-hating commies happy.

Scient wasn’t the only e-commerce consultancy that appeared and vanished. There was Viant, which had the same founder as Scient and a name that, suspiciously, rhymed with it. There were also Sapient (almost rhyming yet again), Razorfish, Proxicom, and Diamond. Some of these simply collapsed; others were bought; some shrank and lay low.

All was quiet on the e-commerce front. It wasn’t until about 2005 that I started hearing about “Web 2.0.” Web 2.0 business plans began duly circulating. A vocabulary of embarrassing new jargon spread. It looked like the dot-bomb all over again. This time, however, the thing didn’t collapse. It wasn’t precarious or arrogant enough to collapse. It seemed businesses had learned a few things from dot-com mania and were executing their projects more wisely and carefully. Iterative development had replaced the “build it and they will come” approach. Companies seemed determined to “fail small, early, and often,” almost to the point of embracing failure as a fetish. This attitude may have been mildly ridiculous, but in terms of effect, it was far better than the to-the-barricades attitude of high dot-communism.

While the reformed revolutionaries were iterating their way towards Web 2.0, taking their Long March Through the Institutions, I had retreated to the mainstream of management consulting, performing sober activities like program and change management, and providing tender advice to distraught executives. I found myself surrounded by colleagues with a taste for khakis and blue oxfords. And yet, the call of e-commerce, Web 2.0 – or whatever it would be rebranded next – still beckoned. If I’m being honest, I was bored.

At the apex of this boredom, a new generation of edgy, web-ish consultants suddenly appeared in our midst. My firm bought a marketing agency. Overnight, we had a new floor of cool kids at corporate HQ. Moreover, they were doing real things at real companies, riding the new wave of mobile, social, and data analytics. For a couple of years they didn’t really call their specialty anything. About three or four years in, though, they started calling it “digital.”

And so here we are. Look at Accenture, the Big 4, and the major strategy firms right now, and you will find that almost all of them have an officially branded digital practice. From one perspective, you could conclude this is simply one more fad in a series of fads. But I disagree. Instead, I believe we are working through the phases of a giant Internet hype curve. Dot-com mania was the “peak of inflated expectations.” But after the dot-bomb leveled everything, the real work began, and continues apace.

Digital hype and digital essence


Resistance to hype is not always wisdom. In his Autobiography, Benjamin Franklin recounts the story of a neighbor who refused to buy property in his city,

… for Philadelphia was a sinking place, the people already half bankrupt, or near being so; all appearances to the contrary, such as new buildings and the rise of rents, being to his certain knowledge fallacious; for they were, in fact, among the things that would soon ruin us.

Franklin nearly fell under the man’s melancholy spell, but fortunately he had already made his investments. He would be vindicated.

This man continued to live in this decaying place, and to declaim in the same strain, refusing for many years to buy a house there, because all was going to destruction; and at last I had the pleasure of seeing him give five times as much for one as he might have bought it for when he first began his croaking.

In the late 1990s I heard about a CIO (also a Philadelphian, it turns out) who insisted that e-commerce was “a fad.” He was half-correct. The term was a fad, but the thing itself was real. His deferred investments left the company playing catch-up for over a decade and a half.

“Digital” certainly implies elements of hype. These include fashionable new ways of working – flattened organizational structures, open spaces, scrums and kanban boards. It also includes a new cult of youth: the inscrutable and difficult-to-placate Millennials, the “digital natives” who grasp the new realities at a level the rest of us cannot understand. (To put this conceit in proper perspective, watch this scene from A Hard Day’s Night.) Digital also means a certain style of product design, sleek and smooth, which looks fresh now but, like all design, will eventually look dated.

But hype should never be dismissed. It should be interpreted as signal, and studied for essence. There is a definite business essence beneath digital hype. It consists of related trends in strategy, marketing, and analytics. The underlying reality is a genuinely novel environment created by mobile, social, and the Internet of Things (IoT). None of these phenomena are fads. They are rather new environmental conditions that change the traditional relationship between business strategy and information technology. They transform technology from an implementation detail (clerical IT) to an intrinsic element of strategy. The greatest evidence of this shift is that the established strategy consultancies (McKinsey, BCG, et al.) have each created their own digital practices. Whatever you think of these firms, they are neither trendy nor easily led.

But why are mobile, social, and IoT so important? Why do they represent a tectonic shift in business, rather than simply one more trend in a long line of trends? I believe the answer is that they create a completely new field of possibility through the way they blend the physical and virtual worlds. A person with a mobile device is effectively a human data transceiver. An IoT device is a nonhuman one. APIs and service-oriented architectures enable free interaction between all of these transceiver-actors, interaction that creates massive amounts of data while meeting real objectives in the physical world. At the same time, we now have all the cheap processing power we need to analyze that data, improve the interactions, and meet those objectives more and more effectively.

These conditions change the primary focus of businesses from efficiency to responsiveness. It is not that efficiency is no longer important. But it is a solved problem of the industrial age. Basic efficiency is not the competitive differentiator it used to be. The world of human and nonhuman transceivers shifts the field of competition from efficient planning to responsive interaction.

Business strategy cannot possibly persist unchanged in this environment. The strategies of efficiency belong to the industrial past. Industrial-age strategy is represented by a pyramid: orders issued from the top and executed at subsequent layers below. Digital strategy is far closer to John Boyd’s OODA (Observe, Orient, Decide, Act) loop. Those who take best and fastest advantage of the human and nonhuman transceivers win. Those trapped in the assumptions of linear and top-down planning lose. Even if they are parroting digital hype, they are still missing digital essence.

Compete, innovate, or both?

Competing or innovating is the most fundamental strategic decision you can make. Confusing the two is a recipe for frustration and failure.


Each approach dictates a specific mindset and attitude that ultimately influences every decision a company makes. Neither is “better.” A competitor is defined by struggle and the intentional embrace of that struggle. An innovator is defined by new or unusual thinking that upends the conventional logic of competition. These stances dictate strategic, tactical, and operational decisions and effectively become part of a company’s cultural unconscious.

Companies commonly assert that they will “innovate in order to beat the competition.” As an outcome, this is altogether plausible. As a process, it is not. The statement confuses outcome with original intent and, more importantly, with correct procedure. Innovation can indeed end in beating the competition. But starting with that goal clouds and dilutes the process of innovation: it means you have accepted the terrain and the stakes in a manner that will limit and constrict your thinking. It mistakes a serial process (innovating in a way that ends up defeating the competition) for a fictional parallel one (achieving that outcome by making it the explicit goal from the start).

Competition begins with the consent to enter a defined market space. The relationship to one’s opponents becomes the primary driving force in decision-making. Competing means I have accepted that there is a limited-sum game worth participating in. It means I believe I have special advantages that will let me enter this defined space, “attack,” and gain a valuable share of the business that is currently flowing to those competitors. It is, as Kim and Mauborgne put it, a “red ocean strategy” based on the acceptance of an arena where I can hit my opponent, but in which my opponent can also hit me back.

Innovation begins with a blank page. It entails an open starting relationship to the environment and all the players in that environment. It implies a complete reframing or restatement of a traditional business problem or even a rejection of the problem itself as immaterial. Innovation is a “blue ocean strategy” that avoids competition and looks for freely navigable spaces. The innovator may end up destroying many diverse types of “competition” incidentally as he or she creates a new space in the business environment. But defeating competition is an accidental outcome and is never conceptualized as the original goal.

Winning through innovation, again, must happen serially after the act of innovation itself. The point cannot be made strongly enough. The innovator does not accept the conventional definition of a situation or problem as a given. Instead, she reevaluates an entire value chain or environment, observes what has changed, and rethinks the conventionally-understood goals and values of all players. The result is a radically different concept of the value delivered, which may end up undermining many competitors, but not by design.

Uber is an outstanding example of an innovative service based on a fundamental redefinition of value. The traditionally understood value of a taxi ride is that it takes the passenger from Point A to Point B. Uber goes deeper, and focuses on the subjective personal experience of finding, riding in, and paying for a cab. It takes the most vexing parts of this process – difficulty finding the cab, worry about when a called cab will show up, concern that the driver won’t know the destination, and uncertainty about how payment will be accepted – and creates a single seamless experience that eliminates all four issues. It does so by leveraging the smartphone, a new environmental variable that makes the entire solution possible.

Ambivalence towards competition and a general preference for innovation represent the entrepreneurial tone of our times. At the far end of antipathy towards competition we have Peter Thiel, cofounder of PayPal and author of the vigorous Zero to One. Thiel, like the bygone monopolists of the Gilded Age, views competition as a fundamentally negative and destructive force – one that benefits almost no one, not even necessarily the consumer. As an engineer focused on the time and cost required to create innovative products, Thiel believes competition is an ideology within American business and emphasizes instead the importance of originality and even secrecy.

A superficially different but functionally similar position is held by Marc Benioff, founder of Salesforce.com. As a promoter, salesperson, and connector, Benioff is focused on messaging – more specifically, on the messaging that wins business. In Behind the Cloud, he counsels readers to “always go after Goliath” and recounts how Salesforce grew through a direct assault on the software industry – its “No Software” campaign and clever marketing attacks on Oracle and Siebel. However, Benioff’s “embrace” of competition is mostly rhetorical and hence deceptive. Salesforce itself is based on a radical rethinking of the software ecosystem and the assumptions of on-premise enterprise software. A world-beating product is the happy outcome of these innovations, but if Salesforce’s original intent had been simply to “beat Oracle,” it would have been locked into a battle with the giant and would have lost on the much stronger competitor’s terrain.

Why do today’s entrepreneurs seemingly prefer innovation to competition? The answer is likely that they have a new platform on which to innovate, the wired world and the “Internet of things” forming around it. These conditions make the radical reframing required for innovation possible and plausible in countless situations. Entrepreneurs are using technology to redefine business problems, leverage others’ assets, and skim considerable value in the process, as Uber, Airbnb, and numerous others are doing. These types of innovations, like drug compounds in the pharmaceutical industry, are not infinite in number. Eventually we may find ourselves back in conditions where competition is the better, or perhaps the only, option for many businesses. Until then, the cultivation of the innovative mindset seems most likely to generate the growth, leverage, and strong profits that businesses naturally seek.

Why you can’t run your company with big data

By “you” I don’t mean “one.” I mean you. There are people out there running their companies with big data. But chances are you are not one of them.

In recent years data and analytics have become a business ideology. IBM runs ads during football games to tout how it is creating “a smarter planet.” A few choice anecdotes circulate, such as the one about how Target knew a teenaged girl was pregnant before her father did, simply by studying her purchase history. Executives want their companies to become “data-driven” in the certainty that this orientation will lead to more customers and greater profitability. Vendors promise magic results, and the promise of magic results is used to justify big implementations, featuring the usual cast of large vendors, global integrators, and confused clients, as well as the usual phenomenon of budgets gone wild.

Do not assume you are reading yet another piece attacking big data hype. I happen to believe that much of the hype around data is understated, and that those who dismiss big data as yesterday’s news are sleeping through the most essential phase of the hype curve. We are truly in a new era of unprecedented data availability and cheap processing power. This era will transform business. I am skeptical about neither proposition. What I am skeptical about is the ability of most traditional companies to participate in that data revolution, let alone lead it. There are sharp, data-centric players already flourishing out there. But who are they? The case studies seem to always be about the usual suspects like Google, Amazon, and some edgy startups, with some scattered examples from traditional industries thrown in to enrich the story.

Look around your own company and be honest with yourself. If you work in any sort of traditional corporation, you are probably not “leveraging big data,” and the odds are you never quite got a handle on your own small data either. Your company likely lacks three essential characteristics of a data-driven organization:

  • Data literacy, meaning a shared competency in mathematics and statistics that lets teams study, learn from, and apply data to real-world problems.
  • Flexible processes and systems which can incorporate changes rapidly in response to new information, in critical areas such as pricing, product development and the customer relationship.
  • A data-focused culture in which facts, information, and assumptions can be analyzed, debated openly, and ultimately used as the basis for action.

Rather than these qualities, the typical corporation more likely has what I call the three H’s: hierarchy, hunches, and hubris.

The archetypal data-centric company, Google, is a very different place than where you work. Not only does it hire engineers, mathematicians and computer scientists; it is run by them. Data literacy and an innovative mindset are assumed qualities of Google’s employees, which it calls “smart creatives.” The company is built for responsiveness: it is relatively flat and is an interlocking set of literal and metaphorical “services” designed to share and apply information and insight across product lines. It has a meritocratic culture in which the best-formulated and best-supported argument is supposed to win, no matter whom it came from.

This is not to say that Google always lives up to its ideals, that its methods would work in every industry, or that it is a perfect place to work. The point is rather that as a true child of the information age, the company is built to profit from data from top to bottom. Its competency with data is a first principle, not a feature grafted on after the fact. Conversely, injecting plentiful data and analysis into a traditional company will not turn it into Google. In some cases applying these tools in an old-line organization will only cause frustration. Employees will begin seeing vast gaps between what they could accomplish and what they are actually permitted to do. They will gain insights that no one is empowered to act upon. And a fledgling data function may become nothing more than an elaborate internal science project, a curiosity that generates interest but does not drive real results.

Given that you are not a Google and not likely to become one, what should you do? Given that I do not believe this is a solved problem, I do not have any glib advice. But I will say that most of what I see in the analyst literature about creating a data-centric organization is not very enlightening. Recommendations to involve stronger business sponsors or institute more governance or roll out incrementally seem both elementary and inadequate. We need to start thinking more seriously about what a data-centric organization should actually look like. In that spirit, I have three suggested avenues for exploration.

First, combine focused data projects with process projects, with the goal of solving very specific business problems. Find improvement opportunities that could be addressed with better information. Identify, select, and begin governing the information needed to support that process. Simultaneously, engineer flexibility into the process itself so that the people who execute it will actually be able to use and apply insight to improve results. Recognize that both halves – the right data and the process flexibility – are equally important.

Second, build a data-literacy program within your organization, spanning levels and focusing first (of course) on the process areas in which you want to make initial progress. Don’t assume that all your staff are mathematically or statistically literate; but likewise, do not assume that they cannot achieve improved levels of competency. Educate them through a tiered program, much as companies have done for decades with Six Sigma and Lean. Make sure that a culture of data literacy takes hold and that data and information start being used to drive decisions. Highlight, reward, and publicly applaud cases in which this actually happens.

Third, bring data analysts and data scientists into the mainstream of your organization. Embed them into teams who can understand and act on the insights they generate. Incentivize them not to preach (which they can sometimes end up doing), but rather to get directly involved in discrete issues, study and improve critical processes, and share in the gains achieved. This type of approach needs to be governed (you do not want analysts working to improve one metric while harming another), but as a means of engaging and motivating analysts, it is sound.

As the sociologist Anthony Giddens argued in Modernity and Self-Identity, the strange thing about the scientific method is that it tends to undermine certainty, not strengthen it. The old BI vendor promise of a “single version of the truth” is not only impossible, it’s irrelevant. A data-centric company does not need a single version of the truth; it needs the tools, people, and habits that support a culture of learning and experimentation. Those qualities, not simply piles of data, are what might make you the Google of your industry.

Why most BI doesn’t have business value

This week I was reading the promotional materials for the upcoming Strata+ Hadoop World conference in New York City. I felt briefly excited about the energy, intensity, and optimism of the data scientists working in business today.

But then, I thought about all the BI departments I’ve actually seen operating firsthand in the real world.

These departments have been in a range of industries from financial services to healthcare to telecom. They have varied enormously in terms of capabilities and sophistication. But they have had one thing in common. Not one has had a BI program I’d consider successful, in the sense that it consistently delivered analytic insight that was understood by its intended audience and that improved the performance of the parent organization. What I have seen instead is that most analysts are self-directed (with the randomness that implies), that most business people do not understand their work, and that most businesses would lack the flexibility to respond to their insights even if such understanding were suddenly achieved.

For the moment marquee data scientists attending hip conferences do not need to worry about this gap between expectation and reality. They are in the middle of an extended honeymoon period. The typical data science organization has budget and a few choice anecdotes to keep the money flowing. The reckoning will occur at some indefinite point in the future, but in the meantime, it would be foolish for data scientists to rain on their own parade.

But eventually, rain it shall. Gartner estimates that 70-80% of BI projects fail to meet business objectives. The Data Warehousing Institute reports that BI administrators, those closest to the tools and best-positioned to know how they are being used, estimate BI adoption rates of about 18%. These are not impressive figures, and businesses will not continue to invest with these odds indefinitely. As anyone who has witnessed a major BI platform release can tell you: when a system goes live, it is often to a soundtrack of loudly chirping crickets. An analytical system is not exactly forced on users in the manner of a transactional one. The target users of a BI system can engage if they see value; if they don’t see value, they can safely ignore it.

Why are BI projects failing so badly? The fault may be in the two most common approaches to building BI solutions. These could be characterized as “build it and they will come,” and “specify everything in advance.”

Build it and they will come – The hallmark of this approach is to collect every last bit of potentially relevant data, then simply stand it up for analysis. This is the classic approach to building data warehouses taken in the 1990s. To the shock and chagrin of anyone with a sense of technical history, it is being repeated today with “data lakes,” which are simply large repositories of data in their native formats. The typical data lake is even more ungoverned and incomprehensible than the traditional data warehouse it is supposed to be replacing. It may be a repository of great insight, but it could just as easily be a repository of great nonsense. It is completely unclear how a business user operating at a remove from the data lake is supposed to tell the difference.

Specify everything in advance – This second approach is rare at the moment, given that improvisation is currently fashionable and that anything resembling waterfall development is unpopular. But it can still be found here and there. The “specify everything in advance” approach tries to anticipate every last use of an analytics platform and determine all the aggregations, reports, and even the ad hoc analyses that users will supposedly perform. Predictably, projects run in this manner frequently get hung up in the requirements or modeling phase and run over budget before going live. If such a system does go live, the “exact” requirements used to build it turn out to be inadequate — users discover new needs shortly after launch, and the new system has a backlog of enhancement requests from its first day in production.

We have to emerge from the morass of these approaches.

The business case must die


Time-consuming creation of traditional business cases is a waste of money and energy. In most organizations business cases are created by isolated teams, using their own assumptions and rules, then compared in order to prioritize investments. Put otherwise, they are used to settle arguments. Predictably, they end up being political documents, gamed front to back, and having little or nothing to do with weighing and judging the true sets of options in front of a business.

Innumerate organizations cannot produce well-informed business cases. High-quality business cases are probabilistic in nature; they lay out ranges of possible outcomes based on principles of actuarial science. The business cases most organizations actually produce are not even faintly like these higher-quality ones. Instead, they present a single “ROI” with an air of fake certainty, using static assumptions that have little or nothing to do with reality.

Even when business cases escape the simple numbing dimension of cash-flow analysis, they present “key metrics” in isolation, without any consideration of their larger systemic implications. Goals like inventory reduction, faster claims processing, and so on may all make sense from the perspective of a departmental silo. But from the perspective of the broader enterprise, these “goals” may be worthless or even damaging. They are examples of what Goldratt called “local optima” — isolated objectives that may or may not accrue to the benefit of the broader organization. Inventory reduction is worthless if it leads to dissatisfied customers who can’t get what they need in a timely manner. Faster claims processing may make certain customers happier, but it may also hurt an insurance company’s cash position or cause it to liquidate assets at an inopportune time.

Causal chains are complex and every single decision in a chain can have unintended consequences. Siloed interventions in a broader causal chain, even if they appear to be “positive,” will have no effect if they are undone or undermined somewhere further along the chain. A point solution that lets a company, say, package shipments faster, will have no net benefit to the company if there is not enough shipping throughput to get them to the customer faster. In theory companies that have been practicing Lean or Six Sigma for years understand these types of principles. But during the annual business case review, they are almost inevitably forgotten.
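
The packaging example reduces to one line of arithmetic: end-to-end flow is capped by the slowest step in the chain. A small illustration, with rates assumed purely for the sake of the example:

```python
def throughput(pack_rate, ship_rate):
    """Units reaching customers per hour: capped by the slowest step."""
    return min(pack_rate, ship_rate)

SHIP_RATE = 90  # shipping capacity in units/hour (assumed)
for pack_rate in (80, 90, 120, 200):
    flow = throughput(pack_rate, SHIP_RATE)
    backlog = max(0, pack_rate - SHIP_RATE)  # piles up as work-in-process
    print(f"pack {pack_rate}/hr -> ship {flow}/hr, backlog {backlog}/hr")
# Past 90/hr, every dollar spent on faster packaging buys zero extra
# shipments: a local optimum, invisible to a siloed business case.
```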

The single worst thing about traditional business cases is that they cause overinvestment in tactical improvements and underinvestment in strategic change. Cross-functional, strategic evolution usually cannot be justified with a business case. It is too broad, too sweeping, and it taxes the ability of a typical business to analyze and understand a complex multi-variable problem. If a business says that all investments must be justified with a “clear business case,” usually meaning the traditional cash-flow analysis, no sane executive will step up and risk their career to back a complex transformative initiative. The initiative may be utterly necessary and utterly valuable, but the organization’s traditional criteria for making this judgment will be hopelessly inadequate.

Over time, acting on disjointed, one-off business cases, based on incompatible and unlike criteria, turns a company into an unwieldy Frankenstein: a collection of uncoordinated body parts, unable to move nimbly and weighed down by high, unforeseen costs of coordination. Departments develop their own point solutions and siloed thinking persists, now embodied in the practical reality of systems that are in place and which must be relied on for everyday operations.

The opposite of siloed business-case thinking is exemplified by Jeff Bezos’ famous mandate that all Amazon platform owners must develop and publish service interfaces. The mandate, issued in 2002, helped transform Amazon from “a website” into a coordinated network of data, insights, and capabilities, and of course a retail juggernaut. The “business case” for each API could never have been made or justified in isolation — the cost would likely have dwarfed the anticipated benefit in every single instance. Effectively, Bezos’ dictate amounted to a business architecture decision — a stated principle of interoperability that would support a vast number of unforeseen applications.
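
The mandate itself was organizational, not a piece of code, but the design principle it imposed is easy to sketch: capabilities are consumed only through a published contract, never through another team’s internal data. A minimal illustration with hypothetical names:

```python
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The published contract. Other teams (or outside partners) code against
    this interface, never against the owning team's internal tables."""

    @abstractmethod
    def stock_level(self, sku: str) -> int: ...

    @abstractmethod
    def reserve(self, sku: str, quantity: int) -> bool: ...

class WarehouseInventory(InventoryService):
    """One team's implementation; its internals stay invisible to callers."""

    def __init__(self):
        self._stock = {"sku-123": 10}  # private detail, swappable at will

    def stock_level(self, sku: str) -> int:
        return self._stock.get(sku, 0)

    def reserve(self, sku: str, quantity: int) -> bool:
        if self._stock.get(sku, 0) >= quantity:
            self._stock[sku] -= quantity
            return True
        return False

svc: InventoryService = WarehouseInventory()
print(svc.reserve("sku-123", 3), svc.stock_level("sku-123"))  # -> True 7
```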

And business architecture, ultimately, is the framework in which the business case must be reinvented and re-formed. Local decisions should always be framed within an overall enterprise model, informed by systems thinking and a holistic consideration of causes and effects. American executives keep trying to capture the magic of systems-thinking companies like Amazon, Apple and Google. They need to realize that business architecture is the essence of these companies and that their reliance on the traditional business case is the principal obstacle to successfully imitating them.

SEEP: a strategy model for the age of big data

In business today we hear constantly about change: changing markets, changing technologies, changing competitive forces, and so on. But most of us still set and execute strategy using static models that don’t acknowledge the reality of rapid change or the growing importance of data. The dominant mental model of corporate strategy today is still a pyramid: a select few top executives setting goals, some managers translating these goals into departmental sub-goals, then employees working to achieve all of the above at management’s direction.

In an era of ubiquitous and easily accessible data, the fatal flaw of this model becomes obvious. Namely, it confuses organizational hierarchy, which is valid and necessary, with cognitive hierarchy, which is a command-and-control illusion. The strategy pyramid makes an organization deaf and blind to new and emerging information that contradicts its official view of the world. It assumes that knowledge and insight flow from the top down, when in truth knowledge can and does come from everywhere.


As an alternative to the pyramid model, I’d like to propose what I call the SEEP model. SEEP stands for Scan, Evaluate, Experiment, and Perform. It is not meant to be a single, self-sufficient strategy model — I happen to believe there is no such thing. But it is meant to replace the dogmatic and brittle pyramid as the master model for framing a company’s goals and activities.

Instead of treating strategy as a top-down command-and-control endeavor, SEEP frames strategy as a constantly evolving set of possibilities to identify, examine, and act upon. This does not mean that strategy becomes purely fluid or an act of continuous improvisation. But it does mean that the organization continuously engages and negotiates reality and remains ready to make changes when they are needed and when they have been thoroughly tested. At each layer of the model, the organization asks critical questions and answers them with data. Each layer clarifies and refines the organization’s understanding, imposing discipline rather than allowing rapid and unjustified changes in direction — the sort of top-down chaos a company experiences when its executives change direction without data or supported hypotheses.

The SEEP model layer by layer

What happens at each layer of the SEEP model?

Scan. In the Scan layer, the business focuses on emerging possibilities in the external world: new products, new markets, potential mergers, acquisitions, divestitures, joint ventures, and so on. It also looks for externalities like new competitors, changing customer perceptions, or fundamental redefinition of the competitive environment. The Scan layer ultimately produces hypotheses about what is happening in the external world and the actions the business should take in response.

Evaluate. In the Evaluate layer, the goal is to build a business case — not simply a set of pro forma financial projections, but a financial, cultural, and architectural evaluation of how the new action would affect the business. The evaluation should lay out potential scenarios and identify proof points that would show whether or not the actions are working.

Experiment. In the Experiment layer, the business uses pilots and proofs-of-concept to test the proposed actions in the real world. This principle of experimentation applies not only to new products and markets, but also to actions like mergers. If the theory behind a merger, for example, is that two companies’ cultures will be compatible and synergistic, this idea should be tested with a series of collaborations before anything is actually merged.

Perform. The Perform layer is where regular ongoing operations occur and standard traditional metrics apply. Is the business growing? Profitable? Gaining share? Generating cash? And so on.
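
As a data structure, the model is simply a portfolio of initiatives, each sitting at one layer and advancing only on evidence. A minimal sketch, with invented initiative names and proof points:

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    SCAN = 1        # hypothesis about the external world
    EVALUATE = 2    # financial, cultural, architectural business case
    EXPERIMENT = 3  # pilots and proofs-of-concept
    PERFORM = 4     # ongoing operation, standard metrics

@dataclass
class Initiative:
    name: str
    layer: Layer = Layer.SCAN
    evidence: list = field(default_factory=list)

    def promote(self, proof_point: str):
        """Advance one layer only on the strength of data, never by fiat."""
        self.evidence.append(proof_point)
        if self.layer is not Layer.PERFORM:
            self.layer = Layer(self.layer.value + 1)

portfolio = [Initiative("enter adjacent market"), Initiative("acquire supplier")]
portfolio[0].promote("scan: two incumbents exited; demand hypothesis drafted")
print([(i.name, i.layer.name) for i in portfolio])
# -> [('enter adjacent market', 'EVALUATE'), ('acquire supplier', 'SCAN')]
```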

The SEEP model is first and foremost a portfolio tool, a way of evaluating the things a business is doing and could be doing. It is not a standalone model for setting and executing strategy. It doesn’t show the push-and-pull of competitive forces that determine profitability (like Porter’s famous Five Forces), and it doesn’t show the networked relationships between a business, its customers, its suppliers and its value propositions (like the more recent Business Model Canvas). Its purpose is to orient leaders to dynamism and change, and foster greater openness to data in decision-making.