At the SXSW festivities this year, I moderated a panel on innovation for NPR's Digital Day on March 8th. The panel included John Keefe, senior editor at WNYC; Nico Leone, general manager at KCUR; and Shazna Nessa, deputy managing editor at the Associated Press. I am grateful to these amazing panelists, who at my consistent urging shared stories of taking risks and bringing about change at their respective organizations. As you can imagine, change is not always easy at these sorts of places, and I'm enamored with the stories that smart people who bring about change always seem to have.

The new ideas and experiences these three made real are quite different from one another, yet there are patterns to be found: leveraging small victories, dreaming big, and believing in your people.

The Awesome Foundation is a simple idea. We support people doing awesome things in the world. Every month we give out $1,000 of our own money to an idea we think is awesome and should be released upon the world.

Yes, but what do you think is awesome?
Awesomeness is more the product of a creator’s passion than the prospect of audience or profit. Awesome creations are novel and non-obvious, evoking surprise and delight. Invariably, something about them perfectly reflects the essence of the medium, moment, or method of creation. Awesome things inspire and attract.

Here's how we support more awesome:

  1. You apply by writing a few sentences about your awesome but unrealized idea. There are absolutely zero restrictions on who can apply and what sort of idea could win.
  2. If we like your idea, we give you $1,000. Possibly in a brown paper bag.
  3. There are no strings attached or hoops you have to jump through. Of course, we hope you'll execute on your idea, but, you know, whatever.

Lots of people have been asking to find out more about the Awesome Foundation. Here's some background.

Every day brings an avalanche of new ideas and novel creations to the web, from witty t-shirts and viral videos to innovative methods of collaboration and powerful new software. The creation of unique and interesting things is not new, but the current surge in individual creative activity and its subsequent high visibility on the web is unprecedented.

The most compelling of these creative products I have been referring to as The New Awesome, and they represent a tiny portion of the total creative output. Historically, the word "awesome" might have been used to describe the power of a tornado or the grandness of a majestic vista. Today, the word is more often used to qualify the ingenious or impressive products of personal creativity, such as using hairspray to launch a potato 200 yards, hosting a talk show in Halo 2, or mocking the Kansas school board’s ruling with an ingenious take on religion.

But there’s more to the New Awesome than merely creative flair. The most interesting and, well, awesome creative products seem to share some common characteristics:

  1. It is novel and non-obvious
    Nothing like it has really quite been done before. Whether a clever approach, an unforeseen bending of the rules, or just a commitment to excellence far beyond the expected, the New Awesome never fails to evoke surprise and delight.
  2. It emerges from passion, without the prospect of audience or profit
    From the first encounter, it's clear that the creator felt compelled to make this. Recognition or revenue is icing on the cake.
  3. It is initially under the cultural radar
    The New Awesome invariably emerges from the depths of the long tail. While the creator might be previously known for their creations, your mom has never heard of them.
  4. It captures the essence of the medium, moment, or method
    For something to truly stand out in the sea of creativity, the creator needs to tap into something true and magical. Don't ask me to define it, because I can't. In the words of Justice Potter Stewart, you know it when you see it.
  5. It evokes passion, community, like-minded behavior, and the insatiable desire to pass along
    The New Awesome is meme fodder. From it springs a thousand remixes, knockoffs, spinouts, and analogs. People gather around the hem of awesome.

An interesting result of this creative surge is the rising importance of effective discovery and distribution of the best creative products. In other words, when there is a rising sea of mediocrity, how do we find and highlight the very best? Alas, this will have to be the subject of a future post.

everyblock.com crime map

The term "Hyperlocal" generally refers to community-oriented news content typically not found in mainstream media outlets and covering a geographic region too small for a print or broadcast market to address profitably. The information is often produced or aggregated by online, non-traditional (amateur) sources.

Hyperlocal news is conceptually attractive because of its perceived potential to rescue struggling traditional media organizations, even though most attempts at hyperlocal news websites have not proven successful. Hyperlocal appears attractive to traditional media organizations for the following reasons:

  1. There is a perceived demand for news at the neighborhood/community level. The costs of print production and distribution have historically made providing this unprofitable, but the lower cost of web distribution could be used to serve this need.
  2. In an online world, regional media outlets are no longer the gatekeeper of news content and therefore must rely on their geographic relevance to provide unique value. Hyperlocal news leverages geographic relevance.
  3. The rise of citizen journalism and Web 2.0 seems to suggest that users could provide the majority of local content, thereby reducing or eliminating staffing costs.
  4. Local online advertising seems like a promising and not yet fully tapped revenue source.

History & Approaches
Hyperlocal seems to have emerged as a popular concept in 2005, even while regional news websites and blogs were already becoming common[1]. In 2006-2007, the first significantly funded hyperlocal sites and platforms were launched. There were high-profile failures, most notably Backfence.com (2007) and LoudounExtra.com (from the Washington Post, in 2008). Many early efforts took the form of online newspaper websites, employing local reporters (or sometimes bloggers) and attempting to source user-generated content by inviting individual submissions or incorporating user discussion functionality. There was much speculation on why this approach often failed. Regardless of the specifics, the universal unprofitability of these sites suggests that producing a local newspaper-like presence simply doesn’t create enough demand (online readership) to justify the costs (local staff). Of note are a few surviving examples, like the Chicago Tribune’s Triblocal project, which creates and distributes hyperlocal print editions from its online content, and the many hyperlocal blogs that operate on far more modest budgets.

Around the same time, a slightly more promising wave of information-heavy regional news sites (such as pegasusnews.com) emerged. These sites were inspired by the success of regional review sites such as yelp.com and Yahoo! Local, and arose in response to the high costs of local content production. These new efforts focused on incorporating dynamic regional data, such as crime stats, permit applications, real estate listings, and business directories, in lieu of an emphasis on hand-crafted local reporting.

A third and perhaps most promising wave of local news sites emphasized the aggregation of third-party content. These include platforms such as outside.in, topix.com, and everyblock.com, all of which take a framework approach: aggregating content, mostly through RSS feeds, for many geographic locations (in some cases thousands) in order to accumulate enough traffic to make a local business model work. Slightly different takes on this model have individuals in specific locations acting as editors and republishing aggregated content (universalhub.com), or aggregator sites focusing on particular types of content (Placeblogger.com).

Lessons Learned
You can’t serve online users the same way newspapers or broadcasters serve regional audiences. The news and information demands are wildly different. It is not enough to reduce printing and distribution costs or break content into "bite-sized" pieces. Online users are trying to solve radically different problems and approach their information needs from their own unique perspectives.

Giving participatory tools to users does not make them publishers. Users do not produce material that looks anything like mass media content. Users have an expectation of being involved, and their efforts (such as sharing) can be helpful or even necessary in some contexts. However, assumptions about traditional publishers shifting effort "to the crowd" are misguided. Users are also notoriously fickle in their socially-driven motivations, and our understanding of what motivates people to participate online, and in what contexts, is limited.

Manually producing local content is expensive. This isn’t a surprise. What surprised people is that there is not enough online consumer demand to justify this cost.

Aggregation is cheap, and if done effectively can create enough demand to be profitable – particularly across many locations. As more sources make their content available through RSS feeds and APIs, this is only going to get better.
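The core loop behind these framework-style aggregators is simple enough to sketch in a few lines. The following is a minimal illustration using hypothetical feed data and made-up location keys (not the actual architecture of everyblock.com or topix.com, which layer geocoding, deduplication, and ranking on top of this):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical RSS snippets, one per neighborhood feed. Real aggregators
# would fetch these over HTTP from thousands of source sites.
FEEDS = {
    "davis-square": """<rss><channel>
        <item><title>New bakery opens on Elm St</title></item>
        <item><title>Community garden meeting Tuesday</title></item>
    </channel></rss>""",
    "union-square": """<rss><channel>
        <item><title>Road construction delays on Somerville Ave</title></item>
    </channel></rss>""",
}

def aggregate(feeds):
    """Group item titles by location -- the cheap core of a
    framework-style hyperlocal aggregator."""
    by_location = defaultdict(list)
    for location, xml_text in feeds.items():
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            by_location[location].append(item.findtext("title"))
    return dict(by_location)

pages = aggregate(FEEDS)
for location, titles in sorted(pages.items()):
    print(location, len(titles))
```

The point of the sketch is the economics: once the loop exists, adding the thousandth location costs no more than the first, which is what lets accumulated traffic across many locations support a business that no single location could.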


[1] To be clear, the hyperlocal hype from traditional media organizations caught fire in 2005, but local sites like Craigslist and H2otown were long-standing successes by this point, thereby playing their part in fueling the excitement.

In the early part of the 20th century, radio programs reached national audiences through newly-constructed radio networks. For the first time, mass media news had a human voice – and later with television networks, a face. This drove networks to develop trust as a human asset, and news anchors cultivated personalities that you welcomed into your home and returned to again and again. Over the ensuing decades, we stopped relying primarily on our friends and neighbors to learn about what was going on in the world and instead looked to a few critical human voices that were trusted without question.

This trust began to unravel in the late part of the 20th century. News media fragmented into biased channels, public opinion of reporting eroded amid clashes with the federal government, and shifting advertising revenue and downsized newsrooms led to highly visible gaps and gaffes in a previously trustworthy and consistent news reporting environment.

Meanwhile, the Internet is helping consumers become increasingly savvy about media, and new expectations around participation and transparency in information delivery are emerging. In this new environment, a singular voice of the news ceases to make sense – except perhaps when Jon Stewart mocks the system as a whole. Online tools are enabling collaborative and person-to-person communications as potentially more reliable and trustworthy mechanisms for getting news. Individuals now capture the news on their cell phones, deliver the news on their blogs, and share the news through social networks.

Perhaps news is no longer presented as a single story, cobbled together by a single agency and delivered through the mass media by a single voice. What was once a single story now becomes interpreted and conveyed by a range of voices through different formats, channels, and modes. As humans, we still build trust through human interaction and engage with stories that are delivered with emotion and conviction. Some stories are made more meaningful by our connection to the individuals telling the stories, and others because a fresh authentic human voice speaks to us. I believe we yearn for this raw communication as a method for getting our news and making sense of the information within – something that historically has not been present in mass media.

If the future of news communication is more humanistic and distributed, delivered by an array of authentic storytellers, where does that leave the traditional journalistic reporter? Their importance doesn't suddenly evaporate. What is their place amongst this array of voices? Are we now all journalists or do we expect the ones with the credentials to develop their own authentic voices? Both situations are currently happening in this new environment, as some bloggers are vaulted into mainstream public attention, and some existing journalists now craft their own blogs.

However, I think there's a third, less explored role for journalists: a need that arises from an array of unknown, emotion-fueled storytellers who do not necessarily engender trust. The very nature of these raw voices will cast doubt on the validity of the underlying information. Journalists must help ensure that the information in these stories is valid and that the stories themselves can be trusted. I believe the secret lies in weaving together these new voices into a more cohesive whole. The time-tested role of editor re-emerges to perform this critical function. Perhaps the contemporary journalist wields new media editing tools the way the traditional journalist wields a typewriter. The news is not delivered through a single human voice, but by collecting together the voices of others. The editor's red pen ensures that facts are preserved, underlying truths are revealed, and opinions are exposed. In this way, we get original voices, rich with information and authenticity, without being led astray by their subtle biases or gaps in reporting. A new voice of the journalist emerges, crafting the news out of independent tellings and turning the traditional piece on its head. Here, truth can be served in a compelling new way, and a variety of voices can reveal insights only possible through the stories of regular people.

Ned Gulley and Karim R. Lakhani presented The Dynamics of Collaborative Innovation (description, audio/video) last week at the Berkman Luncheon series. I had previously recorded and blogged Karim's similar, less detailed Open Innovation presentation back in May for the Berkman@10 Conference. Karim and Ned have been measuring various aspects of collaborative innovation around a programming contest that seems ideally suited for this purpose. I suspect that few real-world environments have such a good built-in mechanism for objectively measuring the strength of innovative contributions.
 
I found their insights into the differences between novel, game-changing submissions and incremental improvements particularly interesting. Within this programming contest, each individual user submission is objectively measured for performance against a desired outcome (e.g. algorithmic best fit), and the current best-performing code submission is highlighted for all to see. This structure may create a problem: innovative new approaches often do not immediately yield the best result compared to an incremental code improvement. The social reward of being highlighted as the current best may encourage incremental improvements over novel approaches, potentially leaving the overall innovation outcome stuck in a local maximum.
 
Assuming that introducing novel ideas increases the chance of an eventual best outcome, then innovation environments like this might benefit from better incentives to reward novelty. Additionally, this contest environment has no inherent mechanism for identifying novel and potentially useful knowledge. Ned and Karim highlighted a specific example where a novel, under-performing programmatic approach introduced early was eventually adopted later in a programming contest and provided the conceptual foundation for the winning and final submission. Had this novel approach been overlooked, it is unlikely that the winning code would have performed as well.
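The local-maximum dynamic described above is easy to demonstrate with a toy model. This is a hypothetical sketch, not the actual contest mechanics: submissions are integer positions on a two-peaked fitness landscape, and a climber that only accepts small improvements over the current leader stalls at the lower peak, while the distant "novel" approach ultimately scores higher.

```python
# Toy fitness landscape with a local and a global maximum. The score
# function stands in for the contest's objective performance measure
# (purely illustrative values).
def score(x):
    # Two peaks: a local maximum at x=2 (height 5), global at x=8 (height 9).
    peaks = {2: 5, 8: 9}
    return max(height - abs(x - peak) for peak, height in peaks.items())

def incremental_only(start, steps):
    """Greedy hill climbing: each round, adopt whichever nearby tweak
    currently scores best -- the 'beat the current leader' incentive."""
    x = start
    for _ in range(steps):
        x = max((x - 1, x, x + 1), key=score)
    return x

leader = incremental_only(start=0, steps=10)
print("incremental leader:", leader, "score:", score(leader))  # stalls at the local max
print("novel approach:", 8, "score:", score(8))  # higher, but only after a big jump
```

Under these assumptions, no sequence of single-step improvements ever crosses the valley between the peaks, which is exactly why a mechanism that surfaces under-performing but novel submissions can change the final outcome.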
 
I would argue that incentives designed to encourage the introduction and eventual sharing of novel information would prove useful, especially considering our human tendency towards only exchanging shared knowledge and withholding unique (and potentially important) knowledge in many social circumstances. Cass Sunstein explores this hidden profiles phenomenon at length in his book Infotopia.

Eric Von Hippel and Karim Lakhani spoke on "Democratized and Distributed Innovation" at the Berkman@10 conference last week. You can listen to the session here:

Von Hippel and Lakhani focused on internet-driven collaboration in user-led innovation communities. They point to some specific research examples, including kite surfing, PostgreSQL, and the MATLAB collaborative programming contest.

Over the last several years, these two (amongst others) have radically challenged the conventional thinking on who innovates and how it's done. Our understanding of the innovation process is at an interesting juncture, where open sharing and online collaboration have helped highlight the growth of user-led, community-driven solution design. This notion first gained popularity in 2005, when Von Hippel published a book on his research, inspiring me to first blog about democratizing innovation.

Description from Berkman:
Internet and the widespread availability of sophisticated digital design tools are radically changing best practice in product and service development. What was until recently a process concentrated within producer firms is now becoming democratized and widely distributed. This fundamental change has widespread consequences. What is the impact of these developments on innovation processes, business models, and government policies?

The following is a snippet from my recently published chapter from the book Collective Intelligence: Creating a Prosperous World at Peace. Over the course of the next week, I will post short segments to this blog. This snippet is on avoiding centralized control when inspiring individuals to participate online.

Designing Systems That Work
Decentralized peer production environments hold more promise in directing participatory systems towards collectively intelligent outcomes than the traditional approach of using centralized authority to drive individual behavior. The success of open source software development and wikis suggests that production environments based on autonomous individual action have the most potential for large-scale, enduring participation. These systems provide individual freedom and choice for interacting with resources and projects without any single authority dictating individual behavior or focus. It is precisely the individual's response to the freedom inherent in a decentralized system that triggers the desire to participate.

Words like “harness” or “leverage” used to describe value produced through individual participation signal a misguided perspective of centralized authority controlling participants. Seeing individuals as a ready resource to be wheedled and mined for value is, at best, a misunderstanding of how distributed production operates, and at worst, a setup for failure. Individually-motivated activity is the cornerstone of successful participatory environments, and presuming participation while undervaluing the individual causes contributions to evaporate. Cajoling production, dictating behavior, and exploiting contributions are inherently counter-productive in participatory environments. Empowering the individual creates beneficial outcomes and cultivates an environment where these contributions are most valuable. Since the best participatory environments exist to serve individuals and address their interests first and foremost, heavy-handed centralized action or exploitation of participants corrupts an online collective environment irreparably. Ideally, participants develop a feeling of ownership over the environment, and providing such an atmosphere is indispensable to the environment’s continuance.

Want more? Read the whole chapter Empowering Individuals Towards Collective Online Production, now freely available online.

In a long-anticipated move, idea submit-and-rate engines are finally catching some meme-like popularity. They're certainly easy to build. In a follow-up post, I will tear them to bits for the flaws they introduce and the assumptions we make around their utility. Still, they do poke at some interesting aspects of Collective Problem Solving. Here are a few:


A couple years ago, I wrote about Neil Gershenfeld’s cool MIT Fab Lab (fabrication laboratory). On Monday I was fortunate enough to join the Boston Dorkbot crew for a tour of the Boston Fab Lab. I’ve posted a photoset of a few machines. Pictured are three computer-controlled prototyping machines, including a room-sized router, a micro-milling machine, and a laser cutter. Missing from the photos is a sign/vinyl cutter, several non-computer-controlled tools, and a nicely-outfitted electronics workbench.

The mission of the fab lab is a noble one: to empower creative people to make things, with the assumption that, well, we’re all creative. Exposing individuals to commercial prototyping machines encourages people to explore, learn, and have a significantly wider range of choices – not only in what we might envision and make, but also in how we view the world and imagine our role in its future.


Has news innovation stalled? The last decade has seen significant shifts in how news is created and delivered: grassroots publishing and online news aggregators have shifted advertising dollars and spread panic through traditional mass media outlets. However, fresh approaches, both in traditional media and in new media exploration, have felt scarce as of late. Most recent thinking around news delivery involves slapping the latest social technology idea or delivery device onto a news outlet and calling it innovation. Or worse, a retreat into potential profitability through a focus on niche or hyperlocal audiences. Of course, some exceptions exist, but there is too much opportunity tied up in new technology and the shifting demands of the public to slow down the exploration of new ideas.


As you have no doubt heard, Radiohead skipped the middleman with their latest release, offering the album online for download. This resulted in an estimated $10 million in profits from 1.5 million downloads over the past week, more than the first-week sales of their last three albums combined. Not too shabby, particularly when users could set their own price for the download, including setting the price at nothing. Radiohead's VRM-like experiment did trigger a flood of promotion, but at the end of the day, fans still volunteered to cough up an average of $8 per album. Gets one thinking, don't it?

We will undoubtedly see more stunts like this to promote online sales, along with an increasing willingness to go cheaper and wider by offering digital downloads for free. But I want to know: will we see more customer-driven price setting? It feels a little like a public radio pledge drive, essentially asking the customer at transaction time, what is this worth to you? Applying this sort of purchasing model to product transactions is one approach that the VRM project is exploring, where the customer takes a more proactive stance in setting the relationship terms with vendors. Doc Searls introduces this idea in the context of public broadcasting at the tail-end of a Berkman.tv video on the future of public media. Don't miss the first part of the video either, as Jake Shapiro takes on predicting the future.

Together with Karl Rexer, I published an article entitled Overcoming Innovation Barriers (pdf) in CEO Refresher and Effective Executive magazines. It was reviewed by Manyworlds.com and The Innoversity Network.

"A growing body of empirical work shows that users are the first to develop many, and perhaps most, new industrial and consumer products."
- Eric von Hippel

The idea that users develop great volumes of successful innovations is not new, but it is perhaps shocking in its implications. This idea suggests that our traditional view of manufacturers or entrepreneurs as the primary and best source of new ideas may be flawed. Are the billions spent on R&D misguided and only introducing limited innovations? Additionally, there appears to be a growing trend for users to freely and openly distribute their innovations (think open source). This won't help businesses relying on secrecy and legal protection to leverage their own innovative assets.

In his new book “Democratizing Innovation,” Eric von Hippel presents compelling evidence of how and why users innovate for themselves, and why they see many benefits in freely revealing these innovations. He points out that businesses that rely on innovation for continued existence (such as product manufacturers) should take note of these emerging trends and leverage methods for profitably working with user-driven innovation.

Eric von Hippel is Professor of Management of Innovation and Head of the Innovation and Entrepreneurship Group at the MIT Sloan School of Management. His new book “Democratizing Innovation” is available for download under a Creative Commons license at his website: web.mit.edu/evhippel/www.
