hyperlocal

Most of the blogging work I've done over the last several months has been around NPR's local news efforts.

In Beyond the Blog, I point out some interesting places where Gawker Media's proposed redesign may teach public media a thing or two, and suggest that it puts us on good footing.

In another post, Top 10 Challenges Stations Face in Adopting Local Continuous News, I go into detail about what we've learned as we introduce news blogging to our pilot public radio stations and teach them how to do it. There are certainly lessons here about new technology adoption and organizational change management (pro tip: it's hard).

And even earlier, I looked at the Pew Internet and American Life Project's findings from its Neighbors Online report on individuals' use of online tools. The big takeaway: people don't care less about local news; they're just shifting their attention to a wider array of sources and finding content via their social connections online.

Internet solutions appear wherever finding, connecting, and sharing information with others is expensive or difficult. This is especially noticeable when individuals with similar interests but insufficient proximity are finally able to connect. Unsurprisingly, there are now sites bringing together global interest in speaking Klingon, knitting food, and collecting cookie fortunes.

But what about deploying internet technologies for people who are near one another? Certainly this technology isn’t just about bringing together far-flung hobbyists – there should be unresolved information needs that exist at a local level, as suggested by the buzz around hyperlocal news.

In determining these information needs, we must resist the temptation to focus on what media organizations prescribe or on what is currently vanishing from existing news outlets. Instead, we should look at routine communication barriers that internet-based solutions can dismantle. This is surprisingly difficult to do, since we often don't see the barriers we face or don't recognize them as unnecessary. To determine where technology might best be deployed to address local needs, we must find situations where members of local communities are actively trying to find, connect, and share information with one another. Then we can look more closely at the difficulties, delays, and expenses that might be eliminated or reduced through more tailored use of online technology.

Looked at in this way, it becomes clear that finding and connecting with others nearby to exchange our stuff (craigslist.org), meet around shared interests (meetup.com), and initiate relationships (match.com) have all been remarkably successful. But what about sharing local news? Success with local news has been less pervasive and straightforward. Arguably, this is because existing solutions have not yet fully uncovered the true needs and barriers to sharing local news.

Another method for determining what these needs and barriers might be is to monitor online tools that excel at supporting a breadth of communications. Within these tools, we might find clusters of people who share geographic proximity and are actively communicating. Identifying patterns in communications or locations here will reveal which local needs may be benefiting most from the reduced friction of online communication.

Interestingly, most social networking tools provide little of this local communication. Both LinkedIn and Facebook, for example, seem to excel at connecting out-of-touch and geographically dispersed individuals. Things have started to shift, however, with the introduction of the short-messaging service Twitter. With Twitter, people are starting to connect with one another simply because they are nearby. Twitter seems different in this regard, and understanding how it is different might just be the key to understanding where frictionless local communication holds the most promise.

Twitter saw its first big explosion in usage during the 2007 SXSW festival in Austin, TX, in large part due to attendees' unresolved need to connect with others at the conference. Ironic as this may seem, as you move around an event such as a conference, you become a mostly passive recipient of information, cut off from explicitly sharing the experience with others. Communication needs at large events like this range from broadcast heckles to simple queries about where your friends are, which sessions are worth attending, and who to get to know. In my own experience, this proximity effect of Twitter carries over into day-to-day situations as well - it becomes valuable to follow someone simply because they live near you. But why?

I believe one answer lies in the immediacy of the information that is shared. Specifically, it is surprisingly difficult to share information about what's going on right now among people near one another. As with SXSW, local Twitter messages (tweets) are most valuable when they contain information about what is happening right now – often something that might affect me because of our relative proximity. For example, I might monitor the tweets from those I follow locally to know where they are or where they're going so that I can (presumably) join them. It's valuable to find out about something as it happens. I can always visit a traditional news source if I need to seek out a specific piece of information or learn of important happenings after the fact, but who's going to let me know of something important going on right now? It's this active nature of Twitter – real people filtering and providing immediately sourced, proximal information – that makes it so valuable. Nothing seems to match Twitter for a real-time assessment of what I need to know about that's going on near me.

Perhaps Twitter points to only one unresolved need – the need for immediate, proximal information – but I believe this need will blossom into a more significant source of local news and take different forms as tools more seamlessly encourage useful sharing.

WBUR Hyperlocal Discussion

Following a recent post and discussion on hyperlocal news, WBUR was kind enough to let me initiate an open discussion on the topic during their monthly meetup at the station.

Around 15 people participated in this discussion, including Lisa Williams from Placeblogger, Ben Terris from Boston.com's Your Town, Adam Weiss of Boston Behind the Scenes, Persephone Miel from Internews Network, and Doc Searls from Harvard's Berkman Center. You can hear the conversation here:

The conversation covers a wide range of topics, including:

  • Trends and directions of hyperlocal news, and where the emerging opportunities might be.
  • What the user demand might be around hyperlocal news - where the current gaps are in addressing user needs.
  • The rising importance of immediacy and speed of hyperlocal solution deployment
  • The problem of scale and searchability around hyperlocal sites
  • How hyperlocal sites and the online-offline proximity connection might address the human need for social cohesion

WBUR Tweetup

On the evening of Thursday, February 5th, WBUR in Boston will be hosting their sixth (seventh?) monthly informal gathering at the station. WBUR regularly convenes the Boston social media community to facilitate discussion around social technology and its growing role and impact on local community, news, and public media. All are invited to attend this free and open event. Details here.

At this event, WBUR has agreed to let me lead a discussion on hyperlocal news - in part due to the good discussion that's stemmed from this hyperlocal blog post and my interest in doing a follow-up on hyperlocal's future potential. Won't you join us?

Keep an eye on this blog for a follow-up from the event.

everyblock.com crime map

The term "Hyperlocal" generally refers to community-oriented news content typically not found in mainstream media outlets and covering a geographic region too small for a print or broadcast market to address profitably. The information is often produced or aggregated by online, non-traditional (amateur) sources.

Hyperlocal news is conceptually attractive because of its perceived potential to rescue struggling traditional media organizations, even though most attempts at hyperlocal news websites have not proven entirely successful. Hyperlocal appears attractive to these organizations for the following reasons:

  1. There is a perceived demand for news at the neighborhood/community level. The costs of print production and distribution have historically made providing this unprofitable, but the lower cost of web distribution could be used to serve this need.
  2. In an online world, regional media outlets are no longer the gatekeeper of news content and therefore must rely on their geographic relevance to provide unique value. Hyperlocal news leverages geographic relevance.
  3. The rise of citizen journalism and Web 2.0 seems to suggest that users could provide the majority of local content, thereby reducing or eliminating staffing costs.
  4. Local online advertising seems like a promising and not yet fully tapped revenue source.

History & Approaches
Hyperlocal seems to have emerged as a popular concept in 2005, even as regional news websites and blogs were already becoming common [1]. In 2006-2007, the first significantly funded hyperlocal sites and platforms were launched. There were high-profile failures, most notably Backfence.com (2007) and LoudounExtra.com (from the Washington Post, in 2008). Many early efforts took the form of online newspaper websites, employing local reporters (or sometimes bloggers) and attempting to source user-generated content by inviting individual submissions or incorporating user discussion features. There was much speculation about why this approach so often failed. Regardless of the specifics, their universal unprofitability suggests that producing a local newspaper-like presence simply doesn't create enough demand (online readership) to justify the costs (local staff). Of note are a few surviving examples, like the Chicago Tribune's Triblocal project, which creates and distributes hyperlocal print editions from its online content, and the many hyperlocal blogs that operate on far more modest budgets.

Around the same time, a slightly more promising wave of information-heavy regional news sites (such as pegasusnews.com) emerged. These sites were inspired by the success of regional review sites such as yelp.com and Yahoo! Local, and by the high cost of local content production. They focused on incorporating dynamic regional data – crime stats, permit applications, real estate listings, business directories – rather than emphasizing hand-crafted local reporting.

A third and perhaps most promising wave of local news sites emphasized the aggregation of third-party content. These include platforms such as outside.in, topix.com, and everyblock.com – all of which are framework approaches - aggregating content, mostly through RSS feeds, for many geographic locations (in some cases thousands) in order to build enough accumulated traffic to make a local business model work. Some slightly different takes on this model have individuals in specific locations acting as editors and republishing aggregated content (universalhub.com) or aggregator sites focusing on particular types of content (Placeblogger.com).
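To make the aggregation model concrete, here is a minimal sketch of the kind of work these platforms automate at scale, written in Python with the feedparser library. The neighborhood feed URLs are hypothetical placeholders; real aggregators do this across thousands of locations, with far more sophisticated geotagging and deduplication.

    # Minimal sketch: merge a few neighborhood RSS feeds and list the
    # newest items first. Feed URLs are hypothetical placeholders.
    import feedparser
    from time import mktime

    NEIGHBORHOOD_FEEDS = [
        "http://example.com/jamaica-plain/news/rss",    # hypothetical local blog
        "http://example.com/jamaica-plain/crime/rss",   # hypothetical data feed
        "http://example.com/jamaica-plain/events/rss",  # hypothetical calendar
    ]

    items = []
    for url in NEIGHBORHOOD_FEEDS:
        for entry in feedparser.parse(url).entries:
            published = entry.get("published_parsed")
            if published:
                items.append((mktime(published), entry.title, entry.link))

    # Newest first - a bare-bones neighborhood news page.
    for _, title, link in sorted(items, reverse=True)[:20]:
        print(title, link)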

Lessons Learned
You can't serve online users the same way newspapers or broadcasters serve regional audiences. The news and information demands are wildly different. It is not enough to reduce printing and distribution costs or break content into "bite-sized" pieces. Online, the user-consumer is trying to solve radically different problems and comes to their information needs from a different perspective.

Giving participatory tools to users does not make them publishers. Users do not produce material that looks anything like mass media content. Users do expect to be involved, and their efforts (such as sharing) can be helpful or even necessary in some contexts. However, assumptions about traditional publishers shifting effort "to the crowd" are misguided. Users are also notoriously fickle in their socially driven motivations, and our understanding of what motivates people to participate online, and in what contexts, is limited.

Manually producing local content is expensive. This isn't a surprise. What shocked people was that there is not enough consumer demand online to justify this cost.

Aggregation is cheap, and if done effectively can create enough demand to be profitable – particularly across many locations. As more sources make their content available through RSS feeds and APIs, this is only going to get better.


[1] To be clear, the hyperlocal hype among traditional media organizations caught fire in 2005, but local sites like Craigslist and H20Town were long-standing successes by that point, playing their part in fueling the excitement.

Last week I posted a mini-app that helps find popular Twitter users near you. Simply enter a location, and Twitterstars will search regional tweets and return the top five most-followed Twitter users.

[Search form: Your Location (City, State)]

I got some good sleuthing and feedback from the genius behind lolcode, and have since made some updates and learned enough to offer a few caveats.

Tips & Caveats:

  • Since this app hits multiple web services, expect a little bit of waiting time as the data is retrieved.
  • If the page returns empty, this is likely because Twitter is struggling under server load or is rejecting API requests from Yahoo! Pipes (known issue)
  • I've locked the search radius to 15 miles, which in most cases encircles users who put the city name you searched for in their profile (the Twitter search API uses LAT and LONG coordinates). I have found some cases where the search API stumbles on stated locations, however.
  • The Twitter search API returns a maximum of 100 tweets, so the app can only analyze the users behind that collection. This means that if a popular user has not tweeted within the time window covered by the 100 most recent tweets (sometimes as little as a few minutes in the case of, say, NY, NY), they will not be included in the search results. Try multiple times during the day to get different results.
  • The Twitter Search API is notorious for its latency. If you're trying to catch a very recent tweet in the result set, you generally won't be successful.
  • Pipes requests in rapid succession will return cached data, so it's not enough to simply hit refresh on the results page (sorry). Wait a few minutes and try again, or hack the URL to change the search radius or LAT/LONG, etc.
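For reference, the geocoded query that the Pipe ultimately sends to the Twitter Search API looks roughly like the sketch below. The endpoint and parameter names reflect my understanding of the search API at the time of writing, and the coordinates are just an example for Boston, MA.

    # Sketch of the geocoded search request the Pipe constructs.
    # "geocode" packs LAT, LONG, and the search radius; "rpp" is the
    # 100-results-per-page maximum noted in the caveats above.
    from urllib.parse import urlencode

    params = urlencode({
        "geocode": "42.3584,-71.0598,15mi",  # LAT,LONG,radius (Boston example)
        "rpp": 100,
    })
    print("http://search.twitter.com/search.json?" + params)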

If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.

[Note: I've posted a Twitterstars update]

Finding and connecting with local social media 'superstars' can be a valuable shortcut for anyone trying to ramp up quickly in online social environments. These enthusiasts are knowledgeable about social media tools, are highly connected, and understand well how to succeed in the online social environment.

But how do you find the local social media superstars? Today, many of these individuals use Twitter. The "Local Twitterstars" mini-application below takes any US geographic search area that you provide and returns a feed of the top five most followed individuals on Twitter who have been recently active in the region. Below is a more detailed explanation of how I built this mini-application. I also posted an update here.

[Search form: Location (City, State); Radius (in Miles)]


This mini-application uses the Twitter Search API, the Twitter REST API, Yahoo! Pipes, and some simple HTML.

  1. The simple HTML form above constructs a server GET request through both hidden and user-populated form fields.
  2. This constructed URL queries a custom-built Yahoo! Pipe that takes the location from the URL and converts it to LAT-LONG coordinates.
  3. A Twitter search API query is then constructed by the Pipe using the LAT-LONG and radius data, returning the 100 most recent tweets in this region. Depending on your search area, this could include only very recent tweets or could span a much longer time period. Twitter has some internal smarts around matching the coordinates to include a variety of data that users put into the location field of their profile, including towns, zip codes, iPhone GPS coordinates, etc.
  4. The Pipe then takes all the tweets and constructs a series of queries to the Twitter REST API, pulling back user profile data from each user behind the tweets.
  5. After removing duplicates, the Pipe selects the top five most followed users in the list and builds an RSS feed presenting the username, a link to their twitter account, and the current number of followers they have.
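Yahoo! Pipes modules don't translate directly into code, but here is a rough Python sketch of the same flow, under a few assumptions: the Search and REST endpoints and field names are as the Twitter APIs document them at the time of writing, the requests library stands in for Pipes' fetch modules, and geocoding is reduced to a hypothetical helper in place of the Pipe's location module.

    # Rough sketch of the Twitterstars flow (not the actual Pipe):
    # geocode a location, pull the 100 most recent nearby tweets,
    # look up each author's profile, and keep the five most followed.
    import requests

    def geocode(city_state):
        # Hypothetical helper standing in for the Pipe's location module;
        # it should return (lat, lng) for a "City, State" string.
        raise NotImplementedError

    def local_twitterstars(city_state, radius_miles=15, top_n=5):
        lat, lng = geocode(city_state)

        # Step 3: geocoded search, up to 100 recent tweets within the radius.
        search = requests.get(
            "http://search.twitter.com/search.json",
            params={"geocode": f"{lat},{lng},{radius_miles}mi", "rpp": 100},
        ).json()

        # Step 4: fetch profile data once per distinct author.
        users = {}
        for tweet in search.get("results", []):
            name = tweet["from_user"]
            if name not in users:
                users[name] = requests.get(
                    "http://api.twitter.com/1/users/show.json",
                    params={"screen_name": name},
                ).json()

        # Step 5: rank by follower count and keep the top five.
        ranked = sorted(users.values(),
                        key=lambda u: u.get("followers_count", 0),
                        reverse=True)
        return [(u["screen_name"], u["followers_count"]) for u in ranked[:top_n]]

The real Pipe emits this list as an RSS feed rather than returning it, and Pipes itself handles the caching and duplicate removal mentioned above.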

NOTE: If the feed request is empty, try changing your search criteria. It's also quite possible that Twitter is struggling to handle load and won't fulfill the API requests.

If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.
