Twitter and Facebook

When I went to Twitter today, it displayed a dialog

We were hoping you could help us make it easier for people to discover
their friends and colleagues on Twitter. Review your settings below to
make sure the people you care about can easily find you.

asking me to update my name, bio, location and email fields.

This suggests that both Twitter and Facebook are insecure about each other, seeing strengths in the other and weaknesses in their own service. Twitter feels threatened by Facebook's focus on a true circle of friends and colleagues. Facebook feels threatened by Twitter's capacity for marketing and building followers in public.

It suggests they may eventually become very similar in the features they offer, with Twitter integrating photos, video, and circles of friends, and Facebook making its content more public (which it is already doing). Perhaps both sites will give users more control over who can see what content.


Twitter outage creates panic

According to CNN, the Twitter outage left users feeling as if they had lost a limb or left home without their cell phone. It has been suggested that Twitter needs competition, so there are alternatives when an outage occurs, as one inevitably will. There were (or are?) a couple of alternative social microblogging services available (is Jaiku still running?). Of course, even that won't help if multiple sites are attacked at once.

What would help is Google Wave. This outage is an incredible opportunity to demonstrate the resiliency a "federated" or distributed social media system offers. Content in the Google Wave universe is independent: every user can keep a copy of a bundle of posts, comments, and content on their own device, and multiple copies can exist on different servers. A group could continue working, or at least work offline with their content, during an outage, and when the connection is reestablished the changes can be merged back into the conversation. This is what we can do with email now. We can read messages in our inbox (as long as it is not webmail) even when the network is unavailable. We can keep messages around as long as we want. We can write draft replies, take notes, reuse content by quoting or editing the text we receive, and at any time later forward it to others or send the revisions back to the sender.
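To make the federation idea concrete, here is a minimal sketch in Python of how two offline copies of a conversation might be reconciled after an outage. The data model (a `Post` with an id and a timestamp) and the merge rule are my own invention for illustration, not Google Wave's actual protocol.

```python
# A toy sketch (not Google Wave's actual protocol) of two replicas of a
# conversation merging after an outage: each post carries a unique id and a
# timestamp, and merging is a de-duplicated union ordered by time.
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    post_id: str      # globally unique id, e.g. "author/uuid"
    author: str
    timestamp: float  # seconds since epoch
    text: str

def merge_replicas(local: list[Post], remote: list[Post]) -> list[Post]:
    """Union of the two copies, de-duplicated by id, ordered by timestamp."""
    by_id = {p.post_id: p for p in local}
    for p in remote:
        by_id.setdefault(p.post_id, p)
    return sorted(by_id.values(), key=lambda p: (p.timestamp, p.post_id))

# During an outage each participant keeps posting to their own copy;
# when connectivity returns, merge_replicas() reconciles the conversation.
```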

What did I do during the panic? I just waited for Twitter to come back up. I only post once a day (if I am feeling up to it).


You can't just put content behind a blank wall

I caught a discussion of News Corp's new plan to get users to pay for online news content. Selling news online will be difficult because there are so many fragmentary ways to get it for free. If any scheme for charging online readers is going to work, it has to be easy: paying for the news must be as easy and transparent as iTunes, as easy as dropping a coin in a paper box at the corner bus stop. The pricing is less important than the convenience.

Also, the customer must have a feel for the worth of the content before they buy, or they must get a bulk subscription cheap enough that they can accept the irrelevant, incomplete, incompetent, or useless along with the relevant, complete, competent, and useful. I hate sites that put up a poorly written summary behind a login or subscription screen. It breaks the rhythm of navigation on the web when a link leads to nothing; it stops you cold and punishes the reader for following a link. A web of balkanized content with links as obstacles would be a sad one. If content is to be shuttered behind closed doors, it must be quick and easy to open those doors with some kind of universal pass, like OpenID connected to a micropayment system.

It started me thinking again about how to get online users to pay for content. You can't just put content behind a blank wall and expect the scheme to work: no one will find it, be able to search for it, or have it indexed by search engines. It's not enough to provide a metadata summary the way a bibliographic catalog does. Metadata will never be the answer to our search problems, at least not as long as humans are responsible for providing it. Nearly everyone ignores metadata, fails to include it, or includes incomplete or incorrect metadata. Who is going to keep all this metadata up to date? No, this is unworkable. Metadata must be generated automatically from content, and that is subject to a high error rate with current technology.

The approach Google Books takes gets much closer to a real solution to the problem of hidden content. Instead of trying to describe the content with faulty and hard-to-maintain metadata, why not grant access to a sample of the content? This gets much closer to a successful model for selling content online. When I read a book in Google Books I get a random sample of pages around my keywords. Each user receives a custom sample of content tailored to their interests and needs. In my experience, reading a few pages of a book without restriction, as I would in a bookstore, gives a feel for the content. I am more likely to buy the book if it proves useful repeatedly over several searches. Yes, sometimes I find what I want in the sample pages, but I generally bookmark the source, take down the title in my notes, and cite the source in any work derived from the information gleaned for "free", which I think is a fair exchange for a citation and a link.

I do not understand the hostility and opposition to Google Books. I am willing to pay less but buy more books in electronic form for reference purposes. If I find an interesting book in Google Books that is not one I would pay $30 for in hardback, I might still pay $10 to download it to my book reader. If I have to pay $30 for one book, it is going to be the one I value most and need most, one I want to keep around for a lifetime, not a casual read or a reference work.

There are books I would buy on the reader as convenient portable references. I would buy more ebooks at lower cost to fill out the "search space" of texts on my ebook reader. If a book adds to the information I have available on a subject, but only partially or tangentially, I can't justify a $30 hardback, but I can afford three $9 works related to my subject to add to the search space on the reader.

An idea I had a long time ago, when I was wondering how to pay for hosting my first website, was the "vanishing page" model. This would work a bit like PBS, where content slowly disappears unless readers pay a small fee to keep it available. The individual reader does not pay for each content page; instead, similar to donations to PBS, a small number of readers or viewers pays for free access by everyone else (which actually gives the donor a feeling of superiority: if it were not for me...). Mechanically, the web page would be publicly available to all readers and search engines, but a count of page views would be kept. Each time the page is viewed, the number of views or days left would be decremented by some amount. A button for instant micro-payments would be displayed on the page, along with a thermometer showing how close the page is to being removed from the site. If enough people donate, days (or credits; it could be a ratio of views to donations, similar to BitTorrent) are added to the life of the page; if not, it is replaced by a summary and a button to start donating again.
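A rough sketch of the bookkeeping this model needs might look like the following; the class and method names are invented, and the credit ratios are arbitrary placeholders.

```python
# A sketch of the "vanishing page" bookkeeping described above.
# Names (Page, view, donate) and numbers are invented for illustration.
class Page:
    def __init__(self, slug: str, initial_credits: int = 1000):
        self.slug = slug
        self.credits = initial_credits   # remaining "life" of the page

    def view(self) -> bool:
        """Each page view burns one credit; returns False once the page expires."""
        if self.credits <= 0:
            return False                 # show the summary and donation button instead
        self.credits -= 1
        return True

    def donate(self, amount_cents: int, credits_per_cent: int = 10) -> None:
        """A micro-payment tops the page back up, PBS-pledge style."""
        self.credits += amount_cents * credits_per_cent

    def thermometer(self, full: int = 1000) -> float:
        """Fraction of 'life' left, for the on-page gauge (0.0 means removed)."""
        return max(0.0, min(1.0, self.credits / full))
```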

What we need are ingenious "social engineering" methods to get people to buy content online, similar to the ones used to manage "soft security" on wikis. We need soft methods, like Google Books, which gives readers a peek into books that might interest them.


Mixing Conversation and Story

I realize now that the real problem I have been working on, off and on, for ten years is 'conversation' versus 'story', particularly as it applies to journalism. In a way, conversation and story are like oil and water: they do not like to mix. Yet stories are filled with dialog, or conversations, so why can't journalistic stories contain dialog? Well, when it is an interview, they do. So what we need is a network tool that seamlessly integrates conversation (interview, written dialog, transcript) with story (narrative, reportage, essay, and analysis). Google Wave looks like the closest technology to achieving this flexible confluence of conversation and story, with even the potential for our conversations and stories to be both mobile and distributed. If every smart phone adopted Google Wave (and given that it works much like email, which mobile computing already provides as a robust and well-known commodity service), it promises quick adoption while avoiding any centralized monopoly.

I envision the same tool being used by a reporter to do an interview (dialog) and for personal self-expression (also dialog, like Twitter, sharing only little bits of information, such as links). An interview consists of dialog: little snippets of information associated by place and time. This has the form of Twitter messages, but a chat application is much better for doing an interview than Twitter, so some new mechanism must be created to accommodate flexible use, moving between story and conversation, between longer and shorter posts, between collaborative and single-author posts.


Google Wave and Portable Social Media

A quick observation about Google Wave.

I wrote some time ago about the problem of social media losing its social context as it moves around the digital universe. I thought some mechanism should be created to enable the social context pertaining to a unit of social media to be portable, so it moves along with it. It appears that Google Wave associates the people who pertain to a document (the authors, editors, people with access to view or edit the content, etc.) with the content in a portable way, through its "wavelets" concept.

It seems possible to share or transfer a piece of collaboratively authored content across the Wave system, and into other systems, with its social context intact. If so, this is a revolutionary step in the evolution of information technology. It gets my vote as the first technology I've seen that could truly be called Web 3.0.

It would only be right, if you downloaded an image from such a Wave-based system to your PC, that it would somehow preserve the social context, perhaps with an XML sidecar or embedded metadata, like the EXIF standard for photographs. The content could then be uploaded back into a Wave ecosystem with its social context intact, possibly even after local edits.
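For illustration, here is a small Python sketch of writing such a sidecar file. The element names and the `.social.xml` suffix are my own invention, not part of any Wave or EXIF specification.

```python
# A hedged sketch of a "social context" sidecar for a downloaded image.
# The schema here is invented purely to illustrate the idea.
import xml.etree.ElementTree as ET

def write_social_sidecar(image_path: str, authors: list[str],
                         editors: list[str], source: str) -> str:
    root = ET.Element("socialContext")
    ET.SubElement(root, "source").text = source
    for name in authors:
        ET.SubElement(root, "author").text = name
    for name in editors:
        ET.SubElement(root, "editor").text = name
    sidecar_path = image_path + ".social.xml"
    ET.ElementTree(root).write(sidecar_path, encoding="utf-8",
                               xml_declaration=True)
    return sidecar_path

# e.g. write_social_sidecar("barn.jpg", ["alice"], ["bob"], "wave://example/abc123")
# The sidecar travels with the image and can be re-read on upload.
```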


By Twine or By Time?

I ran across an interesting answer in an interview about Twine:

[Nova Spivack] I think the above solution would work for this too. Basically you are asking for a new view of the content – not “by twine” or “by time” but “by popularity” or “by relevance to me”.

Notice the question being posed. What he is really asking is: why don't you like the view our "intelligence" provided? Why do you insist on these existing, simplistic views like by time or by popularity?

The last is odd. "Relevance to me" is the primary criterion for all information I want to receive, even if I don't yet know it is relevant, such as when a person I follow on Twitter shares something I've never seen before and would never have found on my own. Do you see? Even that is relevant to me. Everything I want is relevant to me.

I understand what they mean, though. They mean serendipity, like overhearing a snatch of conversation on Twitter by seeing posts from friends of your followers, people you do not follow yourself. But that is still relevant to me; you're just increasing the chaos in my information feed. Perhaps what we need is a "volume control" on chaos in information filtering systems.

Moreover, I suspect that humans, being humans, really want to order information in the ways they are familiar with, the ways their brains were shaped to process information by evolution (hmm, "designed by evolution" is a new kind of design process, contradictory to the usual meaning of design, but it still seems appropriate to say it). The upshot is that people still want to order things by time or popularity. What other measures are there beyond the ones we've always known?

Authorship: we buy a book because the author's name is on the spine or cover in 96pt type. We are buying authority.

Sharing: we "hear it through the grapevine" from our friends, another high-trust information source.

Some finding aids are a form of recommendation, as when we used to go to the reference desk librarian and ask for a book on a subject. This is a kind of sharing.

Look at the role trust plays in gathering and accepting information. Yet, we trust the smartness of crowds (or at least the smartness of cliques) at Wikipedia. I use it all the time and find the information is always a good starting point, usually reliable for technical information.

With trust comes the opportunity for abuse of power: the power of authority to stifle innovation and knowledge, and to sustain false views (think of how the view of Amazonian civilization maintained by anthropologists for a hundred years turned out to be completely wrong, the opposite of reality, despite the application of the "scientific method" and mountains of "evidence", all chosen and selected by a reductionist process that only knows what it measures and can only measure what it sees).


Twitter and the Principles of Illusion

It is worth noting the two guiding principles of illusion are "suppressing context" and "preventing reflective analysis" (according to Tufte, in Visual Explanations). The first applies also to the ubiquitous photographic image, nearly all of which appear without context, a situation that apparently few people find troubling. A good example of the phenomenon is the iconic image from the Vietnam War of a Viet Cong operative being summarily executed by a South Vietnamese police chief. The photographer who took the picture often wished he hadn't, because of the damage the image did when used out of context (as it usually was). Several iconic images from the Vietnam War were routinely presented without context. Interpretation was left up to the viewer, and it may very well be that people at the time did not want to know the context, which let them press the image into the service of their political aspirations or personal psychological needs. Visual media are inherently weak at providing context.

The emergence of email, web discussion forums, short messages, and video sharing, all network-native forms of communication, creates an environment hostile to reflective analysis. What is needed to alleviate this trend is a movement akin to the "slow food" movement, perhaps a "slow media" movement, asking people to slow down, consider context, and think reflectively within a networked information ecosystem. The content of a Twitter stream can be informative, but it can also be trivial, and despite its benefits it does not encourage reflective analysis. I personally find that tweets (Twitter messages) are frequently a touchstone for an innovative thought, connecting me to something I did not know and probably would not have found had someone not passed on an interesting web link or thought out loud. But it would still be nice to pull wisdom from the ether by capturing tweets in some reflective and expandable form.

Although not yet visual media themselves, these concise messaging and blogging systems are most attractive to television journalists. A quick turn to Twitter before the commercial break graces many newscasts. These context-free nuggets are ideally suited to a medium once described as a "wasteland," and it troubles me that networked content has been so eagerly adopted by television news shows. It points out the need for reflection and context in networked short-message content.


I have explored this theme before (see the Twitter Wiki and "quick-slow" bliki articles posted previously). The question is how to accommodate the fragmentary, context-free units of networked content while encouraging expressions of context and reflection to balance them. It is a daunting task because people often do not see a need for context or reflection, and are often unwilling to bother with the story behind a photograph or to take time to expand on a short message.

We need to accommodate the uses for which short messages are legitimate and beneficial (such as the conciseness they encourage: concise writing requires reflective analysis before posting, since you must know your subject well to pare it down to its essentials, and wordiness often just adds confusion; we must also be prepared for abuse of longer text forms connected to short ones). But we must also make it possible for reflection to take place. The "quick-slow" approach to networked content systems encompasses this. We can then turn the two principles of illusion to our advantage by encouraging their opposites: context and reflection.


Biological Construction and Networked Content Creation

The order and symmetry of biologically created structures, such as an egg or the human body, are expressions of how correctly those biological systems worked to construct the natural artifact. Biological organisms are collections of cells cooperating with each other. The order and correctness are an expression of how successful the collaboration was.

An egg comes out more egg-like when the biological processes working to make it cooperate and collaborate more correctly in its construction. I believe this has implications for the collaborative processes operating in networked software development and information science. The biological process of construction is inherently different from the one humans have inherited from their tool-making and industrial heritage. What will we make of it?


Where are we going?

The issue of whether people should pay for forums or not came up on dpreview. With the current economy, I expect how to pay the bills will be a growing question for many web services.

The problem with forums is that there is perfect competition. Anyone can set up a forum and run it for next to nothing. If one forum decides to charge a fee, the users can flee to another. The only reason they might stay is the audience. For example, photographers pay to host their photographs on Flickr primarily because it provides a rich audience of people who love to look at still photographs. Flickr is the Life and Look magazine of our time, the revival of the great picture magazines, not because of its technology (that helped orient the site in the right direction to succeed; just look at the abject failure of Picasa to be social, too little too late). Flickr just happened to be where most people who like to look at pictures gathered, mostly because of its blog-like streams of ever-changing pictures and its social tools. It is easier to pay a small fee to use Flickr (perhaps even to "read" it) than it would be to overcome the "capital" costs of changing sites. Flickr users have a lot invested in Flickr, and it may simply cost less to stay and pay than to move elsewhere. Besides, there is nowhere else to move. The closest thing I could see to Flickr would be for every photographer to put up their own photo blog software and then join photoblogs.org, which would become the "magazine" and "social hub." That is a distributed vision of photo sharing online. I used to wonder which would be successful, but it really was simple: Flickr did it all for you, some for free, a little more for pay, well worth it to promote your photography.

Despite the somewhat juvenile and absurd environment of Flickr with regard to art photography (you know, the dozens of people giving out "Great Photograph" awards to pedestrian, derivative, and mediocre images, mostly to promote themselves or because they are too young to know what a derivative image is), it is useful to professional photographers and art photographers because Flickr is where the eyeballs are. It attracts people who still love still photography; in this age of video, it is a bit of a miracle that anyone takes an interest in photography at all. Yet photographs can make the world sit still long enough for people to pay attention, and that is an experience very similar to poetry, which, at least in part, exists to draw attention to things. I've heard from professional photographers that they get an order of magnitude more requests for work through Flickr than through the professional portfolio sites.

One reason, perhaps the principal one, that Henri Cartier-Bresson and other great photographers became well known was that their images were published in the great picture magazines. When television came along, the picture magazines went into decline. Photojournalism began its long decline at the same time, for the simple reason that people could learn about their world visually through television, a more attention-grabbing and free medium (the barrier to entry for television was lower; you didn't have to be intelligent to watch it, a good example of where a low barrier to entry is destructive to society). Without the picture magazines it was no longer possible for a photographer of acknowledged artistic merit to become known and for their images to have significance in society. The audience was gone. Flickr reestablishes this audience.

So the question still stands: will people in the future pay for their online content? Pay to create it? Pay to consume it? What is happening now? People are already paying to create content. They pay for a Flickr account with better tools. They pay for services to create graphics, 3D art, and property in virtual communities. A few sites charge for reading content, but not many. Given human history and the recent past, when most content was paid for in newspapers, books, and magazines (television excepted), it seems reasonable to assume the free ride will be over someday.

There may be a tipping point when a non-pay site is no longer competitive. When most good content has gone to pay sites and the community of interest for that content willing to pay is consuming all they can (this is what happens with books and magazines today), the other sources will be driven out in a kind of perfect competition. The free sites will be filled with garbage and what passes for content on local cable access.

The network is not the old traditional world of libraries and publishers. It will be different. Project Gutenberg. Open source projects. Enthusiasts sick and tired of the crap shoveled out by the traditional content and software businesses have taken it upon themselves to produce quality products where the marketplace would not or could not. This is an order of magnitude different from the pre-networked world, where people could not work together, contributing little bits of effort or expertise to collaboratively create a cultural artifact. This is entirely new, and we don't know where it's going.

As an aside, the idea of tipping or donation comes up. Frustrated with no way to fund my original website, I considered taking a modern, high-tech variation on the PBS approach. I considered (in the 1990s) creating a content management system where each article would have a countdown timer displayed like a reverse donation thermometer. If you didn't contribute something to the article, it would count down; when it reached zero, the page would be pulled from the site. Of course, the ability to cache networked content presents a threat to such schemes: the Wayback Machine can regurgitate considerable missing content, and so can the Google search cache. What about caching? If Wikipedia's funding were to dry up and blow away today, would its content still remain available in a myriad of niches around the network? On people's computers, disks, servers here and there, in caches? Would it evolve another life in a peer-to-peer environment? Will all information become distributed over billions of cell phones and have no location at all?


Twitter is a 'starfish' enabler

Twitter is a 'starfish' enabler. It's what makes Twitter powerful and empowers those who use Twitter. It puts individuals at the center of the star.

Twitter friends (the people you follow) are more like information flows you choose, a way of organizing the flow of information for yourself and others by curating, editing, and creating, than friends on other social networks, which are more passive, something you collect or at most a space you create to explore. This is because the people you follow bring content to you automatically. It is the flows of information that result from following that make Twitter different from other social networks.

I didn't know much about Twitter when we started designing Farmfoody.org; I thought it was something to do with short text messages on cell phones. I am currently integrating Twitter into farmfoody.org, after having considered a Facebook-style social feed model and finding it overly complex and confusing. We need as low a barrier to participation as possible. Farmers don't have time for complex systems: blogging, social feeds with posts and comments and threads, six different types of publishing, bold and italic.

Neither do people standing at a farm stand with a bag of white corn tucked in their arm have time for complexity. It turns out that the social bulletin system we were envisioning two years ago exactly describes the information flows in Twitter. The way the tweets (messages) of the people you follow aggregate on the Twitter homepage is identical to how we envisioned messages from our users collecting on a user's profile page. In our bulletin system, all the friends of a user receive a bulletin, similar to the "homepage" on Twitter, creating an information flow. The only difference is that bulletins are like craigslist ads and expire. That original requirement for bulletins to operate as classified ads with an expiration date, similar to craigslist, held us back. I should have looked into Twitter integration then, since we would not have needed to develop a system of our own.
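A minimal sketch of that bulletin flow, with invented names and an arbitrary 30-day lifetime, might look like this; it is only meant to show how expiration separates a classified-ad style bulletin feed from a Twitter-style one.

```python
# A sketch (invented names) of the "social bulletin" feed described above:
# each bulletin expires like a craigslist ad, and a user's page shows the
# still-live bulletins from everyone they follow, newest first.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Bulletin:
    author: str
    text: str
    posted: datetime
    ttl: timedelta = field(default=timedelta(days=30))  # classified-ad expiration

    def is_live(self, now: datetime) -> bool:
        return now < self.posted + self.ttl

def feed_for(following: list[str], bulletins: list[Bulletin],
             now: datetime) -> list[Bulletin]:
    """Aggregate unexpired bulletins from followed users onto a profile page."""
    live = [b for b in bulletins if b.author in following and b.is_live(now)]
    return sorted(live, key=lambda b: b.posted, reverse=True)
```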


A Twitter Wiki

As the popularity of short, fragmentary messages grows, I have become concerned the public conversation may lose the capacity for thoughtfulness and reflection. At the same time, I would caution those who condemn Twitter or other systems based on micro content not to throw the baby out with the bath water. The long-form newspaper article found in the New York Times or Washington Post contains a lot of material used to provide background for the reader, often at the end of the article. Not only is this text boring and redundant to the knowledgeable reader, it takes up precious space. The one thing the web is good at is connecting one piece of knowledge to a broader context of other pieces of knowledge. There is no sane reason to keep repeating background and further-reading material in a long-form newspaper article when, on the web, a writer can simply link to the information.

The brief, concise texts of micro content can be connected to many other sources of information, some just as concise (a kind of "blizzard" of small pieces connected loosely), as well as to longer, deeper, and more reflective sources. This loose, disjoint, connected type of writing is simply the network-native way of writing and connecting information. It is beneficial, as long as both kinds of writing and forms of content are available and can be connected.

My concern is really with lowering the barrier of entry, enabling and encouraging those longer, deeper, and more reflective forms of writing. I recognize that there are benefits to shorter, more concise writing, which leaves redundant, expansive, or source material hidden (properly) under a link or connected through a network of tags or a network of people. Perhaps we will see fewer long texts divided up by headings and sections and more small texts connected together through search, tags, and linkages into a variety of wholes, determined by the user's interests and needs.

About ten years ago, I was fascinated by the idea of a long text (article, book, etc.) entirely constructed of fragments, similar to the kind of texts you see posted on Twitter today, which could be freely rearranged like the magnets used to write poetry on refrigerator doors. I imagined that instead of writing a large text as a single coherent whole, the way books have always been written, the pieces of information on a topic could be combined to create a "book" in innumerable ways by rearranging those pieces.

It would be like taking all the paragraphs in a book, shaking them out on to the floor, and then allowing or enabling those pieces to be rearranged for each reader or interest. The pieces would be tied together by keyword or by search result and only lastly by links. I coded a small prototype application called Strands to test the idea, but work and life caught up with me and I shelved it. I was and am still surprised by the ease and rapidity with which people have adopted Twitter.

Not only are people using Twitter, despite the fragmentary nature of its texts, they are participating creatively in shaping the technology and usage of this kind of system based on fragmentary texts.

The use of tagging emerged spontaneously from the user base. Using "hashtags," brief texts can be connected to media, such as images and video, with the tag at the center of a network of content.

Also, I've noticed users are starting to fit the tag word into their text. Some examples are:

"Young Nebraska farmer explains how limiting direct payments would affect his #farm at www.nefb.org"
(Tweet from http://twitter.com/farmradio)

and

"farmanddairyGet four issues of #Farm and Dairy FREE! Click on the big promo on our home page: http://www.farmanddairy.com/"
(Tweet from http://twitter.com/farmanddairy)

At the heart of my Strands prototype were small texts connected by keywords. I wanted to create the lowest possible barrier to entry, so a user could create a keyword (essentially a tag; I called them "strand words") just by writing it into the text. In this system, what was essentially a tag was created by writing it (texts were scanned on post or edit for the presence of tags, and any new ones were added to an index), which is hauntingly similar to how people have started using tags on Twitter. They started out adding the tags to the end of a message, but have now begun incorporating them directly into the flow of text. I hesitated to continue working in this direction on Strands, partly because I expected people would find tags sprinkled through the text troublesome.
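Here is a minimal sketch of that scan-on-post idea, assuming a simple in-memory index and the #hashtag convention; the function and variable names are mine, not Strands' or Twitter's actual code.

```python
# A small sketch of "tags created by writing them": scan each post for
# #hashtags on save and fold them into an index mapping tag -> post ids.
import re
from collections import defaultdict

HASHTAG = re.compile(r"#(\w+)")

tag_index: dict[str, set[str]] = defaultdict(set)

def index_post(post_id: str, text: str) -> list[str]:
    """Extract hashtags written inline and record them in the index."""
    tags = [t.lower() for t in HASHTAG.findall(text)]
    for tag in tags:
        tag_index[tag].add(post_id)
    return tags

index_post("t1", "Young Nebraska farmer explains how payments affect his #farm")
index_post("t2", "Get four issues of #Farm and Dairy FREE!")
assert tag_index["farm"] == {"t1", "t2"}   # tags written mid-text or at the end both count
```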

My current interest is in providing tools or ideas that will encourage and enable a society addicted to short messages, however beneficial they may be, however native to the networked way of writing and reading in a connected fashion, to engage in greater contextualization and thoughtful reflection, to enable collecting some of the knowledge quickly flying by in the "Twitterverse" into slower, more reflective pools of knowledge, like eddies on the edges of a fast flowing stream.

The first tool I want to build is a "Twitter Wiki" enabling anyone to associate a text of any length with a Tweet and anyone to edit it. If I have the energy, I will post any experiments on my site or at least attempt to describe it.


Social Micro-blogging and bookmarking

It hadn't occurred to me until I saw it being done that social bookmarking and social microblogging are both popular in part because they create flows of information edited and curated by experts.

One good reason to follow the bookmarks of a user belonging to a social bookmarking site is simply that they are a source of good information. The bookmarks ought to be high quality and relevant within the expert's topic area.

It makes sense to follow the tweets on the homepage of a Twitter user (or a user of any microblog system), because they represent a selection of edited and curated information, for free, usually from an expert.

A Twitter homepage combines the posts from the users a person follows, which amounts to multiple levels of curation. Suppose a number of people practicing organic farming create Twitter accounts and post information they feel is important. Suppose then that an expert in organic farming, perhaps an editor of an organic farming and gardening magazine, becomes a Twitter user and follows the tweets of those practicing organic farming. Suddenly, this user's homepage becomes a fountain of curated knowledge on organic farming.

The same phenomenon occurs (without the vertically integrated curation) on social bookmarking sites. A social bookmarking system is an example of horizontally integrated curation, since many hands organize, but one result does not necessarily flow into another, progressively filtering content. What if you could follow another person's bookmarks and aggregate the bookmarks of everyone you follow onto your profile page?

The question presents itself: Where did I go wrong?

I had thought of bookmarking as bookmarking and blogging as blogging, each highly personal, one organizing content for an individual's own use and the other for publishing (in a mostly traditional way; I tended to scoff at the idea of blogs as conversations, but they are, just not as flexible and immediate as microblogs). What I missed about the social aspect was the flows of information they create, which are curated. I envisioned years ago the idea of users collaboratively organizing the content of a website, but that wasn't all that great an achievement, since it is essentially what wiki users have been doing from day one.

I thought about applying it to all the content, thinking perhaps I could, Tom Sawyer-like, get others to organize my stuff, but my second thought was: who would want to do that? I think most efforts to get users to organize content will fail, and I think most efforts have, except where the social ingredient is in the mix. Collaboratively edited social content sites for news, bookmarks, and short messages do work, but through self-interest in the flows of information they create. They become platforms for self-promotion.

There is no incentive for you to come organize my site. Flickr allows users to tag other users' pictures, essentially to organize content for another user, just as I thought would be possible, but I see no evidence it is being used much, except for the Flickr Commons, where there is an incentive to surround historically important images with context, to tell their stories so they will be preserved and meaningful for society, where local historians can demonstrate their knowledge, and where self-promotion is possible through the organization of other people's stuff.

Despite seeing the social ingredient at work in wikis, and despite seeing the essentially social (and pioneering) organization of the CPAN library, I missed its importance. Its importance comes from the curated flows of information created by the social organizing, editing, contextualizing with narrative, selecting, and filtering that occur in social media systems.


Twitter as curated news feed

When I follow another Twitter user, their posts (tweets) are included on my homepage, which is public. This amounts to creating a kind of "newspaper" news feed of content "curated" (selected and managed) by me. The problem is, for example, that our farmfoody.org Twitter account (established primarily for communicating announcements to users) could become a kind of "mashup" of farm and food related news by following Twitter users posting on those subjects. However, that would result in clutter and chaos, since there is no way to organize the flow of content onto my homepage.

What is needed is a way to tag posts. It would be nice if posts could be tagged according to topic and each tag converted to a tab, which would separate the streams of information. There could be a #farm and a #food tag (using the hashtag convention), and a Farm and a Food tab would appear on my homepage, allowing readers to choose the topic they are interested in following. I suppose they could just follow the individual sources, but what is needed is a curated aggregation that lets Twitter users follow an "edited" flow of information through Twitter (like Reader's Digest? It might even be possible for Twitter users to give a thumbs up or down vote on what content should appear in a particular flow, or to tag collaboratively).

If I could categorize posts based on farm or food topics, it would be useful to me and my followers. If the users I am following could tag their posts, it would accomplish much the same thing and relieve me of some work (of course, merely following does in a sense create the mix, but it is all jumbled together). It is not so important who organizes the content as that it gets organized with the least effort possible.
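As a sketch of the tag-to-tab idea above, assuming the tweets are plain strings with hashtags already written inline (the names are invented for illustration), splitting one merged timeline into per-topic streams could look like this:

```python
# Split one aggregated timeline into per-topic "tabs" using the hashtags
# already written into the tweets. Illustrative only.
import re
from collections import defaultdict

HASHTAG = re.compile(r"#(\w+)")

def tabs_from_timeline(tweets: list[str]) -> dict[str, list[str]]:
    """Return a mapping like {'farm': [...], 'food': [...]} for the tab bar."""
    tabs: dict[str, list[str]] = defaultdict(list)
    for tweet in tweets:
        for tag in {t.lower() for t in HASHTAG.findall(tweet)}:
            tabs[tag].append(tweet)
    return tabs

timeline = [
    "New #farm stand hours posted for Saturday",
    "Heirloom tomato #food recipe from one of our growers",
]
print(sorted(tabs_from_timeline(timeline)))   # ['farm', 'food']
```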

I don't know if anyone is working on something like this, but it seems reasonable that Twitter would be working on internal mechanisms for organizing that flow. The hashtags solution appears not to be scalable, since it requires following the service's user account (so posts can be scanned for tags); it is more of a prototype.


User Curation of the Archive

We need to enable people to curate collections. This means blogging the contents of an archive, which can be as simple as a blogger selecting certain items (by surrogate, typically a picture, but also a 3D rotatable image or video) and posting them to the blog along with any caption available through the collection's online database. They don't have to say anything original to be useful. The basic requirement is that archives place their collections online, giving access to potential curators outside the archive. Curating is anything a user does to create context for the cultural artifact: commenting, annotating, writing that contextualizes the artifact (like wiki pages).

User curation of the archive helps people feel connected with the archive and its contents. Involving non-academics in the archives is important for the continued existence of an institution and its collection, the value of which exists partly in the memories of people and partly in the objects themselves. I learned from genealogy that people only preserve what they care about, that people care about things when they have meaning, and that stories give artifacts meaning.

The context becomes a bigger and bigger net as it grows, bringing in more people from search engines. The effect of this net can be powerful, as I learned soon after getting on the web in 1995. I put a collection of family photographs from the mid-19th century online, and within a few months several relatives of the people in the photographs had found them and made contact with me, after more than 120 years of separation. This was when the web was very small and its users a very small percentage of the population. My idea to cast a net with the pictures and captions had brought in the catch I desired, helping to identify individuals in the images and reconstruct the family history, both photographic and genealogical.

I was intrigued by a project called Social Media Classroom. It shares features with ones we envision for Folkstreams as a platform for creating access to archives, but we also recognize that our site is mostly used in the classroom and that features of our site are shared with features of a classroom, such as the "contexts" accompanying our films, including transcripts.

Archives will, through online access, become an integral extension of the classroom. There will be less of a distinction between archives and classroom (and the public).


Content is the person

In discussions about farmfoody.org, the idea came up that recipes represent people in a way similar to the way avatars represent people, only much richer, because they contain search-engine-friendly content. The recipe becomes a way for people to explore farms by navigating to the profile the recipe belongs to, then exploring the connections between users (producers and consumers who are friends).

Content is the person. I think we will see more of this as social media continues to expand and evolve. This can be seen, again, on Twitter, where the person is represented by the content. When you go to a person's Twitter page, you see mainly their content. The "profile" is in the background. This allows Twitter to prominently display information about the content stream, because it does not have to deal with ten different kinds of content under ten different categories. Tweet streams have counts for following, followers, and updates. If there were ten types of content on the page, and personal information counted each time it was changed, what would constitute an update? As it is, it is clear that updates mean the number of posts. Following, followers, post stats. That's it. Clear and concise.

Twitter succeeds by not being all things. It is a tool. It puts the profile in the background and the content on the page. We can speak of a "Twitter page" because we know what is on it, unlike a Facebook page, which can have almost anything on it. People know what you mean when you say "go to my Twitter page."

Our thinking about the direction farmfoody's features should take is being guided by these concerns.


Thoughts on Twitter

I've been thinking about why Twitter is successful, and why some other services that attempted to compete with Twitter by offering "improved" features, like Jaiku, were not. Twitter had the first-mover advantage. In the last month or so, the buzz about Twitter has spread to average people through its use on cable television networks and through cases where people reported on news events via Twitter from their cell phones. Those are all well and good, but there are other reasons for Twitter's success.

One is the simplicity of its presentation. The real estate devoted to profile and "friends" or user to user relationships is compact. The profile is brief and concise. The friends (following) and followers are represented by badge-like elements showing the number of following and follower users, with the numbers linked to listings. The followers are displayed as compactly as possible, represented by tiny icons arranged in rows and columns. The various kinds of posts are filtered by clicking a navigation menu item in the sidebar.

Twitter reverses the idea of a profile. The content is the profile and the profile becomes background to the content. When someone visits a person's Twitter page, they want to read the latest posts. The user goes straight to the posts. Most sites make you go to a profile and then to the content. If they like the content and want to know something more about who is posting, they can look at the little profile box containing the name and brief bio or click the link to visit their website. This difference contributes significantly to the usability and attractiveness of the site compared to other social networking sites. Twitter is a tool, not a "Swiss army knife" like Facebook, so it can take this approach. It should be a lesson to any designer or developer, even of more complex, layered sites.

The typical jumble of posts in the Twitter message stream explains why the developers of competing sites saw room for improvement. When replies enter into the message stream, it becomes a single-threaded discussion. It seems reasonable to let users reply directly to a message, creating a threaded discussion. Twitter might look similar to Facebook's Wall, where certain posts may have comments posted to them, creating a limited kind of threaded discussion. I believe this misunderstands how people use Twitter and why they use it. If Twitter users were looking for a simple, online threaded discussion forum, there are plenty of free microforum services to be found.

I believe Twitter users do not want a threaded discussion because they value the immediacy of tweets. There may be a way to capture conversations going between cell phone users in an intuitive and simple way, but I'm not sure what that is. Activity posts, like "What is Steve doing now?" are unlikely to elicit conversation, but as I've seen on Facebook, they sometimes do burst into conversation. I believe friends use the posting of completely uninteresting and unimportant information about their activities as a way to touch base, through a brief conversation. It's like talking about the weather. It may be possible, if the interface is sufficiently transparent, to support threaded discussions. Facebook does a good job implementing the thread as a collapsible series of posts below the post.

One thing in passing: it is clever how Twitter enables linking to other users' Twitter pages through a wiki-like notation for usernames in replies (using the @ symbol, @twittername). It helps solidify the username as not just a name used for authentication, but as a symbol representing a person. In a way, wikis have had this from the beginning, since it was traditional to create a page using your own name, which could be linked to in "talk page" discussions and the like.
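For illustration, the @mention convention can be turned into links with a few lines of code; the regex and URL pattern below are simplified assumptions, not Twitter's actual implementation.

```python
# A quick sketch of the @mention convention: turn "@twittername" in a post
# into a link to that user's page. Simplified for illustration.
import re

MENTION = re.compile(r"@(\w+)")

def link_mentions(text: str) -> str:
    return MENTION.sub(r'<a href="https://twitter.com/\1">@\1</a>', text)

print(link_mentions("thanks @farmradio for the link"))
# thanks <a href="https://twitter.com/farmradio">@farmradio</a> for the link
```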


Friendship Rot

I may not be the first, but I have noticed something on social networks that hadn't occurred to me, although it should have: if we have link rot, why not rot in the relationships between friends? I've noticed on our farmfoody.org site that there are some lapsed users, since their email addresses are bouncing. They still have friends on the site but no longer participate. These are ghost relationships suffering from what could be called friendship rot. I suppose Facebook must have millions of people who no longer participate but still have accounts and friends. It must have a terrible friendship rot problem. I suppose this form of relationship rot extends to LinkedIn and other sites that depend on navigating networks of relationships between people.


Information Evolves and Other Stuff

I've learned to avoid precategorizing anything in my bookmarks.

I don't make a category unless it is necessary, unless I am using it. For example, I need to bookmark Amazon Web Services, so I create an Amazon Web Services folder, but I don't create a Web folder with a Web Services folder inside, which I then put the AWS folder in. I don't have any other web services bookmarks yet in Google Chrome, so I leave that for later. It would just create more folder depth to dig through before it's needed.

I also try to avoid adding a bookmark just for reference. That just leads to clutter, where I can't find the bookmarks I use on a daily basis, because when you categorize information according to its classification or how it relates to other information, you lose how it relates to you, to usage. For example, if in browsing the web I find a half dozen interesting resources on manual focus lenses, but for cameras I don't use, those bookmarks will obscure the resources I use for manual focus lenses for cameras I do use. What I do now is add bookmarks to the categorized hierarchy only when I use the content or need it now, not for reference or in anticipation of future use. The others I place in Uncategorized (what a wonderful idea, that Uncategorized anti-category!) awaiting the day they become useful and can be categorized, or I place them in a special hierarchy called Reference. I don't know if a parallel hierarchy will work, but it does keep them out of my way.

I hate digging through deep categories. Yet proper categorization, so things can be found later when you've forgotten where they are, needs deep categories. If I have five different web services providers, each one needs its own folder, and there will be clutter if I just create them all at the same level. So I need to create a Web Services folder, which then adds another annoying, slowing, confusing layer to finding what I want and to my thinking. I want Amazon Web Services when I want it, without digging through Web, then Web Services, to get to it. What if I use it every day? I have to dig each time.

This brings up another issue: information structures evolve.

One of the problems library scientists create comes from this need to pre-create categories. They must predict every category that will be needed ahead of time. I was once told I needed to create "name authority records" for every photographer in a database I envisioned of 19th-century photographers before a database collecting names from old card photographs could be built. At that rate, the database would never be built, and besides, the whole purpose of the project was to collect the names so we could see who was doing what and look for patterns. If we already had an authoritative name for each one, we wouldn't be doing the research.

Don't engineer. Evolve. Evolve. Evolve.

We don't need architects and engineers; we need some new job description with a new name, "evolvineer" or something, for the person who creates a framework for information evolution (maybe like the game Spore?). Perhaps databases like multivariate databases or Lotus Notes will help get us there.


Blogging the Archives

A vital interest of mine is access to archives. I've been interested in the possibilities inherent in the web and the network for increasing access to archives and enabling a greater number of non-academics to browse, organize, and surface archival holdings. One of the most significant ways of exposing the holdings of an archive is blogging its contents.

We really haven't got there yet, but I've noticed a small trend, which I hope signifies the beginning of exponential growth, of people blogging artifacts. I do not remember the first site I came across where a blogger was posting pictures of artifacts, usually photographs from an online catalog of a museum, but here are some recent finds.

Illustration Art

All Edges Gilt

If we could just get every artifact in the world's museums and archives photographed or scanned and online, and give the tools to blog the contents to millions of ordinary people interested in telling the stories of these cultural objects, think of how rich that would be. I don't know if people will do this, but I do know that ordinary people have a lot to contribute. Academics cannot know everything; each is an isolated individual, no matter how expert, and there is a very Long Tail out there of family members, amateur historians, hobbyists, and who knows who else who know something about cultural and historic artifacts. Maybe they will be willing to contribute. It will likely be only two percent, like Wikipedia authors, but that small percentage can do a lot of good.

As an aside, author and developer Liam Quin has a site, fromoldbooks.org, which has great potential to provide fodder for bloggers. The interface to this digital archive of old book scans is easier to use and better than the ones I've seen institutions deploy.

I wonder, also, if this phenomenon is not somehow similar to the Cinematheque: not just an archive, but one concerned that people actually view and interact with the artifacts.

Update: Shorpy is a commercial site that shows how successful blogging the archives can be. The site appears to have developed a following, with, I imagine, readers checking in each day to see what new photographs have been posted. The blogger acts as curator by selecting images that will be of interest to readers and arranging them into albums, possibly by narrative (using Tabloo would be a good way to achieve this).

This fits exactly with the idea of people being able to easily find images of their local area in the past, and with the idea of "blogging the archives" at its simplest and most effective. The power of simply posting images and their captions, without any commentary, is surprising. It is encouraging to see people are interested and willing to participate in the interpretation and "unpuzzling" of old photographs. One of the pleasures of old photographs is rediscovering what lies behind the mysteries the images present.


Why Tag Clouds are Beating a Dead Horse

Tag clouds are dead. I don't want to mince words. I've been waiting for a long time for someone to say so, to let everyone see the elephant in the living room. What interests me is why tag clouds are dead.

About ten years ago I was working on a prototype web application. It never saw the light of day. It was called Strands and consisted of a wiki-like content management system that allowed anyone (it was based on SoftSecurity) to create pages and to post and edit content. Any author could include single keywords in the text. These would be automatically scooped up and entered into an index. You could display the posts associated with (containing) any keyword, listed on a page like search results. The idea was that content could be navigated in any number of ways according to keywords added by users. It wasn't social; it didn't know which user had contributed a keyword. The idea was to destroy hierarchy and create a user-centered order to information, something close to a folksonomy (but not quite, because it didn't care who submitted a keyword). One version did not allow linking between pages, no "wikiword" links, the idea being that all navigation was by keyword links, either in content or on the "strand" pages listing all content belonging to a keyword.

One of the other ways of navigating I considered was by keyword popularity. The system could generate a list of keywords based on how many posts contained or were associated with them. You may start to find the elements of this system familiar. "Strands" are posts listed by tag. Keywords are tags. Navigating by popular keywords is a tag cloud. The ideas for this system partly developed out of work I'd seen on the web where posts were ordered by a single keyword. The other reason was that I have a terrible time categorizing anything; I can't decide which category something should go in. I am incredibly bad at categorizing and hate doing it, so I decided the wiki element would let visitors to my site categorize my junk for me.

If this were not a blog, I'd spare you all this personal history, but it does show you why I am interested in the question of why tag clouds suck.

When I visit a website with a tag cloud, I tend to pay close attention to it, yet I noticed that I never bothered clicking on the tags, never used them. When I thought about why, one of the things I noticed was that nearly every tag cloud consisted of a handful of large tags I could count on one hand, while the rest were undifferentiated in size. One solution that came to mind was displaying tags by popularity on a logarithmic scale, which could help increase the difference between the less popular tags. I'm not that great at math, so I would need to leave it to someone else to work this out. But the idea is to create greater visual differentiation among the less differentiated tags.

The other problem with this is there are only so many font sizes that are easily usable on the web. This worsens the differentiation problem.
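A minimal sketch of the log-scale idea, assuming only a handful of usable font sizes (the sizes and bucketing are arbitrary choices, not a worked-out design), could look like this:

```python
# Map tag counts to a small set of font sizes, spacing the buckets
# logarithmically so mid-popularity tags still look different from the
# rarest ones. Sizes in px are arbitrary.
import math

FONT_SIZES = [11, 13, 15, 18, 22, 28]   # the few sizes that read well on the web

def tag_cloud_sizes(counts: dict[str, int]) -> dict[str, int]:
    lo = math.log(min(counts.values()))
    hi = math.log(max(counts.values()))
    span = (hi - lo) or 1.0              # avoid divide-by-zero when all counts are equal
    sizes = {}
    for tag, count in counts.items():
        t = (math.log(count) - lo) / span          # 0.0 .. 1.0 on a log scale
        idx = min(int(t * len(FONT_SIZES)), len(FONT_SIZES) - 1)
        sizes[tag] = FONT_SIZES[idx]
    return sizes

print(tag_cloud_sizes({"farm": 120, "food": 40, "wiki": 8, "wave": 2}))
# e.g. {'farm': 28, 'food': 22, 'wiki': 15, 'wave': 11}
```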

The other concern I had developing the keyword-based application was that chaos would ensue. People tend to prefer order. Would it help or hurt for people to be navigating by tag? Tags don't always apply to the subject. Their strength is freedom, freedom from controlled vocabularies and rigid meanings, but without those restrictions tag-chaos can reign. Wikis always had a kind of randomness to them, and so does content structured and navigated by tags.

I almost never click on tags in WordPress blogs for this reason. It usually produces a result that widens, not narrows, my search. Nielsen observed that clicking on a link carries a penalty, and the trouble with tags is that they carry an uncertainty penalty.

The closest I've ever seen to a realization of the keyword-based navigation idea is a photo gallery developed by Alex Wilson some years ago. You can see it still in operation here. It's a great idea and an excellent implementation; I don't know why I didn't go ahead with my own version instead of abandoning it (doubly so, since the eventual goal was organizing photographs). It makes the homepage a tag cloud, and each detail page with a photograph displays a vertical row of thumbnails of photographs linked by tags, which is very similar to the way the Strands pages listed posts according to tag (like Flickr pages with the tags next to the image). Alex recently switched to a standard gallery system for this exact reason: visitors and customers apparently found the tag-navigated album confusing.

I love tags. I use them in this blog the way I feel they were meant to be used: I just write down any significant word about the subject that comes into my head. I don't care that this creates long lists of tags, since I only use them as a memory aid. They are terrible for people navigating the site, and categories would probably serve visitors better. Tags aid memory, they aid discovery and exploration, but I'm uncertain that they are good finding aids.

I'm sure others have observed this before, and since I've kept quiet about it I may be late to the party, but it's still a useful discussion to dissect why tags ultimately fail to live up to the (strange to me) hype they received. Every new web technology seems to be announced like the second coming.

So, yes, complaining about tag clouds is probably beating a dead horse. Even the little sets of tags next to blog posts don't do much for me, not even on my own site, and as far as I can tell they don't do much for visitors either.

The other thing that tortured me while developing the keyword-based navigation was whether to allow spaces in keywords, which would prevent combining keywords like chicken+soup and create confusion (separate keyword threads of navigation) between "farmers market" and "farmers_market." I worried a bit about misspellings, but not too much, since I didn't like controlled vocabularies.
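One common way around the spaces problem is to normalize keywords before indexing them, so "farmers market" and "farmers_market" collapse into the same strand. A small sketch of that kind of normalization; the rules here are illustrative, not what Strands actually did:

```python
import re

def normalize_keyword(raw):
    """Collapse case, spaces and underscores so variant spellings of the same
    keyword land on the same strand (misspellings are not handled)."""
    cleaned = re.sub(r"[\s_]+", "-", raw.strip().lower())
    return re.sub(r"[^a-z0-9-]", "", cleaned)

assert normalize_keyword("Farmers Market") == normalize_keyword("farmers_market")
print(normalize_keyword("Chicken Soup"))  # chicken-soup
```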

References: Tag Clouds_Rip, and ZigTag, which is supposed to solve these problems.

Labels: , , , , , , ,

More Thoughts on Technology and New Visual Journalism

I truly believe there is potential for an online media publishing system centered around the style of visual journalism that cameras like the G1 make possible. The rhythm of shifting between video and still photography, in the hands of a capable, creative visual journalist, could be expressed through an architecture and presentation suited to it. The combination of video and still images has the potential to create in the viewer a sense of surroundings, a picture of the whole event seen two different ways.

The mix of still and video is suited to the idea of "quick-slow" development, where the first captures can be uploaded for rapid presentation with little or no supporting information, and then later more images can be added and stories written to flesh out the first-blush images. Video can be edited to explain and give context to the event, or stories can be added to give context to the visuals. The combinations are endless, given a sufficiently flexible system.

Brief posts of video or stills can flow into a stream of consciousness, blog-like, photostream-like, until there is time to reflect on the event and compose stories that give context and explain the images, adding them later. Both needs of journalism, immediacy and reflection, are met.

By the way, I feel that Flickr represents not a "photo sharing" phenomenon but a "photo looking" one, which essentially fulfills the function of the great picture magazines, Life and Look. The popularity of Flickr, I believe, is due to the same phenomenon: an audience that enjoys learning about the world and getting its information visually.

Labels: , , , ,

Transforming Conversations to Knowledge

This problem of how to transition the ephemeral but timely information found in forums, Q and A sessions, and all the forum-like forms on the network from Usenet to Twitter into something lasting is one that fascinates me. I've thought about it for a long time without really coming up with any good mechanism for capturing the knowledge and experience of the forum, the group, and moving it from scattered, individual, unrefined forms to coherent, refined forms maintained by the community. I think the idea of automatically transitioning content created by an individual into community property is a great one. It may meet with some resistance from individuals, but I think it is a good solution to this problem, since anyone can start a conversation that does not just spin out into the oblivion of old forum posts, but can become a seed that grows into a well maintained, coherent, concise source of information.

Where does my interest in this issue come from?

As soon as I got my first website up and running, I wanted an email discussion group. It wasn't long until I was using Smartlist to maintain my own email discussion list. The discussions provided a wealth of high-signal information that was otherwise lost or scattered among messages. One of my tasks was to glean the best information from the list and edit it into a concise summary of the conclusions drawn in the conversation, which went into a single web page.

While working as a tech support person about ten years ago, I made it my practice to glean solutions from our customer forums and distill them into concise answers I could repeat to future customers who experienced the same problem.

I thought there must be some way to automate or smooth this process of collecting the knowledge contained in conversations into a concise article form. It would be necessary to create some kind of bridge from forum to wiki. I thought about this on and off over the years, and tried creating a few tools to help with the processing of forum threads into articles, but until I stumbled across this idea of automatic promotion from individual post to wiki page, I could not see a way to do this that people would actually use.

It really seems this would work well with the quick-slow rhythm of a bliki, to automatically promote "blog" posts to "wiki pages" according to some criteria. I'll have to think about this some more.

In any event, there is another mechanism for easily capturing knowledge from users. We are seeing entire sites developed around a question, like Facebook's "What are you doing now?" or Yammer's "What are you working on?" or Whrrl's "What are you doing and where are you doing it?", with a threaded discussion or map as the result, which is then shared with friends. Sites like del.icio.us use self-interest to capture knowledge from users without their realizing they are doing the site's work for it. In social bookmarking, by giving users the opportunity to store and organize their own bookmarks, the site gathers the material for communal organization (or discussion, and so on, if you take it further).

Labels: , ,

Capturing and Refining User Expertise

One of my longtime interests has been how to create a system that captures the knowledge of experts and refines it into a single resource. I was attracted to wikis early on by their communal authorship, but found the lack of structure unsuitable for my needs. What I wanted, for two of my early efforts, one a site intended to help family photography historians answer questions about old photographs and the other a site for programmers to find help with coding questions, was a way to let users engage in a Q and A and then somehow capture and distill the expertise into a more traditional article format (like a wiki page), which could be maintained by everyone. I wanted to capture the expertise emerging from the group discussion through some mechanism.

I ended up developing a content management system for the coding site, which had the ability to "fold" a comment thread attached to an article back into the article for editing. I also developed a tool that could take a forum thread and turn it into article text for editing. These solutions required a lot of manual effort to whip the unruly comments into a coherent article.
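For the curious, the "fold" operation amounted to appending the comment thread to the article body as raw material for an editor to prune. Something roughly like this sketch, with hypothetical data structures rather than the CMS's actual code:

```python
def fold_thread(article_text, comments):
    """Append a comment thread to the article body as editable raw material,
    ready for a human editor to prune into a coherent article."""
    parts = [article_text.rstrip(), "", "Folded discussion (edit down):"]
    for c in comments:
        parts.append("")
        parts.append(f"{c['author']} wrote:")
        parts.append(c["body"].strip())
    return "\n".join(parts)

draft = fold_thread(
    "Dating a daguerreotype by its case.",
    [{"author": "anna", "body": "Check the hinge style first."},
     {"author": "raj", "body": "The velvet lining also narrows the date."}],
)
print(draft)
```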

All along I wanted to introduce the communal editing feature of a wiki to this process, but I faced the obstacle of how to overcome the distinction between communal content and content owned by the user posting it. I racked my brains to design the system so that question and answer sessions centered around a code example or problem could be "folded" into a more communal source of information, refined and with conclusions drawn, enabling a transition from personal content to communal content. But I never found a solution.

Originally, I had wanted to develop my coding help site as a Q and A site like Experts Exchange. This explains why I needed some way of converting the knowledge captured by a Q and A session, if it produced a solution, into article form. A Q and A session usually exposes a lot of valuable knowledge from experts. I wanted a way to capture and refine this so people could learn to code better from it.

Stackoverflow.com is a Q and A site for coders. It is simply excellent in design and execution. What fascinates me most is their concept of a "Community Post." When a post is edited by more than four users, it is promoted to a Community Post, which is editable by every user and no longer belongs to the original owner. Apparently, they use a wiki-like versioning system for their posts, so the original post is owned by the original posting user, subsequent versions I suppose are owned by their editors (the users who revised them), and after four unique edits the post becomes the property of the community.
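The ownership rule itself is simple to express. Here is a sketch of the promotion check as I understand it from the description above; the exact threshold and bookkeeping are assumptions, not Stack Overflow's actual code:

```python
def is_community_post(owner, revisions, threshold=4):
    """Promote a post to community ownership once enough distinct users other
    than the original owner have revised it. The threshold and accounting
    here are assumptions based on the description above."""
    editors = {editor for editor, _ in revisions if editor != owner}
    return len(editors) >= threshold

revisions = [("bea", "v2"), ("carl", "v3"), ("dee", "v4"), ("eli", "v5")]
print(is_community_post("anna", revisions))  # True: four distinct non-owner editors
```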

This mechanism provides a smooth transition from traditional _authorship_ to the communal writing style of the wiki, where the community is the author and authorship is anonymous. I wish I had thought of it, since the original idea for my site was a "code wiki" that would not just provide solutions to programming questions but help coders learn from the results and improve their skills. I don't want to rehash my failures with phphelp.com, but to highlight an innovative way of providing a smooth transition between individually owned and communal content.

One of the questions raised by this is authorship. People like attribution because it builds their reputation, and in a wiki environment they lose it: a user's post becomes a community post. So what happens to a user's credit? One solution is to create an indirect proxy for credit in a communal authorship environment, so that good authors earn "badges" or "reputations" that they wear independently of any particular post. Instead of a "byline" for your post, you get a badge representing the amount and effectiveness of your contributions.

Which is better? Everyone owning their own content or communal content? It really depends on the audience and goals of the site. Some people prefer to own their own content and share it. This is how most social media sharing sites work. You own your content and your friends own their content and the site provides a way of sharing it. Social bookmarking sites also enable users to keep their own content separate from others and then the content is mixed and matched through tag navigation. A wiki-style system generally views content as communal. Stackoverflow solved this problem with a novel mechanism for transitioning content from individual to communal status.

It occurred to me this mechanism might be valuable in a so-called bliki system, which is a blog and a wiki combined. In a bliki, users create quick, timely posts like blog entries connected to dates, but they can also edit the content of posts to create and reference wiki pages. This enables users to make quick sketchy entries like a blog, but then later, reflect on those entries with longer posts. This is called "quick-slow" in bliki terms. What if this process could be facilitated by automatically transitioning the "quick" blog post into a "slow" wiki page? Instead of making a blog post then creating a wiki page linked to it with extra information, the blog post would at some point transform itself into communal content, from blog post to wiki page. Authorship would still be retained because each post would still exist in the wiki history. Anyone could go back to the original blog post to see who posted it and what it was about.
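As a sketch, the promotion criteria for such a bliki could be as simple as checking a post's age and how much it has been revised. The specific thresholds and field names here are only illustrative guesses, not a settled design:

```python
from datetime import datetime, timedelta

def should_promote(post, now=None, min_age_days=14, min_edits=3):
    """Decide whether a 'quick' blog post has settled enough to become a 'slow'
    communal wiki page. Age plus edit count is only one possible criterion;
    the post's history is kept either way."""
    now = now or datetime.utcnow()
    old_enough = now - post["created"] > timedelta(days=min_age_days)
    return old_enough and post["edit_count"] >= min_edits

post = {"created": datetime(2009, 7, 1), "edit_count": 5}
print(should_promote(post, now=datetime(2009, 8, 1)))  # True
```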

Labels: , , , ,

Social Realms: Sharing and Publishing Become One

There is an increasing recognition of the importance of 'social realms' within the context of social networking. Some social sites started out as "walled gardens" where only friends could see the social content a user posts. Other sites started out with all posted content being public, like a graffiti wall. Social site builders now recognize there should be many fine gradations of control over viewing and sharing social content. These social realms extend out from the user in concentric circles, from the user being able to see their own content ("me"), to friends, to friends of friends, to networks or groups of friends, and finally to the public.
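Ordering the realms from most private to most public makes the access check almost trivial. A minimal sketch, with realm names assumed for illustration rather than taken from any particular site:

```python
# Realms ordered from most private to most public, per the concentric model above.
REALMS = ["me", "friends", "friends_of_friends", "network", "public"]

def can_view(post_realm, viewer_relationship):
    """A viewer may see a post if their relationship to the author is at least
    as close as the realm the post was shared into."""
    return REALMS.index(viewer_relationship) <= REALMS.index(post_realm)

print(can_view("friends", "me"))                  # True: the author always sees it
print(can_view("friends", "friends_of_friends"))  # False: shared only with friends
```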

Blogging was always seen as a form of publishing. The new systems emerging now are centered around "social blogging" or "social news feeds" and go by various names. Facebook merged their "wall" application and their "mini-feed" application into a single feature called The Wall, an example of one of these new forms for facilitating social interaction between small groups of friends in an asynchronous manner (as opposed to chat or telephony). Like Twitter and Jaiku, they enable "social peripheral vision": seeing what your friends are doing and passing brief notes back and forth to keep in touch or coordinate activities. These posts are not publishing in the traditional sense, since in theory they are intended for friends (although some sites offering these services create a kind of public feed everyone can see).

The Wall on Facebook has all the elements of Jaiku or other similar sites: a series of blog-like posts, each limited to a brief snippet of text, in reverse chronological order, with the ability for users to comment on them. What makes them social is that the posts are seen by your _friends_, who are the only ones who can comment. So you could post about going to the farmer's market on Sunday, and a friend could comment by asking you to pick up some tomatoes. Another friend could comment that they will be at the same market and will meet you there. Comments are an important feature because they enable individualized topical conversations. If friends could only post to the "circle of friends" feed, the conversation would become disjointed. Social posts are the start of conversation.

This just emphasizes the need for social realms that determine the scope in which social content is accessible. Facebook offers several social realms for Wall posts: your own, your friends, your friends of friends, your network of friends, and the public.

The last is interesting, because it brings us full circle. Most platforms were publishing platforms before the social networking craze; then platforms emerged for social sharing without any publishing. Now the two are converging into a single platform for sharing with granular control over the social realms into which any piece of content goes, from sharing with a circle of friends to publishing to the whole world, and every gradation in between.

Publishing has a completely different feel to it than social sharing. It requires different tools, ones which facilitate authorship but have no need for defining the social realms in which the works of authorship will be consumed. I had watched the emergence of Twitter and Jaiku but failed to see their significance, since their posts were so brief. I saw them as length-limited blogs, an idea I had toyed with in the late 90s, but bloggers were more interested in longer and longer posts, being literary types. They were interested in publishing. It was only when I finally understood the social use of these short-message systems (it is no accident that the popularity of SMS corresponds with the popularity of these small-message, blog-like systems) to keep people in touch that I saw their usefulness. It makes little sense to criticise the inane or brief posts on Twitter for not contributing to human knowledge or letters; the purpose of these sites, as is said of Jaiku, is to maintain social peripheral vision (something I didn't even know I needed, and which still feels uncomfortable in the "buddylist 24/7" way it is presented). Maybe someone should start a site called "Tome" for long posts of intellectual brilliance contributing to the total of human knowledge, a mirror image of Twitter. Or perhaps that was what Blogger was supposed to be.

The convergence between sharing and publishing, which began with the original c2 wiki and the lowering of barriers to a read/write web, is emerging as a powerful new metaphor for interaction. Publishing will come to be seen as just sharing with everyone. All content, all media will be social, and social realms will determine the intended audience.

At farmfoody.org, we will be moving quickly to provide our users with this kind of close-knit interaction, which eschews both the private message metaphor derived from email and the blog metaphor derived from publishing. A graffiti wall is too public and random to be of much use; private messages are stultifying and open to abuse, since anyone can send a private message across social realms. The blog was intended for publishing and the feed for syndication, but this new format, the social feed or blog, converges sharing and publishing into a form easily digestible and controllable by users.

Labels: , , ,