
Why Anthony De Rosa Is Joining Circa, And What He Plans To Do When He Gets There


Reuters' star social media editor Anthony De Rosa is leaving for a new role: editor-in-chief of fledgling news startup Circa. Circa is a news app (cofounded by Arsenio Santos, journalism crowdfunding platform Spot.us founder David Cohn, Ben Huh of I Can Has Cheezburger fame, and Matt Galligan) that aggregates and repackages news from around the web in a made-for-mobile format: Stories are broken down into "atoms" of information intended to help mobile readers easily consume news as it occurs throughout the day. Some of the app's most intriguing features are the ability to "follow" an ongoing story and the app's ability to un-bold the headlines of stories readers have already clicked on.

Several years ago, a startup De Rosa worked for was acquired by Thomson before it merged with Reuters. In addition to the hyperlocal blog network he cofounded, Neighborhoodr, De Rosa's success as a sort of human newswire on Tumblr and Twitter paved the way for his jump from Reuters' business to editorial side in 2011, when he was named the company's social media editor. Since then, he's become one of the most well-known social journalists online and an active voice in the ongoing conversation about the future of news. He took a little time yesterday to chat with me about his new job, his desire to disrupt the form of the traditional news article, and his editorial vision for Circa. Below is the transcript of our conversation, slightly edited for length and repetition.

So you’re leaving Reuters--why did you choose to go to Circa?

I’ve been talking to Circa even before they started the company--when it was in its infancy it was called something like “The Moby Dick Project,” and I met Ben Huh at NewsFoo, which is this gathering of journalists that happens every year. Ben was at one in Arizona, at the Cronkite School. Ben presented this idea that he had--he had very similar ideas that I had about things that we thought that we weren’t doing the way we should in a digital format for news. We were still kind of stuck in this old newspaper paradigm.

Some of the ideas that he had were that users should be able to remove things that they already know or they’ve already seen, and get rid of the clutter of a news site. Also: Tell me what I need to know in a story that happened recently--that’s the way people write blogs, but it’s not the way people write traditional articles. There’s 10 other, 15 other ideas that he had for The Moby Dick Project--after that he met Matt Galligan, who ended up becoming his cofounder, and that kind of distilled this down to a mobile product, and that’s basically how Circa began, when they decided that what we want to do is figure out a way to present news in a mobile format in a way that nobody’s doing it right now, so that it’s easier for people to consume news on a mobile phone.

I met with Matt a couple months after I met with Ben, and I started to see the early incarnations of Circa and thought it was really amazing. I thought what they were doing is really smart--at the time, I was still working at Reuters, I was still very happy with continuing what we were doing there, but I was always very enamored with what Circa was doing.

Finally, the last month or so, I just decided it was time to do something different, and I kind of was itching to get back into the startup world again, and work for a smaller company. You know, there’s all sorts of politics and issues with big companies--of course, there’s going to be issues with a startup as well, but I was just interested in getting back in that world. Matt is a really smart guy, he has a lot of successes in the startup world, he started two companies, he sold two companies. All the elements just seemed right with Circa--that they’re embarking on something new, that they’re trying to do something no one else has done before, and that they look at news presentation in the same way--that it’s broken--as I do, and they want to fix it. That’s something I always wanted to focus on and make something I do--it’s something I jump out of bed and think about.

Yeah, I know you’ve talked about it a lot on your Twitter feed and you’ve blogged about the state of the article. You know, we have a new site at Fast Company called Co.Labs. It’s sort of become our place to experiment with media. One thing we’ve begun to talk about is ways we can sort of usefully and productively disrupt the notion that the article is, to borrow a phrase from Circa, necessarily the atomic unit of our journalism. One of the things we’ve been doing is we’ve been experimenting with these stub posts. Basically, they’re almost like slow live blogs, where we’ll track a story over a long period of time. It seems like what we’re trying to do with these stub articles shares some kind of conceit with Circa. I wonder if you think, is there some greater evolution of the article going on right now, something maybe less disjointed than a Twitter account or something sort of between a live blog and an article? How do you think the article is evolving online or should be evolving online?

Well, first, I definitely did notice that you guys were doing that, and I was really happy that you’re thinking that way and trying to figure out a better way to present the different things that you were covering. I think there is an evolution, but it seems like it’s taking so long to happen--we’ve been presenting news in a digital format for almost 20 years now, and we’re still really kind of stuck and tied to this inverted pyramid model, which I think is really kind of broken. People have tried and experimented--most of the work I did at Reuters was through live blogging, and I kind of felt trapped in the traditional article format, never really felt it was the proper way to present a story that was in progress. Being able to have more developer resources at my fingertips and working on a smaller, nimbler team--being able to think about how we can do these things and present things in a way that makes more sense--is something that’s really attractive to me. That’s the opportunity I’m going to have with Circa, being able to focus on mobile, where I think more people are spending more of their time than anywhere else.

I think in just the traditional companies that are out there, you look at like the established places like the New York Times and Reuters and so on and so forth--it’s harder because, you know, you have a lot of other factors at play. You have the newspaper, with the New York Times, when they’re producing their articles they have to think about how this is going to be presented in a print format, and are you going to ask your writer to produce two versions, a print version and a digital version, because it really is going to be completely written in a different format? If you think about the way it’s written in digital, and you really want to make it digitally native, you have to break it down to different atoms, and you’d have to have it sequentially written differently. It’s a different narrative format. So--that’s another issue, I wonder if companies that are still printing a newspaper or magazine can figure out a workflow for this.

And Reuters had another difficulty with this, because they’re feeding their news to clients… they have to think about all the clients that they have.

It presents a very complex problem for a lot of people who are still doing print and people who have a lot of different masters.

Right. Just to go back one step to something you said earlier about the problem of being trapped within this idea that every article has to take the shape of this inverted pyramid: I use Circa, and one thing I’ve noticed is that in those little atomized sections, it’s not always the newest stuff on top of a Circa story, sometimes it really is still sort of the same kind of opening you would see in a traditional article. Do you think that, just in terms of how we are trained to consume information, is it more difficult to break away from that pattern than we might assume?

Well, I don’t think the app is perfect yet. I think there’s a lot of things we’re going to have to figure out and think about how we can present this best for people who know a bit about the story already and people who are coming into the story fresh. I think we’re going to have to try to decide what’s the proper presentation that we can use that will allow the reader to make a decision as to whether or not this is the first time they’re reading this article or first time reading something about this topic--and if they already know a bit about it, maybe we don’t show them that background information or that stuff that happened earlier in the story. I think that the app, in some ways, has to become smarter about that, because as you pointed out, we’re still giving them the preamble or the thing that happened way earlier in the story that people who started reading it 10-15 hours ago already know about. So those are things that will need to evolve. It’s still very much a work in progress and those things will definitely get dealt with over time.

I also would like to see--I know the app is really focused on telling you what you need to know about some of these stories, but when we eventually build up our editorial resources and look to present this on other formats, like tablets, I do want us to think about longer reads and more of a narrative. So it wouldn’t be entirely the “get me up to speed very quickly,” you’d also have an option when you’re sitting on your couch and reading something on your tablet, you’d be able to get into something that’s a little bit more meaty, but still really beautiful, atomized formatting so you can take in bits and pieces of a story at a time--and have different elements of a story, whether it be video or photos. It’d be really interesting and fun to think about how that would look on a tablet format.

It’ll be interesting to see how Circa attacks that--one thing Co.Labs is doing is, they have these stub posts, and they basically branch off features from different parts of these stub posts. But at Fast Company, what we’ve ended up doing is approaching our stub posts almost as growing articles, so by the end you have several sections that have different subheds, and all those stubs have turned into a big article. It’ll be interesting to see how that can be tackled for Circa if you’re planning on taking this multi-platform approach between a phone and a tablet.

Yes, it’s very different. You’re not going to build the tablet app the same way you build the phone app. Much more space, you know people are going to spend much more time reading on the tablet than they do on the phone, so you can go a little bit deeper, you can spend a little more time on narrative. I think you can do stuff like--I don’t know if you saw the thing that The Guardian did about this massive brush fire that happened in Australia. They did such a beautiful multimedia presentation for this. I thought, “this would be perfect for something on the tablet.” And they atomized the story, so you have to step through it. I can see something like that potentially being similar to what we would do on a tablet because we can really take advantage of audio and video--and I know Matt and everyone at Circa is really, one of the things they put high importance on is design and how things look and making it a really beautiful experience, and this thing that The Guardian did I think is just amazing, it’s something that I think when we’re thinking about the tablet format, it’s something that I would really consider as kind of a goal to reach.

Another thing that has occurred to me a lot when I’m looking at Circa, and I think this is a particularly relevant question for you because the habit of outlinking has been such an important issue for you the past few years, is that outlinking behaves so differently in Circa--so it doesn’t happen inside the text, but it’s actually a style that pre-dates the web, it’s footnoted and it’s annotations basically, or footnoted citations. Thinking about design and the experience of reading, is that an improvement over our current attribution system on the web, is that something you agree with?

Well--I think you have to think about how it’s formatted for a mobile device, and I wonder--I see some benefits potentially of having the links directly in the text and that’s something I may consider bringing up when we’re looking at future versions...I don’t see any reason that you’d want to put the links outside of the text unless there was a stylistic reason or they thought for the readers' sake it was creating a better experience. I don’t think they’re trying to hide their work at all. I think one of the main things that Circa believes is that you should show as much work as possible and you want to give people all the resources available that went into finding out the information that article presents.... I can see that evolving, it’s something that I think is going to be part of the conversations and deliberations that we have now that I’ll be working for them as the editor-in-chief, and it’s something that I’ll definitely consider more if people feel like having the links in the body of the article would be a better experience than the way that they have it now.

Well, along those lines, another thing I think is interesting is that there’s a lack of bylines, because a traditional rewrite person, or just a rewrite person like, in the contemporary landscape on a web news desk, or a blogger who aggregates news in their posts, they all give themselves bylines, even if the information is secondhand--and Circa has no bylines. I’m interested in that, too--is that more or less transparent, or is that actually an effort toward not taking credit for other people’s work?

The way that Matt described it to me is that it’s sort of more of a team effort, there’s a lot of people who have their hands in all these articles, nobody really owns any of the articles that are presented in the app, so it would get kind of lengthy when it gets to bylines because it’s not really an individual that’s behind each article that’s being presented there. I understand how folks may want to be able to say, “so and so at Circa had written this,” but the way that they’re building these articles, it’s not necessarily in the traditional way--the reason behind it is mostly because there’s not going to be enough there from a single person for it to truly be a single byline.

Even at Reuters, if you look at some of our articles, you’ll notice that there’s no byline at the top, and there’ll be maybe “edited by” or it’ll have a number of people at the bottom. That’s indicative of the fact that a lot of that story was written or had been reported elsewhere beforehand--you’ll see that this was in the Wall Street Journal or this was in the New York Times. Byline only comes into play when it’s something that’s been originally reported, someone went out and found this news themselves--when we’re gathering information from other sources, you want to put the sources ahead of the byline, because a lot of this is coming from other places, and we’re distilling it for you. A byline would indicate that we’re doing original reporting here, which is not the case.

I find this so interesting because Circa doesn’t write its own articles in that traditional sense, right? It’s aggregating content from the web and repackaging it, and it almost seems like now, in this era when every news organization is rushing to get a version of the same story as everyone else up on the Internet at the same time, it seems like Circa is in some ways attempting to disrupt the whole notion of a rewrite.

Right. I wrote about that recently--the fact that it drives me insane that we’re spending all this time matching stories when we should just be giving a brief overview and then just linking out very quickly. It’s just spending way too much time on just getting our own version out, which I think is ridiculous, so yeah, I’m really happy to not have to continue to be part of that game and I think that what Circa is trying to do is just credit the sources that found this information in the first place, let people know that they’ve done it, and then brief them and send them out if they want to read more about it.

Correct me if I’m wrong, but there’s no internal social features, like commenting, in Circa, which I think is really interesting because you are super active in your own comment threads around the web. I’m curious what you think of that as well.

Commenting would be difficult because it’s such a small format on the phone, but I do feel like Circa hasn’t really attacked how they’re going to use social to get a network effect just yet, and that’s part of what I’ll be doing, is trying to come up with a social strategy for making it more well known, more part of the conversation when a story breaks--that’s something I’ve discussed with the team and I feel like is currently something they could be doing better, making Circa more of a place people are going to go when they hear about a big story breaking. I think social creates that beacon where people are reading that information, to be sent off to learn more about it. I think I’ll be able to help them with that. I have a lot of ideas about how they can do it. It’s something that they haven’t really focused on initially--they were really trying to build a really great app and that was really their focus, but now as I’m joining the team you’ll see Circa out in the wild a lot more, whether you’re spending time on Facebook or Twitter or anywhere else. It just wasn’t one of their priorities starting out, and it definitely will be as I’m joining the team.

So what will be the difference between what you’re doing and what David Cohn is doing?

David is going to be in between development and editorial, sort of the liaison between the two. I’ll be focused primarily on editorial...But we’ll be working very closely together--his role is director of news and mine will be editor-in-chief.

Another observation based on the way you work. On your Twitter account or your Tumblr or wherever, you’ve always been pretty--even though you come essentially from this giant, evolved news wire--you’ve been pretty open with your perspectives and infused a lot of your personality in your work. Circa seems, so far, it’s pretty just-the-facts, it’s not infused with a lot of perspective...do you think that’s something that should remain the same at Circa, are you bringing those skills over from working at a wire, or is that something you see evolving over time?

I think in the mobile, in the phone space particularly, I don’t think people have time for opinion and commentary--they just want, “tell me what I need to know, tell me all the facts, tell me what’s going on.” I think on the phone it will probably continue to be that way. I think when we get to the tablet, there may be potential for commentary, for opinion, for bringing in some people who have a specific point of view--if you have time and want to lean back on the couch and read what someone’s take is on something, that might come into play. But I think on the phone, it’s going to be strictly the facts, strictly get to the point, tell me what I need to know, informing people very quickly about different topics. I think format kind of necessitates that you’re kind of pithy and giving people just the information that’s important.

Is there anything else for Circa that’s part of your vision for how it will evolve?

I think the opportunity to do even more in terms of making the app visually pleasing and getting video into play--right now there’s no video, that’ll be coming in the future. I think the iPad presents some really interesting potential--not just the iPad, but all tablets--so it’ll be really fun and interesting to see how we can do all sorts of things with the tablet...There’s just a lot of things that are really exciting for me that are coming down the pipe with Circa, I think over time people are going to really understand what the potential is for focusing on news in this format and really thinking about what’s available to you when you break away from the traditional newspaper/magazine/print idea that hasn’t really evolved very much. That’s really the most exciting thing here for me, we’re really focused on “what can we do with these digital devices that’s so native to these devices, that nobody’s really spending all their time thinking about when it comes to news.” I’m really excited to spend all my time thinking about that.

One of the interesting things about Circa is that it relies so much on human beings, and not on robots--I’m curious, I know you’re more on the editorial side, but beyond ads or getting acquired, how can an app like this make money?

Yeah, I know it makes investors really scared when they hear that there are a lot of humans they need to power their companies, like, “Okay, so what’s the algorithm, how do we automate this?” We’ll probably figure out some efficiencies here and there, where we can figure out ways that we don’t need to do certain things, but if you’re in the business of media or editorial, you’re always going to want to have someone as a human layer--otherwise, it just doesn’t work, I mean I don’t feel like you could ever really present a media product that’s completely automated--those companies are out there and most of them are really crappy. They don’t really feel natural and they’re not customized in the way that you want them to be, they sort of get part of the way there and there’s a lot of junk that I really don’t care about. The human layer is still very much important.

The way that you make these companies profitable is where you can be efficient and where you can utilize technology so that humans can focus on the important matters and not spend their time doing things that are very repetitive or can be done by machines. It’s kind of a balancing act between the two, but every company has to have some human capital.

Disclosure: I first met De Rosa a few years ago, at Shake Shack in Times Square for lunch one afternoon--he was an API product manager at Reuters at the time. We talked about how my old employer might be able to use the new CMS tool Reuters was launching, and exchanged ideas about social analytics. Since then, we've continued to occasionally share ideas; we've served on panels together and he's included my work in some of his social media posts.

Correction: An earlier version of this story stated that De Rosa was Reuters' first social media editor. He was not.


Want to catch up on the ever-evolving world of online media? Take a look at our tracking post: The Future Of Journalism.



How Editors' Lab Amsterdam Sees The Future Of News


Editors’ Lab Amsterdam looks like any other hack day, with attendees skewing towards the young, male, and sartorially challenged, but the majority of participants are not actually developers but journalists or designers. “Traditional hackathons are designed for coders,” says organizer Antoine Laurent. “We want to involve journalists and designers since the main objective is to introduce them to a more collaborative way of working. So we can't do only coding since then the journalists will just sit there being bored.”

Editors’ Lab, which is organized by the non-governmental organization the Global Editors Network, kicked off in Argentina in 2012 and has since visited newsrooms all over the world from India Today to the New York Times. On this occasion it’s hosted by the Dutch public broadcaster NOS. Teams must consist of at least three members: a journalist, designer, and developer. Some smaller newsrooms have to rent the latter for the event. At the Dutch Editors’ Lab, there were teams from the major Dutch newspapers, broadcasters, and small online-only newsrooms.

"The real difference we see between the teams in is product management," explains Laurent. “Everywhere we have been, you have good coders but are they used to developing and managing news applications. Journalists also have to learn how to write down the requirements and specifications and how to talk to developers. In the U.S (The New York Times hosted the last Editors’ Lab), the room was full of teams who are already working in this setup in their newsrooms.”

Each lab has a theme, which in Amsterdam was “new journalistic tools for reaching young readers.” The teams’ concepts ranged from the winner, national newspaper De Volkskrant’s site to help readers aged 7 to 12 discover news, to a clever tool that replaces the audio in a video with something funnier in order to entice young readers to watch videos on politics. “We focus on tools for journalists which a newsroom can implement to regularly and easily produce innovative content,” says Laurent. He highlights the winning project from Editors' Lab Paris, a CMS plug-in that uses facial recognition to let journalists upload a picture, crawl the newspaper’s picture database for related content, and tag people in the photo.

One project which caught my eye was from Internet-only newsroom Follow the Money, which specializes in complex financial investigations. “Last year, before the presidential election in the U.S., Follow the Money did an article on Mitt Romney's tax affairs that's still the most read article on (national newspaper) De Volkskrant,” says Follow the Money collaborator Richard Jong.

Follow the Money wanted to address shorter attention spans and varying levels of knowledge of readers without dumbing down the content. The solution was Story Browser, a presentation layer on top of the original text. “Everything we see on the Internet is a pretty direct translation of straight, old-style newspaper articles,” explains Jong. “We decided to cut it into pieces and let clever algorithms decide what the order of the article should be. It's a cloud of rich media chunks where the items which are hopefully most important for you are displayed proportionally larger. The complete article isn't mapped into the diagram but only the most important fragments. You can read it in a chronological order, read in the order of what your friends think is most important, or what the editors say is most important. If you are logged into the website and we can see that you have never read an article on Bitcoin, we can display the explanation of Bitcoin larger.”

Jong built the new layout by scraping existing content and extracting titles, images, and the most important comments. “Often the comments on financial investigations are way more important than the article itself since they are from financial experts.” This was converted into a JSON file, imported into JavaScript infographics library D3 and used with a force-directed graph layout.
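To picture that data layer, here is a minimal Python sketch of the kind of {nodes, links} JSON file a D3 force-directed layout conventionally consumes, with node size scaled to importance. The field names and scores are hypothetical, not Follow the Money's actual schema:

```python
import json

# Hypothetical article fragments scraped from the original story, each
# scored by importance (editor weighting, reader history, friend signals).
chunks = [
    {"id": "romney-intro", "title": "Romney's tax affairs", "importance": 0.9},
    {"id": "bitcoin-101",  "title": "What is Bitcoin?",     "importance": 0.4},
    {"id": "expert-reply", "title": "Top expert comment",   "importance": 0.7},
]

# Links connect fragments that belong to the same thread of the story.
links = [
    {"source": "romney-intro", "target": "bitcoin-101"},
    {"source": "romney-intro", "target": "expert-reply"},
]

# D3's force layout typically consumes a {nodes, links} object; a radius
# field lets the front end draw the most important chunks larger.
payload = {
    "nodes": [
        {"id": c["id"], "title": c["title"], "radius": 10 + 40 * c["importance"]}
        for c in chunks
    ],
    "links": links,
}

with open("story-browser.json", "w") as f:
    json.dump(payload, f, indent=2)
```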

Many of the more ambitious features are not yet implemented, but Story Browser could be a very promising approach to giving new readers an overview of a complex story or set of related stories. “It’s a very ambitious concept and this is one way of doing that. It's not the perfect solution,” says Jong.


Why We're Tracking This Story

Chicken Little says journalism is dying. Well, there are certainly many struggling journals, especially among those that came of age before the Internet. Ask a publisher where the big bucks went, and they're likely to mumble something about Craigslist, the loss of print advertising dollars, and an inability to sustain a newsroom with digital advertising.

On the flip side, Matthew Yglesias, writing for Slate, says to ignore the doomsayers: The news-reading public has never had more and better information at its fingertips. Thanks to social networks and ubiquitous mobile Internet, anyone can report from anywhere in the world at any time. As a result, more news is available on more subjects than at any previous time in history.

If you ask me, this is the real cause of publishers’ struggles. Many of the functions they used to perform are simply no longer valuable in a world where everybody, and increasingly nobody in the form of automated sensor networks, can report basic information.

Worse still, news organizations by and large missed the boat on Internet technology and are only now starting to catch up, just as an increasing number of new technology companies set their sights on replacing even more functions that journalists used to perform exclusively.

So what value can news organizations provide in order to survive? Fortunately, there’s no shortage of opinion from academics, technologists, and journalists themselves. This is my attempt to track those ongoing conversations and add my thoughts as both a technologist and a journalist.


Previous Updates

What Journalism Can Learn From Open Companies (And Vice Versa)

Gittip founder and open company devotee Chad Whitacre did the unthinkable for a startup: He turned down an interview with TechCrunch. Here’s why he did it, and what that means for journalists and open companies everywhere.

Update: After reading the article below, Gittip founder Chad Whitacre invited me to have an open conversation with him about openness, software and journalism. The result was a 45-minute talk that I'm posting here without comment. For context, scroll down. Otherwise, here's the video:


Yesterday, Gittip founder Chad Whitacre declined to be interviewed by TechCrunch unless he could record and publish the full, unedited conversation online. His reasoning? Gittip is an open company (in fact, it inspired payments startup Balanced, which we recently profiled), and he tries to do as many things out in the open as possible. TechCrunch declined, and Whitacre wrote about the experience on Medium. His main point is that he thinks making all interviews open provides more value for everyone than keeping them closed just so one publication can claim a scoop:

To me, that looks like it exposes journalism as a zero-sum game, and I don’t play zero-sum games, if I can help it. In my worldview, having multiple journalists conducting interviews and having multiple journalists writing stories based on those interviews is an overall win for readers and for humanity.

He’s right. As I’ve written before, in a world where anyone can break news and spread it around the world in seconds simply by tweeting, being first to a story isn’t very valuable. Publications like TechCrunch try to artificially create value by demanding startups give them exclusive stories, but this tactic will become increasingly less effective as companies and individuals find ways to reach large audiences without these publications.

Companies have good reason to go around places like TechCrunch. After Whitacre posted his thoughts on Medium, two journalists took him up on his offer of an open interview: Brian Jackson of ITBusiness.ca and Mathew Ingram of PaidContent. In the interview with Ingram, which I live-tweeted for reasons I’ll discuss in a moment, Whitacre explained why he was comfortable offering TechCrunch something they would likely turn down:

“TechCrunch is a machine. How many stories of new startups are they stamping out every day? What value is it to me, building my company, to be just one of another stream of stories that floats by on TechCrunch?”

In this way, an open interview functions as protection from gatekeepers like TechCrunch that provide artificial value. This view is understandable given TechCrunch’s recent reputation for being unstable and full of conflicts of interest. But by trying to protect himself from places like TechCrunch, Whitacre also limits the coverage he’ll see from journalists trying to provide their audiences with more value than just a scoop.

In the interview with Ingram, Whitacre equates journalists synthesizing raw information into understandable stories with engineers creating easier-to-use abstractions from more complicated systems. This is a great metaphor, except that interview material isn’t analogous to a preexisting underlying system. Professional journalists use interviews to extract value that wasn’t there before. Otherwise, why not just interview yourself and post for all to see? (Ed. note: that’s called a press release.)

That’s why journalists rightfully feel a sense of ownership over their interviews. It’s one thing if all you’re doing is grabbing a few extra quotes to footnote a press release. It’s another if you’re trying to tell a complex, in-depth story like the piece Co.Labs editor Chris Dannen wrote about iPad app Paper.

Chris provides value in researching and writing a long piece like the Paper story by tying together many disparate threads of thought into one coherent narrative. He doesn't just do this when he writes his article, he does it by asking the right questions in interviews. If we had live-broadcast these interviews as they happened, publications could have collected and posted all of the best bits with their own framing.

It would be analogous to releasing only the back-end for a web app and having other developers gain traction with their own front-ends before you have a chance to launch yours. You might have planned to release a really elegant and simple version, but now you're starting in a hole, competing with everyone else's features. That's not a recipe for startup or journalism success.

It may not sound like a bad thing for multiple publications to put their own spin on an article, or companies to fork software for their own use. In the interview with Jackson, Whitacre calls this "open-source journalism," and cites it as one of the reasons he prefers open interviews. In fact, this happens all the time in the form of re-blogging and aggregating. The difference is that re-blogging the interview before Chris had a chance to write his article wouldn't be aggregation at all. Instead, it would force Chris to compete with himself to add value to the story.

In fact, this very problem came up during Whitacre's interview with Jackson:

Jackson: “Actually I was just thinking that other writers could watch this and write the story for me.”
Whitacre: “So did you even see that somebody already tweeted that?”
Jackson: “Yeah I did, I was talking to that guy. I’m kidding. I will write this.”
Whitacre: “It’s up to you, right?”
Jackson: “It would be funny, though, if I was too lazy to write it myself I just do the interview and then I’m like ‘Oh, well, this guy wrote it up for me.’”
Whitacre: “It’s open-source journalism, it’s fascinating.”
Jackson: “Why not? Saves me the time. I’ll go write a different story, right?”

Jackson did write his own story, but it raises the question: Why? Here I am, writing a piece of it for him. Did he blog his own version just so I would have somewhere to link to?

If you believe that professional journalists create value by interviewing, there’s one other problem with open interviews. If you watch a few of the open calls on Whitacre’s YouTube channel, you’ll notice that they’re fairly performative. Whitacre in particular seems to have mastered an informal but polished style that suits him well.

I mentioned above that I live-tweeted the interview as it happened. One of the reasons I did this was to see how both parties behaved, knowing for certain that they were being watched (by a journalist, no less). During the interview, Ingram admitted that he was “conscious of my questions because I don’t want to look like an idiot when someone watches our interview,” to which Whitacre responded that he would get used to it.

There are a lot of tech personalities who are good at manipulating the press via performance (a certain former Apple CEO comes to mind, for instance). One of the nice things about a private interview is that people often stop performing when they know that they can take statements off the record if they need to. Similarly, as a journalist you can try ways of getting answers that don’t necessarily come off as smart or polite without worrying about what people will think of you.

If you don't think that's a problem, scroll up and re-read the exchange between Whitacre and Jackson I posted. I wonder if Jackson would like to have that back? Maybe he'll tell me in an open interview of his own.


Publishing first means breaking news--and maybe battering your own reputation. Because of the chaos of events like the Boston bombing, media outlets like the New York Post and the crowdsourced “FindBostonBombers” campaign on Reddit routinely identify the wrong suspects--and push reports to the public.

During emergencies like Hurricane Sandy or the 2011 London riots, false rumors and fake photos abounded on social networks like Twitter. When even professional journalists get it wrong, how can you tell fact from fiction and ensure that you are not sharing false information yourself? Do your own digital fact checking.

Claire Wardle is the Director of News Services at Storyful, a startup founded by veteran Irish journalist Mark Little, which verifies social media content like YouTube videos for news organizations. “Lots of people were sharing stuff around Hurricane Sandy which was fake. It would have taken them two seconds to do a reverse image search,” she says. ”After the recent helicopter crash in London, the Guardian had an image up for 2 hours purporting to be of the crash which wasn't verified.” Storyful’s verification process combines tech tools and old-fashioned journalistic skills as described in a recent blog post on how the company verified a video taken during the Boston marathon bombings.

Reverse image search tools like Google Search by Image and TinEye return where an image appears online and therefore help you to track its original source and history. For example, during the London riots in 2011, there was a rumor circulating on Twitter that tigers had been released from London Zoo, including a photo of a big cat on a city street. The photo turned out to be of a big cat which escaped from a circus in Italy in 2008.

Also during the London riots Twitter users circulated a photoshopped image of the London Eye on fire. FourMatch is a paid image forensics tool ($20 buys you a demo key to analyze up to 30 images within 30 days) which can detect whether or not an image has been tampered with. TinEye also sorts results by how much an image has been modified.

When looking at video, Storyful tries to match the location where the video was purportedly filmed with the real location using tools like Google Maps and Street View. Wikimapia, a Wikipedia for maps, is useful for identifying districts or buildings which appear on commercial map services like Google Maps. Maplandia has some similar features. Geofeedia’s location-based social media search can help to determine if an image or video was actually sent from a given location.

Finally, Storyful checks the social media profiles from which images and video originated and links them to people. Use DomainTools to check domain ownership and Spokeo People Search or Whitepages to find information about people in the U.S. in particular.

What if you want to make it easier for images or video you capture yourself to be verified? Wardle’s first piece of advice is to geotag it. Only a tiny proportion of social media content is currently geotagged. ”Witness (a non-profit which highlights human rights abuses) has two pieces of technology: Informacam and Obscuracam. Obscuracam hides all metadata like location; Informacam is specifically for activists who want to give verification information.” The metadata includes information like the user’s current GPS coordinates, altitude, compass bearing, light meter readings, the signatures of neighboring devices, cell towers, and Wi-Fi networks.
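Reading those geotags back out is straightforward. Here is a short Python sketch using the Pillow imaging library; the tag IDs come from the EXIF standard, the filename is hypothetical, and `_getexif()` is Pillow's long-standing (if semi-private) accessor for JPEG EXIF data:

```python
from PIL import Image  # pip install Pillow

GPS_IFD = 34853  # EXIF tag id for the GPSInfo sub-directory

def dms_to_decimal(dms, ref):
    """Convert (degrees, minutes, seconds) rationals to signed decimal degrees."""
    d, m, s = (float(x) for x in dms)  # recent Pillow rationals support float()
    sign = -1 if ref in ("S", "W") else 1
    return sign * (d + m / 60 + s / 3600)

def read_geotag(path):
    """Return (lat, lon) from a photo's EXIF GPS fields, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(GPS_IFD)
    if not gps:
        return None  # no geotag -- the case for most social content, per Wardle
    # Numeric keys per the EXIF spec: 1/2 latitude ref/value, 3/4 longitude.
    return (dms_to_decimal(gps[2], gps[1]), dms_to_decimal(gps[4], gps[3]))

print(read_geotag("eyewitness_photo.jpg"))  # hypothetical file
```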

“The behavior of the people on the ground is changing,” adds Storyful founder Mark Little. “The activist will point the camera at a minaret before going back to the focus of their story. The general public will start to realize that they should geotag their tweet if they want it to be seen. The motivated crowd will become much more literate in helping us help them.”

If you realize that you have distributed false information on Twitter, one enterprising developer has created a tool called Retwact, which lets you issue a correction or apology. Followers can view a side-by-side comparison showing both the older, incorrect tweet as well as the newer, corrected one.

This update contributed by Ciara Byrne


Today Reuters launched a preview of its next-generation web platform. Called Reuters Next, it’s a massive improvement over its legacy site for many reasons. It uses cutting-edge technology, which Reuters Tech Editor Paul Smalera took to Twitter to brag about at length two days ago.

The site also adopts a design and presentation concept that’s becoming popular online called “river of news.” The idea is that because users often come into a single article on your website rather than your homepage, every article page should be filled with content, like a homepage. The Atlantic’s business-focused sister site, Quartz, is a pioneer of this concept.

Reuters is taking the concept a step further by treating visits to an article as a signal that a reader is interested in a particular topic. When you click on an article about Facebook, the site will search its database using a new deep tagging system and surround the story with links to other articles about Facebook. In an interview with Nieman Lab’s Justin Ellis, Reuters Head of Product Alex Leo said their goal is to help readers fully understand the subject of the article:

“We wanted to create an experience for users that would give them the right amount of breadth and knowledge that they need from Reuters.”

This sentiment hits right at home for us, because one of the goals of our Co.Labs Tracking stories (you’re reading one right now) is to provide you with a choice: If you’re already familiar with the story, you can stop reading after this update. If you’re new to it and want more information, all you have to do is scroll down. We don’t assume either.

Reuters’ new site is a great first step, but there’s one problem: Even as the website tries to be helpful by giving you context if you want it, every article is still a traditional 800-word news article with a structure that assumes you’re new to the issue. If you want to get all of the new information out of it, you have to read through the entire article, background paragraphs and all.

At the end of the day, content is still the main reason people visit a news website. Adding context as an additional service Reuters provides to the reader is great, but the 100-year-old article format makes it far less valuable than it could be. We’d like to see Reuters continue to experiment--not just with its website, but with its content.


Publishers will be frightened to hear that Facebook is the new home of grammar--and any Facebook developer will tell you so. Spend some time building a Facebook app (or perhaps now, a Parse app) and you're presented with the Facebook App Dashboard, which is essentially a dynamic mad-lib generator with a few different inputs.

But yesterday Facebook announced that its semantic efforts are much more ambitious than perhaps anyone previously thought, forcing us to consider the idea of machine-written and machine-interpreted language as a part of our normal, everyday conversations, many of which happen (these days) on Facebook.

If you're not familiar with the kind of "language" Facebook speaks today, you can see it has a kindergartner's command of English from the Open Graph overview:

The actor: This is the person who published the story.
The app: This is the app that publishes the story on the actor's behalf. Every story is generated by an app and every story includes the app used to create it.
The action: This is the activity the actor performs, in our case 'finished reading'.
The object: This is the object the actor interacts with, 'The Name of the Wind', a book.

Developers can create custom subjects, objects and verbs, too:

Objects are publicly accessible web pages hosted on the Internet, and almost any web page can be an object. Objects are public information. If there is no common action available that meets your needs, you can create your own custom action type. For example, if you're building an app to track rock climbing you may want to make an action 'climb' where the object is a mountain.
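To make the mechanics concrete, here is a rough Python sketch of how a custom action like that was published through the Graph API of the era: an HTTP POST to an app-namespaced verb, with the object's URL passed as a parameter. The app namespace, object page, and token below are all placeholders, not real values:

```python
import requests

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # placeholder, not a real token

# POST /me/{app-namespace}:{action} with the object type as a parameter.
# Here: a "climb" action whose object is a web page describing a mountain,
# mirroring the rock-climbing example in the documentation above.
resp = requests.post(
    "https://graph.facebook.com/me/myclimbingapp:climb",
    data={
        "mountain": "https://example.com/mountains/rainier",  # hypothetical object page
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())  # on success, the id of the newly published story
```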

If Facebook learned to be a little more linguistically flexible, there's no reason it couldn't write status updates for you, publish blog posts about trips you've taken, or compile other summary/analysis writing based on passive feedback from other apps that are connected to Facebook. This is Facebook becoming your personal narrator and scribe.

It's all explained in a 2,700-word technical post Facebook published yesterday about the lexical analysis built into its new Graph Search engine. It's no surprise that, as Facebook teaches itself to speak using intelligible statements, it might also learn how to read what people are writing. But this sort of lexical analysis is much more powerful than Facebook is letting on, because it allows Facebook to associate nodes on their network with a potentially unlimited number of sobriquets--making the computer better at understanding what you're talking about with little or no context.

Semantic understanding without context is most vital in search, and especially in mobile search, so this area of Facebook research should be no surprise given that Facebook Home is probably a nascent attempt at grabbing smartphone OS market share. But semantic technology doesn't have to be restricted to tasks like search; in fact, this sort of lexical analysis makes it easier for Facebook to "listen in" on the topic being discussed on a page, serve targeted ads, or spawn calls to action (the way Facebook reminds you to hit up old friends). From Facebook's post:

Our team realized very early on that to be useful, the grammar should also allow users to express their intent in many various ways. For example, a user can search for photos of his friends by typing:
“photos of my friends”
“friend photos”
“photos with my friends”
“pictures of my friends”
“photos of facebook friends”

Facebook has also taught the system to compensate for subject-verb agreement errors and other common grammatical mistakes. But where things get even more portentous for publishers is when it comes to Graph Search's focus on synonyms, dialects, and slang:

The challenge for the team was to make sure that any reasonable user input produces plausible suggestions using Graph Search… The team gathered long lists of synonyms that we felt could be used interchangeably. Using synonyms, one can search for “besties from my hood” and get the same results as if he had searched for “my friends from my hometown.”

If you think that the way people talk on the web will be impossible for machines to ever comprehend, think again. Facebook says it is trying to make Graph Search useful even for incredibly vague, poorly worded queries using some incredibly clever parsing:

Our grammar only covers a small subspace of what a user can potentially search for. There are queries that cannot be precisely answered by our system at this time but can be approximated by certain forms generated from the grammar. For example, “all my friends photos” -> My friends’ photos… In order for our grammar to focus on the most important parts of what a user types, the team built a list of words that can be optionalized in certain contexts: “all” can be ignored when it appears before a head noun as in “all photos”, but shouldn’t be ignored in other contexts such as “friends of all” (which could be autocompleted to “friends of Allen” and thus shouldn’t be optionalized).
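A toy Python sketch of that optionalization rule (nothing like Facebook's actual parser, just the idea): drop a word like "all" only in contexts where the grammar says it adds nothing.

```python
# Context-sensitive optionalization: "all" is dropped before a head noun
# ("all photos" -> "photos") but kept elsewhere, since "friends of all"
# might instead autocomplete to "friends of Allen".
OPTIONAL_WORDS = {"all"}
HEAD_NOUNS = {"photos", "friends", "videos"}

def optionalize(query):
    tokens = query.split()
    kept = []
    for i, tok in enumerate(tokens):
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if tok in OPTIONAL_WORDS and nxt in HEAD_NOUNS:
            continue  # safe to ignore in this context
        kept.append(tok)
    return " ".join(kept)

print(optionalize("all photos"))      # -> "photos"
print(optionalize("friends of all"))  # -> "friends of all" (kept)
```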

Facebook is building on some pre-existing technologies here, to help handle some of these unpredictable queries using abstract associations:

Our team used WordNet to extract related word forms to let users search for people with similar interests in very simple queries “surfers in los angeles” or “quilters nearby" … In Graph Search, we use a variation of the N-shortest path algorithm, an extension of Dijkstra’s algorithm, to solve the problem of finding the top K best parse trees. Our biggest challenge was to find several heuristics that allow us to speed up the computation of the top K grammar suggestions, thereby providing a real-time experience to our users.
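WordNet itself is freely available, so the flavor of that synonym expansion is easy to reproduce. A small sketch using NLTK's WordNet corpus, with the post's "surfers" query as the example word:

```python
# Requires: pip install nltk, then a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def related_forms(word):
    """Collect the lemma names WordNet links to a query word."""
    forms = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            forms.add(lemma.name().replace("_", " "))
    return sorted(forms)

# e.g., expanding the head word of the "surfers in los angeles" query
print(related_forms("surfer"))
```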

In fact, Facebook has even taught its system to understand and ignore oxymorons:

A naïve, context-free grammar would allow the production of a wide range of sentences, some of which can be syntactically correct but not semantically meaningful. For example:
“Non-friends who are my friends”
“Females who live in San Francisco and are males”
Both sentences would return empty sets of results because they each carry contradictory semantics. It is therefore critical for our parser to be able to understand semantics with opposite meanings in order to return plausible suggestions to users.
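The underlying check is easy to caricature: once a query has been parsed into attribute constraints, a contradiction is just the same attribute bound to two incompatible values. A toy Python sketch, emphatically not Facebook's implementation:

```python
def is_contradictory(predicates):
    """predicates: (attribute, value) pairs parsed from a query."""
    seen = {}
    for attr, val in predicates:
        if attr in seen and seen[attr] != val:
            return True  # e.g., gender=female AND gender=male
        seen[attr] = val
    return False

# "Females who live in San Francisco and are males" -> contradictory
print(is_contradictory([("gender", "female"),
                        ("city", "San Francisco"),
                        ("gender", "male")]))  # True
```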


Today, anyone can report new information and watch it spread through their social network, sometimes reaching millions, without the intervention of a professional journalist. In the days before Twitter, disseminating breaking news took time. To reach a meaningful audience, information had to pass through a series of reporters and editors en route to a television screen or page in a newspaper.

This example of technology replacing a classic journalistic function is a direct threat to the livelihood of many news organizations that once made their names by being the first to every story. To keep up, outlets like CNN and the Associated Press have joined the Twitter rat race and adjusted editorial processes in other mediums to move more quickly. But in a world where everyone knows about breaking news almost instantaneously, the rush to be first can cause more problems for news organizations than it solves.

Take, for example, the Boston Marathon bombings. After the explosions, information spread like wildfire. Some of it was false, as is always the case in a breaking news situation, but instead of acting with restraint, many news organizations rushed to report falsehoods and speculations as fact. For example, CNN reporting that suspects had been arrested on Wednesday night, or the New York Post clinging to a rumor that 12 people died in the explosions, or everybody and their mother reporting that the government had cut off cell phone service in Boston to prevent further remote detonations.

Ironically, we’re talking about this issue because despite the democratization of news, journalists still hold real power. Twitter accounts like @AP and @CNNBRK have millions of followers who trust them to report accurately. If you need evidence of that, look no further than when a hacker broke into the AP’s Twitter account and sent a fake tweet that caused automated stock traders to momentarily send the Dow Jones Industrial Average tumbling 143 points.

Circa editor-in-chief Dave Cohn called it an inevitable product of newsrooms trying to be first:

We have the unstoppable force of news organizations that want to be first and want as much attention as possible, especially in times of breaking news. On the other hand we also have the immovable object of technology platforms like Twitter that will inevitably be where news breaks and where people flock to get information, especially when there is breaking news.

I like this description because it correctly summarizes the conflict between two fields converging, but I wonder if news organizations need to be unstoppable. In the wake of both events, several future of news pundits called for journalists to stop trying to be first and instead find other ways to compete with social networks.

Here’s CUNY Professor Jeff Jarvis on an alternative model:

The key skill of journalism today is saying what we *don’t* know, issuing caveats and also inviting the public to tell us what they know
...
If I ran a news organization, I would start a regular feature called, Here’s what you should know about what you’re hearing elsewhere.

Similarly, Mike Ananny wrote about silence and timing for the Nieman Lab:

The Internet makes it possible for people other than traditional journalists to express themselves, quickly, to potentially large audiences. But the ideal press should be about more than this. It should be about demonstrating robust answers to two inseparable questions: Why do you need to know something now? And why do you need to say something now?

Both of these authors make the same basic argument: There are ways to add value to news consumers without sacrificing accuracy by always trying to be first. This idea is compelling, but for news organizations worrying about disappearing bottom lines, Dan Gillmor’s argument in the Guardian ought to resonate more:

Information providers forfeit some trust every time they make mistakes. That eventually, one hopes, affects the bottom line, or in a social context, the confidence of one's friends and peers.

In an environment where trust is just about all they have left, perhaps publishers would be wise to rethink whether speed or accuracy is more valuable to the consumer.


The decline in value isn't limited to consumers. Newspaper and magazine publishers used to serve a vital function for advertisers as well. Before the Internet, print publications were one of the few places advertisers, especially local ones, could reach large, wealthy audiences. This gave publishers total control over pricing, making print advertising an extremely profitable business.

Today, no single website has a monopoly on any audience. If advertisers want to reach New York Times readers, they can do so on a variety of similar websites just as easily as they can on the Grey Lady’s site. Not only that, but advertisers now know more about these audiences than ever before. They can tell exactly how many users see ads, whether they click on or engage with them, and can even target individual users based on past interactions. As a result, control over pricing has been flipped on its head. If your prices are too expensive, an advertiser can go elsewhere and reach the exact same people.

How are publishing companies coping? They’re rethinking the value they offer to advertisers.

One approach is to sell so-called “native advertising,” which publications like The Atlantic and Buzzfeed are pursuing with a decent amount of success. The thinking goes something like this:

“The real value we offer isn't the audience on its own, it's our ability to produce content that our audience wants to read. We can sell that expertise to advertisers.”

When this tactic works, the result is content that consumers want to read, share, and interact with at much higher rates than traditional banner ads. This means organizations can charge more and, because the content is tailored specifically to the site it runs on, publishers regain some of the audience exclusivity they could claim in print.

When it backfires, however, the result can be disastrous for the organization’s credibility. Such was the case when The Atlantic published an advertisement celebrating the Church of Scientology that felt completely at odds with the publication’s core value of intellectual integrity.


Thanks to the Internet, most of the wealth of human knowledge is instantly and freely available. That’s tough luck for businesses that used to make money by charging for access to information, but it’s not stopping some publishers from taking a gamble that their stuff is still worth paying for.

High-profile publishers like the New York Times have put up paywalls that require readers to pay for unlimited access to their websites, with some success. The Times announced that while ad revenue decreased by 11.2 percent in the first quarter of 2013, subscription revenues increased 6.5 percent.

The tactic isn’t a panacea, however. USA Today publisher Larry Kramer infamously told the world his newspaper’s content “wasn’t unique enough” to charge customers for. He’s probably right. It shouldn’t come as a surprise, but research by University of Missouri journalism dean Esther Thorson has found a direct link between newsroom investment and paywall success:

Input into the newsroom in dollars had far and away the greatest impact on all sources of revenues -- both advertising and circulation.

This leads me to a most obvious takeaway: If you want readers to value your content enough to pay for it, you should probably value it at least that much, too.


Stay Tuned For More Updates

[Reading a Newspaper: CREATISTA via Shutterstock]

This Startup Is Turning Farmers’ Weather Intuition Into A Big (Data) Business


Meteorologist Steven Bennett used to predict the weather for hedge funds. Now his startup EarthRisk forecasts extreme cold and heat events up to four weeks ahead--much further out than traditional forecasts--for energy companies and utilities. The company has compiled 30 years of temperature data for nine regions and discovered patterns that predict extreme heat or cold. If the temperature falls in the hottest or coldest 25% of the historical temperature distribution, EarthRisk defines this as an extreme event, and the company's energy customers can trade power or set their pricing based on its predictions. The company's next challenge is researching how to extend hurricane forecasts from the current 48 hours to up to 10 days.
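
To make that definition concrete, here's a minimal sketch in Python--my own illustration, not EarthRisk's code--of flagging temperatures against the hottest and coldest quartiles of a historical record:

```python
# Illustrative sketch: define "extreme" temperature events as the
# hottest and coldest 25% of a historical distribution.
import numpy as np

# Hypothetical 30 years of daily mean temperatures for one region (Celsius)
rng = np.random.default_rng(0)
temps = rng.normal(loc=12.0, scale=9.0, size=30 * 365)

cold_threshold = np.percentile(temps, 25)  # coldest quartile boundary
hot_threshold = np.percentile(temps, 75)   # hottest quartile boundary

def is_extreme(temp):
    """True if a temperature falls in the hottest or coldest 25%."""
    return temp <= cold_threshold or temp >= hot_threshold

print(cold_threshold, hot_threshold, is_extreme(30.0))
```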

How is your approach to weather forecasting different from traditional techniques?

Meteorology has traditionally been pursued along two lines. One line has a modeling focus and has been pursued by large government or quasi-government agencies. It puts the Earth into a computer-based simulation and that simulation predicts the weather. That pursuit has been ongoing since the 1950s. It requires supercomputers, it requires a lot of resources (The National Oceanic and Atmospheric Administration in the U.S. has spent billions of dollars on its simulation) and a tremendous amount of data to input to the model. The second line of forecasting is the observational approach. Farmers were predicting the weather long before there were professional meteorologists and the way they did it was through observation. They would observe that if the wind blew from a particular direction, it would be fair weather for several days. We take the observational approach, the database which was in the farmer's head, but we quantify all the observations strictly in a statistical computer model rather than a dynamic model of the type the government uses. We quantify, we catalog, and we build statistical models around these observations. We have created a catalog of thousands of weather patterns which have been observed since the 1940s and how those patterns tend to link to extreme weather events one to four weeks after the pattern is observed.

Which approach is more accurate?

The model-based approach will result in a more accurate forecast but because of chaos in the system it breaks down 1-2 weeks into the forecast. For a computer simulation to be perfect we would need to observe every air parcel on the Earth to use as input to the model. In fact, there are huge swathes of the planet, e.g., over the Pacific Ocean, where we don't have any weather observations at all except from satellites. So in the long range our forecasts are more accurate, but not in the short range.

What data analysis techniques do you use?

We are using machine learning to link weather patterns together--to say that when these kinds of weather patterns occur historically, they lead to these sorts of events. Our operational system uses a genetic algorithm for combining the patterns in a simple way and determining which patterns are the most important. We use Naive Bayes to make the forecast. We forecast, for example, that there is a 60% chance that there will be an extreme cold event in the northwestern United States three weeks from today. If the temperature is a quarter of a degree less than that cold event threshold, then it's not a hit. We are in the process of researching a neural network, which we believe will give us a richer set of outputs. With the neural network we believe that instead of giving the percentage chance of crossing some threshold, we will be able to develop a full distribution of temperature output, e.g., that it will be 1 degree colder than normal.
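
Here's a rough idea of what a Naive Bayes forecast along the lines Bennett describes could look like. This is a toy sketch with simulated data--the pattern catalog, features, and probabilities are all invented for illustration:

```python
# Toy sketch: Naive Bayes turning observed weather-pattern indicators
# into a probability of an extreme cold event weeks later.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(42)

# Rows = historical days; columns = binary flags for cataloged patterns
# (e.g., "blocking high observed over Greenland"). Purely hypothetical.
patterns = rng.integers(0, 2, size=(5000, 20))

# Label: did an extreme cold event occur three weeks later? Simulated
# so that a couple of patterns genuinely raise the odds.
logit = patterns[:, 0] + patterns[:, 3] - 1.5
event = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)

model = BernoulliNB().fit(patterns, event)

today = rng.integers(0, 2, size=(1, 20))  # today's observed patterns
prob = model.predict_proba(today)[0, 1]
print(f"Chance of an extreme cold event in 3 weeks: {prob:.0%}")
```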

How do you run these simulations?

We update the data every day. We have a MATLAB-based modeling infrastructure. When we do our heavy processing, we use hundreds of cores in the Amazon cloud. We do those big runs a couple of dozen times a year.

How do you measure forecast accuracy?

Since we forecast for extreme events, we use a few different metrics. If we forecast an extreme event and it occurs, then that's a hit. If we forecast an extreme event and it does not occur, that's a false alarm. Those can be misleading. If I have made one forecast and that forecast was correct, then I have a hit rate of 100% and a false alarm rate of 0%. But if there were 100 events and I only forecasted one of them and missed the other 99, that's not useful. The detection rate is the number of events that occur which we forecast. We try to get a high hit rate and detection rate, but in a long-range forecast detection rate is very, very difficult. Our detection rate tends to be around 30% in a three-week forecast. Our hit rate stays roughly the same at one week, two weeks, and three weeks. In traditional weather forecasting the accuracy gets dramatically lower the further out you get.
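
These metrics are easy to express in code. The toy functions below--my own illustration, not EarthRisk's verification suite--show why a perfect hit rate can coexist with a uselessly low detection rate:

```python
# Toy versions of the forecast-verification metrics described above.
# forecasts and events are parallel lists of booleans: "we forecast an
# extreme event" and "an extreme event occurred".
def hit_rate(forecasts, events):
    """Of the events we forecast, how many actually happened?"""
    forecast_count = sum(forecasts)
    hits = sum(f and e for f, e in zip(forecasts, events))
    return hits / forecast_count if forecast_count else 0.0

def false_alarm_rate(forecasts, events):
    """Of the events we forecast, how many never happened?"""
    forecast_count = sum(forecasts)
    misses = sum(f and not e for f, e in zip(forecasts, events))
    return misses / forecast_count if forecast_count else 0.0

def detection_rate(forecasts, events):
    """Of the events that occurred, how many did we forecast?"""
    event_count = sum(events)
    hits = sum(f and e for f, e in zip(forecasts, events))
    return hits / event_count if event_count else 0.0

# One correct forecast out of 100 events: perfect hit rate,
# terrible detection rate -- exactly the trap described above.
forecasts = [True] + [False] * 99
events = [True] * 100
print(hit_rate(forecasts, events))        # 1.0
print(detection_rate(forecasts, events))  # 0.01
```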

Why do you want to forecast hurricanes further ahead?

The primary application for longer lead forecasts for hurricane landfall would be in the business community rather than for public safety. For public safety you need to make sure that you give people enough time to evacuate but also have the most accurate forecast. That lead time is typically two to three days right now. If people evacuate and the storm does not do damage in that area, or never hits that area, people won't listen the next time the forecast is issued. Businesses understand probability so you can present a risk assessment to a corporation which has a large footprint in a particular geography. They may have to change their operations significantly in advance of a hurricane so if it's even 30% or 40% probability then they need extra lead time.

What data can you look at to provide an advance forecast?

We are investigating whether building a catalog of synoptic (large scale) weather patterns like the North Atlantic oscillation will work for predicting hurricanes, especially hurricane tracks--so where a hurricane will move. We have quantified hundreds of weather patterns which are of the same amplitude, hundreds of miles across. For heat and cold risks we develop an index of extreme temperature. For hurricanes the primary input is an index of historic hurricane activity rather than temperature. Then you would use Machine Learning to link the weather patterns to the hurricane activity. All of this is a hypothesis right now. It's not tested yet.

What’s the next problem you want to tackle?

We worked with a consortium of energy companies to develop this product. It was specifically developed for their use. Right now the problems we are trying to solve are weather related, but that's not where we see ourselves in two or five years. The weather data we have is only an input to a much bigger business problem, and that problem will vary from industry to industry. What we are really interested in is helping our customers solve their business problems. In New York City there's a juice bar called Jamba Juice. Jamba Juice knows that if the temperature gets higher than 95 degrees on a summer afternoon, they need extra staff since more people will buy smoothies. They have quantified the staff increase required (but they schedule their staff one week in advance and they only get a forecast one day in advance). They use a software package with weather as an input. We believe that many businesses are right on the cusp of implementing that kind of intelligence. That's where we expect our business to grow.


Keep reading: Who's Afraid of Data Science? What Your Company Should Know About Data

This story tracks the cult of Big Data: The hype and the reality. It’s everything you ever wanted to know about data science but were afraid to ask. Read on to learn why we’re covering this story, or skip ahead to read previous updates.

Take lots of data and analyze it: That’s what data scientists do and it’s yielding all sorts of conclusions that weren’t previously attainable. We can discover how our cities are run, disasters are tackled, workers are hired, crimes are committed, or even how Cupid's arrows find their mark. Conclusions derived from data are affecting our lives and are likely to shape much of the future.


Previous Updates

A roomful of confused-looking journalists is trying to visualize a Twitter network. Their teacher is School of Data “data wrangler” Michael Bauer, whose organization teaches journalists and non-profits basic data skills. At the recent International Journalism Festival, Bauer showed journalists how to analyze Twitter networks using OpenRefine, Gephi, and the Twitter API.

Bauer's route into teaching hacks how to hack data was a circuitous one. He studied medicine and did postdoctoral research on the cardiovascular system, where he discovered his flair for data. Disillusioned with health care, Bauer dropped out to become an activist and hacker and eventually found his way to the School of Data. I asked him about the potential and pitfalls of data analysis for everyone.

Why do you teach data analysis skills to “amateurs”?

We often talk about how the digitization of society allows us to increase participation, but actually it creates new kinds of elites who are able to participate. It opens up the existing elites so you don't have to be an expensive lobbyist or be born in the right family to be involved, but you have to be part of this digital elite which has access to these tools and knows how to use them effectively. It's the same thing with data. If you want to use data effectively to communicate stories or issues, you need to understand the tools. How can we help amateurs to use these tools? Because these are powerful tools.

If you teach basic data skills, is there a danger that people will use them naively?

There is a sort of professional elitism which raises the fear that people might misuse the information. You see this very often if you talk to national bureaus of statistics, for example, who say “We don't give out our data since it might be misused.” When the Open Data movement started in the U.K. there was a clause in the agreement to use government data which said that you were not allowed to do anything with it which might criticize the government. When we train people to work with data, we also have to train them how to draw the right conclusions, how to integrate the results. To turn data into information you have to put it into context. So we break it down to the simplest level. What does it mean when you talk about the mean? What does it mean if you talk about average income? Or does it even make sense to talk about the average in this context?
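
A quick illustration of why "average income" needs that context: a single outlier can drag the mean far from what a typical person earns. The numbers below are invented:

```python
# Why the "average income" can mislead: one outlier drags the mean far
# above what a typical person earns.
import statistics

incomes = [22_000, 25_000, 28_000, 31_000, 35_000, 2_000_000]

print(statistics.mean(incomes))    # ~356,833 -- the "average income"
print(statistics.median(incomes))  # 29,500 -- closer to a typical earner
```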

Are there common pitfalls you teach people to avoid?

We frequently talk about correlation-causation. We have this problem in scientific disciplines as well. In Freakonomics, economist Steven D. Levitt talks about how crime rates go down when more police are employed, but what people didn't look at was that this all happened in times of economic growth. We see this in medical science too. There was this idea that because women have estrogen they are protected from heart attacks so you should give estrogen to women after menopause. This was all based on retrospective correlation studies. In the 1990s someone finally did a placebo controlled randomized trial and they discovered that hormone replacement therapy doesn't help at all. In fact it harms the people receiving it by increasing the risk of heart attacks.

How do you avoid this pitfall?

If you don't know and understand the assumptions that your experiment is making, you may end up with something completely wrong. If you leave certain factors out of your model and look at one specific thing, that's the only specific thing you can say something about. There was this wonderful example that came out about how wives of rich men have more orgasms. A university in China got hold of the data for its statistics class and found that the original analysis hadn't used the women's education as a parameter. It turns out that women who are more educated have more orgasms. It had nothing to do with the men.
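
You can watch a confounder manufacture a correlation in a few lines of simulation. In this sketch--invented data, in the spirit of the example above--a hidden variable drives both measurements, and the apparent link vanishes once you control for it:

```python
# A simulated confounder: X (partner's wealth) and Y (the outcome) are
# both driven by a hidden variable Z (education), so X and Y correlate
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
education = rng.normal(size=n)           # hidden driver Z
wealth = education + rng.normal(size=n)  # X depends on Z
outcome = education + rng.normal(size=n) # Y depends on Z, not on X

print(np.corrcoef(wealth, outcome)[0, 1])  # ~0.5: looks like a real link

# "Control" for education by examining a narrow slice of Z:
mask = np.abs(education) < 0.1
print(np.corrcoef(wealth[mask], outcome[mask])[0, 1])  # ~0: link vanishes
```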

What are the limitations of using a single form of data?

That's one of the dangers of looking at Twitter data. This is the danger of saying that Twitter is democratizing because everyone has a voice--but not everyone has a voice. Only a small percentage of the population uses the service, and a far smaller proportion is doing most of the talking. A lot of them are just reading or retweeting. So we only see a tiny snapshot of what is going on. You don't get a representative population. You get a skew in your population. There was an interesting study on Twitter and politics in Austria which showed that a lot of the people on there are professionals and they are there to engage. So it's not a political forum. It's a medium for politicians and people who are around politics to talk about what they are doing.

Any final advice?

Integrate multiple data sources, check your facts, and understand your assumptions.


Charts can help us understand the aggregate, but they can also be deeply misleading. Here's how to stop lying with charts without even knowing it. While it's counterintuitive, charts can actually obscure our understanding of data--a trick Steve Jobs exploited on stage at least once. Of course, you don't have to be a cunning CEO to misuse charts; in fact, if you have ever used one at all, you probably did so incorrectly, according to visualization architect and interactive news developer Gregor Aisch. Aisch gave a series of workshops at the International Journalism Festival in Italy, which I attended last weekend, including one on basic data visualization guidelines.

“I would distinguish between misuse by accident and on purpose,” Aisch says. “Misuse on purpose is rare. In the famous 2008 Apple keynote, Steve Jobs showed the market share of different smartphone vendors in a 3-D pie chart. The Apple slice of the smartphone market, which was one of the smallest, was in front so it appeared bigger.”

Aisch explained in his presentation that 3-D pie charts should be avoided at all costs, since the perspective distorts the data. What is displayed in front is perceived as more important than what is shown in the background. That 19.5% of market share belonging to Apple takes up 31.5% of the entire area of the pie chart, and the angles are also distorted. The data looks completely different when presented in a different order.

In fact, the humble pie chart turns out to be an unexpected minefield:

“Use pie charts with care, and only to show part of whole relationships. Two is the ideal number of slices, but never show more than five. Don’t use pie charts if you want to compare values. Use bar charts instead.”

For example, Aisch advises that you don’t use pie charts to compare sales from different years, but do use them to show sales per product line in the current year. You should also ensure that you don't leave out data on part of the whole:

“Use line charts to show time series data. That’s simply the best way to show how a variable changes over time. Avoid stacked area charts; they are easily misinterpreted.”

The “I hate stacked area charts” post cited in Aisch’s talk explains why:

“Orange started out dominating the market, but Blue expanded rapidly and took over. To the unwary, it looks like Green lost a bit of market share. Not nearly as much as Orange, of course, but the green swath certainly gets thinner as we move to the right end of the chart.”

In fact, the underlying data shows that Green’s market share has been increasing, not decreasing. The chart plots market share vertically, but human beings perceive the thickness of a stream at right angles to its general direction.
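
The effect is easy to reproduce. The sketch below uses invented market-share numbers in the spirit of the post quoted above: the same steadily growing "Green" series appears to thin out in a stacked area chart, but is unambiguous in the line chart Aisch recommends for time series:

```python
# Synthetic market-share data: Green rises steadily, yet its band in a
# stacked area chart seems to thin because the eye judges thickness
# perpendicular to the band's slope, not vertically.
import matplotlib.pyplot as plt

years = list(range(2000, 2011))
orange = [60, 55, 48, 40, 33, 27, 22, 18, 15, 13, 11]
blue = [25, 29, 35, 42, 48, 53, 57, 60, 62, 63, 64]
green = [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]  # always growing

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.stackplot(years, orange, blue, green, labels=["Orange", "Blue", "Green"])
ax1.set_title("Stacked area: Green seems to shrink")
ax2.plot(years, green, label="Green")  # same data, unambiguous trend
ax2.set_title("Line chart: Green is clearly growing")
ax1.legend()
ax2.legend()
plt.show()
```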

Technology companies aren't the only offenders in chart misuse. “Half of the examples in the presentation are from news organizations. Fox News is famous for this,” Aisch explains. “The emergence of interactive online maps has made map misuse very popular, e.g., election results in the United States, where geographically huge states with small populations are marked red and dominate the map. If you build a map with Google Maps there isn't really a way to get around this problem. But other tools aren't there yet in terms of user interface, and you need special skills to use them.”

Google Maps also uses the Mercator projection, a method of projecting the sphere of the Earth onto a flat surface, which distorts the size of areas closer to the polar regions so, for example, Greenland looks as large as Africa.

The solution to these problems, according to Aisch, is to build visualization best practices directly into the tool, as he does in his own open source visualization tool Datawrapper. “In Datawrapper we set meaningful defaults but also allow you to switch between different rule systems. There's an example for labeling a line chart: There is some advice that Edward Tufte gave in one of his books and different advice from Dona Wong, so you can switch between them. We also look at the data, so if you visualize a data set which has many rows, then the line chart will display in a different way than if there were just three rows.”


The rush to "simplify" big data is the source of a lot of reductive thinking about its utility. Data science practitioners have recently been lamenting how the data gold rush is leading to naive practitioners deriving misleading or even downright dangerous conclusions from data.

The Register recently mentioned two trends that may reduce the role of the professional data scientist before the hype has even reached its peak. The first is the embedding of Big Data tech in applications. The other is increased training for existing employees who can benefit from data tools.

"Organizations already have people who know their own data better than mystical data scientists. Learning Hadoop is easier than learning the company’s business."

This trend has already taken hold in data visualization, where tools like infogr.am are making it easy for anyone to make a decent-looking infographic from a small data set. But this is exactly the type of thing that has some data scientists worried. Cathy O'Neil (aka MathBabe) has the following to say in a recent post:

"It’s tempting to bypass professional data scientists altogether and try to replace them with software. I’m here to say, it’s not clear that’s possible. Even the simplest algorithm, like k-Nearest Neighbor (k-NN), can be naively misused by someone who doesn’t understand it well."

K-nearest neighbors is a method for classifying objects--say, visitors to your website--by measuring how similar they are to other objects based on their attributes. A new visitor is assigned a class, e.g., "high spenders," based on the class of its k nearest neighbors: the previous visitors most similar to it. But while the algorithm is simple, selecting the correct settings, knowing that you need to scale feature values, and verifying that you don't have many redundant features may be less obvious.
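
Here's a small sketch of that trap using scikit-learn, with synthetic visitor data (the feature names and numbers are invented): without scaling, the feature with the biggest units silently dominates the distance calculation.

```python
# Naive k-NN vs. k-NN with feature scaling. The session-length feature
# has a scale ~30,000x larger than pages viewed, so unscaled distances
# are effectively measured on session length alone.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 1000
pages_viewed = rng.normal(10, 3, n)           # informative, small scale
session_ms = rng.normal(300_000, 100_000, n)  # uninformative, huge scale
X = np.column_stack([pages_viewed, session_ms])
y = (pages_viewed > 10).astype(int)           # "high spender" label

naive = KNeighborsClassifier(n_neighbors=5)
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

print(cross_val_score(naive, X, y).mean())   # ~coin flip: noise dominates
print(cross_val_score(scaled, X, y).mean())  # much better after scaling
```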

You would not necessarily think about this problem if you were just pressing a big button on a dashboard called “k-NN me!”


Here are four problems that typically arise from a lack of scientific rigor in data projects. Anthony Chong, head of optimization at Adaptly, warns us to look out for "science" with no scientific integrity.

Through phony measurement and poor understandings of statistics, we risk creating an industry defined by dubious conclusions and myriad false alarms.... What distinguishes science from conjecture is the scientific method that accompanies it.

Given the extent to which conclusions derived from data will shape our future lives, this is an important issue. Chong gives us four problems that typically arise from a lack of scientific rigor in data projects, but are rarely acknowledged.

  1. Results not transferable
  2. Experiments not repeatable
  3. Not inferring causation: Chong insists that the only way to infer causation is randomized testing (see the sketch after this list). It can't be done from observational data or by using machine learning tools, which predict correlations with no causal structure.
  4. Poor and statistically insignificant recommendations.
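
As a sketch of what points 3 and 4 demand in practice, here's a minimal randomized A/B comparison with a significance check. The counts are invented, and a chi-squared test is just one reasonable choice:

```python
# Minimal randomized-test readout: because users were randomly assigned,
# the only systematic difference between groups is the change itself,
# so a significant lift can be read causally.
from scipy.stats import chi2_contingency

control = {"converted": 480, "not": 9520}    # old version
treatment = {"converted": 560, "not": 9440}  # new version

table = [
    [control["converted"], control["not"]],
    [treatment["converted"], treatment["not"]],
]
chi2, p_value, _, _ = chi2_contingency(table)

# Only call the lift causal if randomization was real AND p is small;
# otherwise you're making a statistically insignificant recommendation.
print(f"p-value: {p_value:.4f}")
```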

Even when properly rigorous, analysis often leads to nothing at all. From Jim Manzi's 2012 book, Uncontrolled: The Surprising Payoff of Trial-and-Error for Business:

"Google ran approximately 12,000 randomized experiments in 2009, with [only] about 10 percent of these leading to business changes.”


Understanding data isn't about your academic abilities--it's about experience. Beau Cronin has some words of encouragement for engineers who specialize in storage and machine learning. Despite all the backend-as-a-service companies sprouting up, it seems there will always be a place for someone who truly understands the underlying architecture. Via his post at O'Reilly Radar:

I find the database analogy useful here: Developers with only a foggy notion of database implementation routinely benefit from the expertise of the programmers who do understand these systems—i.e., the “professionals.” How? Well, decades of experience—and lots of trial and error—have yielded good abstractions in this area.... For ML (machine learning) to have a similarly broad impact, I think the tools need to follow a similar path.


Want to climb the mountain? Start learning about data science here. If you know next to nothing about Big Data tools, HP's Dr. Satwant Kaur's top 10 Big Data technologies is a good place to start. It contains short descriptions of Big Data infrastructure basics, from databases to machine learning tools.

This slide show explains one of the most common technologies in the Big Data world, MapReduce, using fruit, while Emcien CEO Radhika Subramanian tells you why not every problem is suitable for its most popular implementation, Hadoop.

"Rather than break the data into pieces and store-n-query, organizations need the ability to detect patterns and gain insights from their data. Hadoop destroys the naturally occurring patterns and connections because its functionality is based on breaking up data. The problem is that most organizations don’t know that their data can be represented as a graph nor the possibilities that come with leveraging connections within the data."

Efraim Moscovich's Big Data for conventional programmers goes into much more detail on many of the top 10, including code snippets and pros and cons. He also gives a nice summary of the Big Data problem from a developer's point of view.

We have lots of resources (thousands of cheap PCs), but they are very hard to utilize.
We have clusters with more than 10k cores, but it is hard to program 10k concurrent threads.
We have thousands of storage devices, but some may break daily.
We have petabytes of storage, but deployment and management is a big headache.
We have petabytes of data, but analyzing it is difficult.
We have a lot of programming skills, but using them for Big Data processing is not simple.

Infochimps has also created a nice overview of data tools (which features in TechCrunch's top five open-source projects article) and what they are used for.

Finally, GigaOm's programmer's guide to Big Data tools covers an almost entirely different set of tools, weighted towards application analytics and abstraction APIs for data infrastructure like Hadoop.


We're updating this story as news rolls in. Check back soon for more updates.


[Image: Flickr user Mahalie Stackpole]

Google Translate's Gender Problem (And Bing Translate's, And Systran's...)


Google Translate is the world's most popular web translation platform, but one Stanford University researcher says it doesn't really understand sex and gender. Londa Schiebinger, who runs Stanford's Gendered Innovations project, says Google's choice of source databases causes a statistical bias toward male nouns and verbs in translation. In a paper on gender and natural language processing, Schiebinger offers convincing evidence that the source texts used with Google's translation algorithms lead to unintentional sexism.

Machine Translation And Gender

In a peer-reviewed case study published in 2013, Schiebinger illustrated that Google Translate has a tendency to turn gender-neutral English words (such as the, or occupational names such as professor and doctor) into the male form in other languages once the word is translated. However, certain gender-neutral English words are translated into the female form . . . but only when they comply with certain gender stereotypes. For instance, the gender-neutral English terms a defendant and a nurse translate into German as ein Angeklagter and eine Krankenschwester. Defendant translates as male, but nurse auto-translates as female.

Where Google Translate really trips up, Schiebinger claims, is in the lack of context for gender-neutral words in other languages when translated into English. Schiebinger ran an article about her work from the Spanish-language newspaper El Pais through Google Translate and rival platform Systran. Both translated the gender-neutral Spanish words “suyo” and “dice” as “his” and “he said,” despite the fact that Schiebinger is female.

These sorts of words bring up specific issues in Bing Translate, Google Translate, Systran, and other popular machine translation platforms. Google engineers working on Translate told Co.Labs that translation of all words, including gendered ones, is primarily weighed by statistical patterns in translated document pairs found online. Because “dice” can translate as either “he said” or “she said,” Translate's algorithms look at combinations of “dice” in conjunction with neighboring words to see what the most frequent translations of those combinations are. If “dice” renders more often in the translations Google obtains as “he says,” then Translate will usually render it male rather than female. In addition, Google Translate's team added that their platform only uses individual sentences for context. Gendered nouns or verbs in neighboring sentences aren't weighed in terms of establishing context.
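
In spirit, the mechanism the Google engineers describe reduces to counting. The sketch below is a deliberately crude illustration--the contexts, translations, and counts are all invented--but it shows how the statistically dominant gender wins:

```python
# Toy model of frequency-based translation choice: pick the rendering of
# "dice" whose surrounding word combination appears most often in
# previously translated text.
from collections import Counter

# Hypothetical counts of (context, translation) pairs mined from
# translated document pairs found online
pair_counts = Counter({
    ("el profesor dice", "the professor says (he)"): 1200,
    ("el profesor dice", "the professor says (she)"): 150,
    ("la enfermera dice", "the nurse says (she)"): 900,
    ("la enfermera dice", "the nurse says (he)"): 60,
})

def translate(context):
    """Return the most frequent translation seen for this context."""
    candidates = {t: c for (ctx, t), c in pair_counts.items() if ctx == context}
    return max(candidates, key=candidates.get)

print(translate("el profesor dice"))  # the statistically dominant form wins
```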

Source Material, Cultural Context, And Gender

Schiebinger told Co.Labs that the project evolved out of a paper written by a student who was working on natural language-processing issues. In July 2012, Stanford University hosted a workshop with outside researchers, and that work was turned, after peer review, into the machine translation paper.

Google Translate, which faces the near-impossible goal of accurately translating the world's languages in real time, has faced gender issues for years. To Google's credit, Mountain View regularly tweaks Google Translate's algorithms to fix translation inaccuracies. Language translation algorithms are infamously tricky. Engineers at Google, Bing, Systran, and other firms don't only have to take grammar into account--they have to take into account context, subtext, implied meanings, cultural quirks, and a million other subjective factors . . . and then turn them into code.

But, nonetheless, those inaccuracies exist--especially for gender. In one instance last year, users discovered that translating “Men are men, and men should clean the kitchen” into German became “Männer sind Männer, und Frauen sollten die Küche sauber”--which means “Men are men, and women should clean the kitchen.” Another German-language Google Translate user found job bias in multiple languages--the gender-neutral English-language terms French teacher, nursery teacher, and cooking teacher all showed up in Google Translate's French and German editions in the feminine form, while engineer, doctor, journalist, and president were translated into the male form.

Nataly Kelly, author of Found In Translation: How Languages Shape Our Lives And Transform The World, whose firm offers language-technology products, told Co.Labs that a male bias in machine translating is extremely common. “If you're using a statistical approach to produce the translation, the system will mine all past translations and will serve up the most likely candidate for a "correct" translation based on frequency. Given that male pronouns have been over-represented throughout history in most languages and cultures, machine translation tends to reflect this historical gender bias,” Kelly said.

“The results can be highly confusing, even inaccurate. For example, in Google Translate, if you translate engineer into Spanish, it comes out as the masculine ingeniero, but if you put in female engineer, you get ingeniero de sexo femenino, which means something like a male engineer of the feminine sex. This sounds pretty strange in Spanish, to say the least! If you type female engineer into Bing Translate, you get ingeniera, which is technically correct. But still, you have to specify female in order to produce a feminine result. You don't have to specify male engineer to get ingeniero. You only need to type in engineer. [There is] an inherent gender bias in most machine translation systems."

The Statistical Nature Of The Corpus

The reason why this happens is statistical. In every language that Google Translate operates in, algorithms process meaning, grammar, and context through a massive number of previously uploaded documents. These documents, which vary from language to language, determine how Google Translate actually works. If source material used for translations has an aggregated bias in terms of one gender being preferred over another, that will be reflected in translations received by users.

When a user on Google Groups questioned male gender bias in Hebrew translations in 2010, Google's Xi Cheng noted that “Google Translate is fully automated by machine; no one is explicitly imposing any rules; the translation is generated according to the statistical nature of the corpus we have.”

According to Schiebinger, machine translation systems such as Google Translate use two separate kinds of corpuses: a “parallel corpus” containing text in one language paired with its translation into another, used to line up words and phrases across languages, and a large monolingual corpus in the target language, used to determine grammar and word placement. If masculine or feminine forms of words are systematically favored in the corpus used, it leads the algorithm to translate in favor of that particular gender.

Machine translation ultimately depends on translators and linguists giving context to both algorithms and the source material they use. Google Translate, Bing Translate, and Systran all do a stunning job of providing instant translations in a staggering array of languages. The challenge for translation platform developers is how to further refine their product and increase accuracy--something we'll see more of in the future.

[Teacher Image: Everett Collection via Shutterstock]

This Lawyer Worked On The JOBS Act. Now He's Starting A Crowdsource Platform


It seems like you can use crowdfunding to pay for anything these days. A movie. A space capsule. A video in which a politician smokes crack. And so on. General-interest sites like Kickstarter and Indiegogo have grown consistently over the past few years, but smaller, niche funding sites are also on the rise as a way for users to more easily reach their target audiences. And financing options are expanding as the Jumpstart Our Business Startups Act, known as the JOBS Act, is incrementally implemented.

“I had an opportunity to really work up close with people in the crowdfunding industry, people in the traditional financial services industry, and a number of staffers,” says Nathan Bennett-Fleming, a lawyer who worked on the JOBS Act. After his experience on the Hill, Bennett-Fleming saw a need for a site aimed at bringing crowdfunding to groups that might not otherwise have the option. In fact, he felt the need so urgently that he cofounded a crowdfunding platform called BlackStartup, which is predicated on the very bill he worked on—an auspicious sign for the potential of the bill.

“Just to be around that innovation, I guess [that’s] what got me into the entrepreneurial mind-set in thinking about how this regulatory shift could benefit the African-American community,” Bennett-Fleming says.

Up Close With New Legislation

Securities acts from 1933 and 1934 limited who could invest in new businesses to friends and family or accredited investors--people and financial instruments with certain standing or wealth holdings. But the JOBS Act, which went partly into effect in April 2012 and will take full effect later this year, allows small businesses to raise $1 million through crowdfunding and changes laws that once required businesses to go public if they had 500 or more investors, a number too easily reached in crowdfunding to be relevant.

“Following law school, I was a fellow for the House Financial Services Committee and that’s when I started working on the crowdfunding legislation,” Bennett-Fleming says. “Having an opportunity to really work up close with the bill, make some amendments to the bill, try to think about all the elements in terms of fraud that could occur, and really put yourself in the shoes of the investor was a very big experience for me. For example, the limit on equity investments on crowdfunding is $1 million. I had a role to play in that debate. We could have set that number at $3 million. We could have set that number at $5 million. In fact, I wish we would have set that number a bit higher when I was working there.”

Bennett-Fleming and his cofounders, Olugbolahan Adewumi, Elgin Tucker, Kyle Yeldell, and Christopher Hollins, are positioning BlackStartup as a service that not only provides a crowdfunding platform, but also matches new projects with a mentor to give tailored advice on things like a reasonable target-funding amount and a campaign strategy.

“A number of the original crowdfunding sites offered some form of return on your investment until they got in trouble with their state regulatory agencies,” he says, but not everybody that needs funding for a good business idea is in the position to craft a compelling campaign on extant sites. “The big players are the one-size-fits-all crowdfunding platforms, but on a site like ours, we can provide support,” says Bennett-Fleming.

A (Really) Lean Startup Model

BlackStartup was able to go live quickly by building on a white label crowdfunding service called Launcht. By cutting out development time, BlackStartup could enter the market immediately and begin experimenting with the best way to deliver their service, rather than putting time and money into a customized platform up front.

“We actually experimented with a number of white-label platforms before settling with Launcht, but it was not very difficult to get set up,” Bennett-Fleming says. “What it shows is that it won’t be difficult for other people to enter this space. Our competitive advantage is going to be our people, our network, our community, and our ability to continue to innovate and not just stay at this white-label stage, but create a platform that’s designed specifically for our users. We wouldn’t have learned that as quickly if we hadn’t made the white-label decision.”

BlackStartup is generating interest by running business plan contests at colleges, including historically black colleges, around the country and working to partner with organizations like the Washington, D.C., Social Innovation Organization to lay groundwork with the next generation of entrepreneurs.

“In terms of people who have great ideas and may not necessarily have access to more sophisticated funding mechanisms, that target demographic is probably college students and graduate students,” Bennett-Fleming says. “When we have a critical mass of African-American innovators, we want to be able to align ourselves with that critical mass.”

Bennett-Fleming says that at its core, BlackStartup is motivated by the African-American startup gap. Research indicates that African-Americans open businesses at higher rates than non-minority groups, but they produce projects with a lower overall success rate. Data also indicate that procuring startup funding is a major obstacle for African-American-owned businesses.

“The goal is to clearly market, target, and gear ourselves to the African-American community, but by no means do we have any restrictions,” Bennett-Fleming says. “We’ll let the people self-select what projects they place on the site.”

[Image: Flickr user Scott Wilcoxson]

Why You Should Try Hacking Books


A few years ago, publishers were looking at what was happening to the music industry and saw digital in an apocalyptic light--now they’re racing to join the digital content fray. “There are still challenges and the margins are still smaller on the digital side, but publishers are figuring out that it’s not the end and they’ve embraced it much more,” said Steven Rosato, event director of the largest book trade show in North America, Book Expo America, which is happening this week in New York.

But in an industry that has hardly changed in centuries, what is there to hack?

An Industry In Need Of Rebuilding

“All the growth in the publishing industry is coming from digital products and services,” says Rosato. Other industry types have even started to see Amazon’s buyout of Goodreads in a positive light. “It has actually created more excitement,” said Joanna Stone-Herman, CEO of Librify, a digital book club startup. “The big multiple paid makes you realize that this is where great value is being created.”

With thousands of books published every year, getting books noticed in the crowded online environment is one major problem that could be tackled by software. “We like to say that digital is very good for hunters, but not very good for gatherers,” said David Steinberger, CEO of Perseus Books Group, a leading independent publisher and publishing service provider. “If you know the book you want, it’s a solved problem.” Book discovery is not confined to comparing authors or genres. There are many different factors that influence your experience of a book, such as where you are geographically (on vacation or at home), your personality, and current events. “I think that a lot of recommendation engines are pretty lame: ‘People who bought this book also bought this,’ is fine. But it doesn’t know much about you,” said Rick Joyce, Perseus’s chief marketing officer. Gift giving is another aspect of discovery that he thinks isn’t nuanced enough. It goes without saying that a gift to your mother and a gift to your best friend are completely different.

There are big data projects lying in wait, too. “The better the contextualization, the better the data you can derive in terms of how a book is written--the language and the voice,” said Jacek Grebski, cofounder of Exversion, a platform that turns any public dataset into an API, and one of the mentors at the hackathon. “If you can recommend based on that, it’s very powerful.”

“Digital is part of the publishing DNA now,” Rosato explained. “As much as any other part of publishing. At Book Expo, the digital was a separate piece, siloed in the scope of the event, but I always saw that line blurring. I felt that Book Expo has the same opportunity as South by Southwest--an investment in startup atmosphere but centered around the content of books (instead of music and film).” He just added a direct-to-consumer element to the event and is pushing to make it more accessible to the public.

What else is there for hackers in the book industry? User interface, says Stone-Herman, along with almost every other facet of this well-preserved, if antiquated, industry. “Suddenly everything, including what we define as a book, has changed, and it’s rethinking everything across the entire value chain, from how a book is created, to how a book is published, to how a book is distributed,” she says. “That creates a whole bunch of new opportunities for new tools and companies to lead the next generation of books.”

“We feel we’re at a tipping point,” Steinberger said. “This is no longer an early adopter market, it’s mainstream for readers. And it’s time to take full advantage of that change.”

Building SXSW For Books

In a packed room in midtown Manhattan, Joyce introduced the publishing hackathon. He suggested guidelines, pointed out the mentors on hand, and called up the API sponsors to present, such as the social reading platform Readmill and the leading book publisher Pearson. The hackers would have 36 hours to complete a project, which would then be judged by a panel of venture capitalists, tech experts, and book industry heads. Six finalists and awards would be chosen at the end. During his introductions, he made an important point: “There are a number of traditional assets that publishers and authors have created that work pretty well and their intent is powerful. But they’re all fair game for reinvention.”

The story of this hackathon started months ago in a coffee shop where Rosato was meeting with Stone-Herman. They talked about the changes that were happening in book publishing and how they wanted to turn Book Expo into a South by Southwest for books. The tech section of the event had been growing rapidly, but they wanted to do something dramatic to take it to the next level. They decided on a hackathon that would meld two groups, people from the tech world and those in the book industry. The finalists would compete at Book Expo.

Stone-Herman pitched the idea to Joyce and he was impressed. He partnered with her and took hold of the event-planning reins. “There is more money and inventiveness in publishing in the last year or two than in the last 20,” Joyce said. “It seemed like an opportune time. But also, I felt that the answer on discovery is not going to be figured out by industry insiders alone. We need more provocative conversation between publishers and digital minds that don’t spend all their days selling books.”

How To Hack On Books

“As far as I’m concerned, anybody who uses paper is old school, there [are] optimized ways of getting your message out there,” said Jason Saltzman, founder and CEO of the startup coworking space Alley NYC. Developers who are working in a saturated tech marketplace can easily take a side project and have groundbreaking results in a vertical like publishing, in what Saltzman calls the path of least resistance. “The hackathon introduces that vertical in a cool way. They get to meet like-minded people and solve issues around that vertical, and it opens up their eyes to a whole world of possibilities to work in.”

Libraries have historically been on the fore of linking books with data, and in the last 18 months, libraries have made some big moves. The New York Public Library was a sponsor of this hackathon, as well as an API partner. The Digital Collections API (one of the many projects NYPL Labs has been working on) allows developers to work with NYPL’s catalog data and records. “We think of books as books, and we forget that they are containers,” said David Riordan, product manager for NYPL Labs and hackathon mentor. “It’s the thought, ideas, construction, and organization. And finding ways to get into that is what librarians have always done. What does it look like if a library is a big suite of APIs? Not just books, but what about what’s in those books? Our shared cultural history is in these materials, and by finding ways to extract them, we make wholly new resources from which we can do research, tell stories, inspire ourselves, and discover our future.”

Richard Nash, a publishing entrepreneur and hackathon judge, said the hackathon really boiled down to experimentation with metadata. “The goal of this hackathon fundamentally is to take both the chops of New York City’s programmers and the data, feeds, and APIs out there in the world, to break books out of their containers and connect them up to everything that books are about.” Nash currently is the vice president of community and content for Small Demons, an online platform that lets readers navigate from book to book using the subjects mentioned in each book. “Everybody in the industry says word of mouth is the most important thing--metadata is the greatest amplifier of word of mouth.”

What The Hackathon Teams Built

At the end of the hackathon, 30 teams presented demos. There was Captiv, a recommendation app that analyzes your Twitter feed and trending topics to suggest different types of books at different times. Evoke, a website, connects you with characters based on a desired emotional experience. Book City suggests books that take place in your current location or a location you want to visit. KooBrowser is a plugin that looks for keywords in your browser and gives recommendations based on what you read.

Other projects included Literary Atlas, which sends you notifications when you pass by locations mentioned in a book; Bookluvrs, a cross between Grindr and Goodreads where you see the books people are reading near you; Book Discovery Inside, where an e-book automatically updates recommendations on the last page of the book; and Coverlist, a scrolling list of randomly generated book covers. Most of the final projects seemed well thought out and were able to pare down the congestion of the digital ecosystem.

Out of all the hacks, Evoke, Captiv, Book City, KooBrowser, and Coverlist were chosen to move on to the finals. Each one will work with mentors to refine their product before Book Expo on May 31.

Going forward, Joyce and Stone-Herman are planning to continue the hackathon as a yearly event, one that brings new ideas and innovators into a traditionally insular industry. “I think that the insular nature is going to need to change to keep up with the changes happening,” Stone-Herman said. “This can be a model for the kind of collaboration we need.”

[Image: Flickr user Horia Varlan]

Two People Doing The Same Job? It’s Not Crazy For Engineers


What would happen if you asked your boss for another person to do your job with you? Would you be laughed at? Demoted? Fired? It turns out that developers have been working together to complete single tasks for decades, using a practice called “pair programming.” The basic idea is simple: Two developers sit in front of one computer. One programmer “drives,” typing out actual code, while the other observes and guides the driver, catching mistakes, and suggesting high-level strategies for completing the task.

Although it might sound counterintuitive and costly to employ two engineers to do one thing, its proponents swear that it actually saves money and time. Michael Kebbekus, a software engineering manager for collaboration software company Mindjet who spends 80% of his time pair programming, says the practice reduces costs and increases innovation by forcing developers to think through their decisions early:

When you work alone, you only have yourself to guide you, and every idea you come up with seems like a good one. When you pair program, you have the perspective of a colleague, and every idea is just a starting point for something better. Before you start typing, you verbalize a solution, and in explaining your thoughts out loud you discover aspects of a problem you didn't even consider, and better yet, your partner does, too. A quick conversation of refinements takes you through multiple iterations of an idea and past many pitfalls that would've led to bug fixes and costly refactoring later on.

Kebbekus believes that these benefits can apply far beyond engineering to divisions like project management, marketing, and even legal. Thanks to the small, tight-knit nature of his team, he already leads a project manager who has started pairing with engineers on the product roadmap.

For him, what was working well was being able to put a little bit of effort into a bunch of different ideas he was having for ways to take the product. We would work together in this really early, light phase of whiteboards and drawings so that before he wasted time making Photoshop mockups, he had already worked through problems like, "Oh wait, it would be silly to do a modal window because it would be hard to use." Even as nondesigners, we were able to raise questions when he explained features to us.

Although he acknowledges that it might not be beneficial to every division for every type of project, Kebbekus believes that everybody should at least try putting people together to work on high-value projects. His advice is to slowly start asking team members to work together on bigger, higher-cost projects and gradually formalize pairing people as they get used to the process.

Get people in a room together, because that will start the feedback loop of somebody lifting their head up and saying 'hey, what do you guys think about this stuff? I'm going in this direction.'

The earlier conversations are happening, the more valuable output you're getting--or the more costs you're saving by not going down a path you don't need to go down.


This is an ongoing story, so we're adding to it as news rolls in. Read on to learn why we're tracking lessons in engineer thinking. Or skip below to read previous updates.


What This Story Is Tracking

Software developers and designers are among the most productive workers in any office when things run smoothly, fixing problems and launching features at breakneck speed. But here's a dirty little secret: Developers are also the laziest people at your company, and that isn't a bad thing. Unlike most professions, where output is additive, a good engineer will actually eliminate lines of code from a product over time by finding easier ways to solve problems. Having the discipline to constantly throw out your own work in order to save time requires a specific kind of laziness unique to the technology and design fields.

Still, most companies divide their staff into "technology people" and "nontechnology people," ours included. This divide makes it harder to even know how to talk to engineers, never mind learn from the way they think. So what do your engineers have to teach you? Read on for lessons.


Previous Updates

Everyone in technology is searching for talent. Know any good Rails engineers? Have a good front-end designer? Got any freelance contacts? When it comes to technology talent it's a seller's market, at least here in New York. So I wasn't too surprised to find this Facebook post from an entrepreneur friend of mine:

The full story: Jack Phelps, who is chief product officer at Brightbox, needed a Linux engineer for his NYC-based team, which makes cell phone charging kiosks. (Venue owners install them in public places so that patrons can get a quick charge.) He told me:

We had trouble diagnosing an issue with graphics driver support in Linux on some new hardware and didn't have the expertise in-house to figure it out. Googling for solutions, we stumbled on the blog of Magnus Deininger (ef.gy), a freelance hacker in Germany who does a lot of open source and embedded development and who had recently blogged about solving a similar issue. We asked him for advice, and he said he'd be happy to take a look, although he was getting married that weekend, so he wasn't quite sure if he'd get to it immediately. But sure enough, about a week later he was back with a quick solution for us. He didn't want to be paid because it wasn't too much work and he didn't want to complicate his tax situation with a few hundred dollars of random income. So instead we thought we might send him a gift. He didn't have a wedding registry, so we were out of luck there. We considered bitcoins but it seemed like kind of a lame present unless we managed to get ahold of one mined by Satoshi himself or something. We thought about a Beagle Board, a USB protocol analyzer that had saved our ass with another recent hardware question, but thought he might already have one. We were about to spring for some artisanal Brooklyn fare--Brooklyn Brine pickles or Kings County beef jerky--when one of our programmers had the idea of buying him a drone. The Parrot AR 2.0 quadcopters are pretty awesome for a hacker/hobbyist (I mean come on, you can control them with a smartphone's accelerometer and stream the POV footage back in real time). We're sending him some jerky too to go along with it.

And in case anyone was wondering:

PS: the drone is not autonomous, it doesn't hurt people.

A brilliant move that reminds me of one smart recruitment tactic I saw last summer at the social startup Sonar.me. Last summer, desperate for backend engineers, they posted this compensation package. The package included everything here if you accepted their (market rate!) engineering job:

  • A Zero Gravity Experience (in other words, they’ll fly you to near-space and let you float around)
  • A “Round The World” flight ticket
  • 1 year supply of scotch (“For the long nights,” they say.)
  • Hang Gliding trip
  • Sky Diving trip
  • Surfing lessons
  • 1 Ticket to SXSW and any big data conference
  • 1 year gym membership
  • A night’s stay at the Waldorf Astoria
  • New wardrobe from Uniqlo
  • Makerbot 3D Printer
  • 64GB iPad 2
  • High-def video projector

The actual cost of all this stuff is probably fairly competitive when compared to a cash bonus with equivalent attractiveness. The cost of assembling the package? A few weeks of intern time. I'll be surprised if more startups don't adopt this sort of approach to recruitment, especially as a way of self-selecting adventurous, creative talent.


Distribution is a feature, not a marketing plan. Over at Redpoint, Tomasz Tunguz wrote a great little article last week about what he called "startup judo," the way successful startups find something they can use as leverage against competitors that doesn't fight strength with strength. This secret sauce can be anything, but Tunguz mainly highlighted distribution.

Startups shouldn't rely on more manpower, bigger ad budgets or data access advantages in fields where there is a large incumbent. During its infancy, Google won with distribution. Most internet companies believed search wasn't valuable. Google thought differently. They offered to power all the major portals' search engines and eventually dethroned them. This was Google's secret.

If that description doesn't sound like distribution in the traditional sense, it's because it isn't. The key to Google's early success wasn't pursuing brute-force tactics like shipping more copies of software, gaining more users, or even being the only search engine in town. It was finding a way to insert their slightly better version of search into people's lives without pissing off other players. It's easy to forget that Google found early success by providing search services to other companies. Once they found that initial entrance point they were able to rapidly expand and crowd out the very companies who helped them get a leg up. Google is far from the only one to use this approach. Amazon used cheap Internet book sales as a way into the web services market. Yammer worked its way into thousands of offices by allowing any employee to start a network for free without asking permission of higher-ups. There are hundreds more examples from tiny startups to large tech companies building distribution into their product in order to get a foothold and then starting to expand.

Why are tech companies so successful at this strategy? Probably because they're often run by engineers. If you look closely at these examples, you'll see a combination of two common developer thought patterns we've discussed before: breaking off small chunks of complex problems and using lateral thinking to find less obvious but better solutions. Used together, the two techniques can make a company unstoppable.

Most traditional companies would see distribution as a problem to be solved after a product is finished. Once everything is built, you kick out the engineers and drag in the marketing gurus to convince people to start using it. The genius of successful tech companies is that they realize that the process doesn't have to be linear. You don't need to beat every competitor at everything, and you don't have to be finished with your product before you start to spread it. In fact, they tend to build distribution into their products as a feature from the beginning.

Facebook is perhaps the best example of this kind of logic at work. The original version of the site was almost embarrassingly simple and included just a tiny fraction of the features we use today. But it was enough to be slightly more useful than its competitors, and it was deliberately designed to be more interesting as you convinced more of your friends to join. Once it worked its way into people's lives, it had the foothold to expand into a communication tool, gaming platform and, most recently, an operating system layer for desktop and mobile web.

These examples offer a less intuitive recipe for success that any business can learn from: Think of distribution as a part of your product from the beginning, and don't try for world domination right away. Build the simplest and most useful version of your product first. Once you have a foot in the door, you might just be unstoppable.


How To Ignore The Data And Find The Answer

Businesses have access to more data than ever before. This isn't necessarily a good thing. As Zynga Product Manager Kenton Kivestu writes:

There is a risk of PMs / orgs / companies over-utilizing a “data-driven” approach to the point where decision makers neglect pursuing step-function changing ideas because the “data doesn't support it.” A healthy use of this data requires a keen understanding of when to ignore it.

One way to avoid the temptation of precision is to throw it out the window entirely. Engineers have long used a trick called a "back of the envelope" calculation to examine their assumptions and avoid getting bogged down in time-consuming, precise analyses. The idea is to pose a question and answer it by making assumptions and estimating rather than trying to find the exact number. Usually, you'll either realize that you're asking the wrong question or arrive at a result that's close enough to the right answer to make an accurate decision without devoting valuable resources to finding the precise solution.

This type of question is also called a "Fermi problem," after physicist Enrico Fermi, who famously asked his students at the University of Chicago to estimate the number of piano tuners in the city. But you could just as easily apply it to a data-driven business or product decision. For more information on these types of calculations, take a look at this engineering class presentation recently posted to Hacker News. Tellingly, most of the commenters on the thread are busy arguing about whether you could actually use this approach to find the exact number of piano tuners in New York City. Somewhere else, a smart engineer has cancelled plans for his piano tuner social network without needing to worry about whether Yelp's listing of 15 is complete.
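If you want to see how little machinery a Fermi estimate requires, here's a minimal sketch of the classic piano-tuner calculation in Python. Every input is a labeled guess, not a researched figure--which is exactly the point:

```python
# Back-of-the-envelope estimate: piano tuners in a big city.
# Every number below is an explicit assumption; the goal is an
# order-of-magnitude answer, not a precise one.
population = 3_000_000           # rough city population (assumption)
people_per_household = 3         # assumption
piano_ownership_rate = 1 / 20    # assumption: 1 in 20 households has a piano
tunings_per_piano_per_year = 1   # assumption
tunings_per_tuner_per_day = 4    # assumption: ~2 hours each, incl. travel
working_days_per_year = 250

pianos = population / people_per_household * piano_ownership_rate
tunings_needed = pianos * tunings_per_piano_per_year                  # demand per year
tunings_supplied = tunings_per_tuner_per_day * working_days_per_year  # per tuner, per year

print(round(tunings_needed / tunings_supplied))  # ~50 tuners
```

Change any single assumption by a factor of two and the answer still lands in the same order of magnitude--usually enough to kill or green-light an idea.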


Build A Simple First Version: With People, Not Code

Technology is not always the best solution, because technology is not always the simplest solution. That's a truism you might not expect from a software maker, but believe me, developers are lazy, and sometimes hiring an intern or two is easier than engineering a brittle automated solution. Still, time and again we see executives making decisions based on what scales better--before they've even reached scale! Thinking that machinery is always more "efficient," entrepreneurs seem to prefer paying for technology over paying for people, even when it's clearly an overly complex solution. CD Baby founder Derek Sivers summarizes the dilemma perfectly:

When everyone else is trying to automate everything, using a little human intervention can be a competitive advantage. The problem is when business owners see wages as a cost instead of an opportunity. But people are the best cogs for version 0, because their workflow can change with a few simple directions. That lets you test and iterate quickly on the processes that power your business, validating the workflow you've set up without introducing tons of overhead. The result maximizes income, quality, loyalty, happiness, connection, and all those other wonderful things that come from real human attention.

Netflix and Amazon were both famously un-automated for years, employing human beings at folding tables to sort and send physical media. Here's Evan Baehr from Outbox, the Austin-based USPS-killer, talking about the value (and challenges) of constructing a human-powered software service--and then automating it once it congeals. From this FastCo.Labs launch story:

Last summer we met with Andy Rendich, the COO of Netflix. When Andy showed up at Netflix eight years ago, they were shipping 5,000 DVDs a day--these were DVDs in cardboard boxes and on folding tables they got at Sam's Club. Andy taught us how to do lean hardware development. And, like every other element of lean startup, we only automate or develop hardware for process bottlenecks. He showed up [at Netflix] and they had a 15-step process from intake to reshipment, all manual labor. He applied a rigorous evaluation process: How accurate is it when a human does this? What's the cost when a human does this? What's the expense of a hardware solution and what's the payback schedule? That meant a very high bar for automation. He has a nice anachronistic counsel: be slow to automate.


Breaking down complex problems into simpler parts is preferable to tackling a sprawling mass of problems at once. There are no panaceas in software. Take media startup Upworthy as one example. By some measures, Upworthy is the fastest-growing media website of all time, but they didn't get to 10.4 million monthly unique visitors by publishing the most content, or developing a complex social media strategy, or constantly staying ahead of the news cycle.

But that's not some top-down strategy or hypothesis. Rather, the site has simply mastered a trick engineers have known about forever: Rather than trying to do everything at once, break down the functions of your company into smaller goals. Then put those tasks in some logical order, and focus on one at a time. For Upworthy, job numero uno is enticing surfers to visit a page on the site.

Instead of trying to build a whole viral machine at once, they simply bit off the first problem: writing enticing headlines. Once they perfected that process with an ingenious (and non-automated!) workflow, they were able to focus on the placement of their sharing buttons, and other new features for their massive audience. This reductive approach is much less magical than you may think, which you can infer from the quizzical way they introduce their slideshow on 10 ways to make viral content:

You are dumb at the internet. You don't know what will go viral. We don't either. But we are slightly less dumb. So here's a bunch of stuff we learned that will help you be less dumb too.
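That manual workflow feeds a decision rule simple enough to fit in a few lines. Here's a minimal sketch of the comparison behind headline testing--the headlines and counts below are made up for illustration, not Upworthy's data:

```python
# Toy headline test: show each variant to a slice of traffic,
# then keep whichever earns the highest click-through rate.
variants = {
    "9 Charts That Will Change How You See Inequality": (1200, 84),
    "A Report On Income Distribution": (1180, 31),
}  # headline -> (impressions, clicks)

def ctr(stats):
    impressions, clicks = stats
    return clicks / impressions

winner = max(variants, key=lambda headline: ctr(variants[headline]))
print(winner, f"({ctr(variants[winner]):.1%} CTR)")
```

In practice you'd want enough impressions per variant for the difference to be statistically meaningful, but the principle--optimize one narrow step of the funnel at a time--is the whole trick.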


The path to the best solution isn't always linear. Engineers at Fog Creek Software knew they needed to support both Git and Mercurial in their Kiln version control software. The obvious answer was to build an options page and a wizard for conversion between the two systems. Instead, Fog Creek simply made Kiln work with both simultaneously. The solution may seem obvious in retrospect, but it required a specific kind of lateral thinking that engineers are uniquely good at. Rather than getting bogged down in how to build dual support on top of their existing product, Fog Creek skipped right ahead to thinking about what the optimal solution would look like. The resulting product is far more "awesome" and useful than the usual approach would have been.


Do what's necessary, not what's glamorous. That's the lesson from this excellent profile of Twitter board member Jason Goldman:

Startups are run by people who do what's necessary at the time it's needed. A lot of time that's unglamorous work. A lot of times that's not heroic work. Is that heroic? Is that standing on a stage in a black turtleneck, in front of 20,000 people talking about the future of phones? No. But that's how companies are built.


Stay Tuned For More As We Develop This Story

If you come across any more good examples, drop me a line on Twitter.

[Two Captains: Everett Collection via Shutterstock]

Twitter #Music's Awful iTunes Rank Belies Its Success

Is the "music discovery" term dead yet? Great! Now we can move on to just sharing music recommendations without worrying about all that pretense.

Call it what you want, but music has always been a locus of social activity--if people "discover" new music or bands in the process, all the better. Twitter knows this, as evidenced by Twitter #music product manager Stephen Phillips' remarks at SF MusicTech Summit, when he said "We want to be a sharing experience" rather than a listening destination. So, while some people are quick to call the #music app a failure--judging it mostly by its App Store rank rather than on its merits--I can't help but see #music as a success.

When you use the Twitter #music app, you see what artists, bands, and other Twitter users are mentioning and listening to, in a mobile layout. Using the web version you get a little more information, as that's a sensible place to push extra content. There has been a lot of criticism around the fact that not many people are tweeting through the #music app itself, but I don't think that matters. Whether people share their listening habits through the app or not, the data is still collected, making the app better.

This is a twist on the "network effect" phenomenon, which has been possible only since mega-platforms like Twitter and Facebook began to spawn other products like Vine, Instagram, and Facebook's Messages app. But so far, Twitter #music is the only one of these apps to make use of its platform's power this way. In the traditional "network effect" scenario, the app gains some utility with every additional user. In this case, the Twitter #music app experience improves even if the number of users stays the same, because tweets created in other Twitter clients get sucked into #music, improving the experience.

The wealth of music data collected, processed, and analyzed by the new #music team would be worth the endeavor to Twitter even if the app had only 100 end users. Even if it's not directly through the #music app, people are still tweeting their favorite bands, trying to get others to listen to the new single. People are still watching the Grammys and tweeting about the upsets and victories. The #music app is just an attempt to organize that data and give users a sharing wrapper around their streaming service of choice. If a user has more insight into what others around them are listening to and sharing online, it may open doors to trying new music. That's how music works in the physical world. People talk about the music they listen to; they listen to it in groups, at parties--in this case, Twitter is just trying to catch the runoff from those conversations and collect it inside a dedicated app.

As for the app itself, there are always things people will complain aren't features yet, but better integration of #music throughout all of Twitter, more artists available in the system, and more listening sources are all too obvious not to be on a whiteboard already.

So what if Twitter #music keeps falling in the ranks? What if it falls to the 200th spot out of 500,000+ apps? Probably nothing changes. Twitter is an advertising company in addition to being a social platform, and every ounce of data it has about different industries should only enhance its power to prove engagement and bolster its CPMs. The more it knows about bands, songs, and the music industry, the more accurately it can place targeted ads in front of interested users on Twitter. If someone discovers a new band in the process, call it a positive side effect.

Tyler Hayes contributes to Hypebot and does interviews for NoiseTrade's blog. He often writes about music and the impact tech is having on that industry, which can often be found on his personal blog, Liisten.com. Tyler also runs the site Next Big Thing, which ranks user-submitted links for an interesting hub of music-related content.

[Image: Flickr user Nina Matthews]


Seeing Through Digital Eyes At The Biggest Book Expo In The Country

Book Expo America, BEA, used to be the place to go to get your hands dirty with printer's ink--all the books were hot off the presses. Now the annual industry gathering at New York City's Javits Center has been increasingly invaded by geeks.

I'm one of these geeks, despite the fact that I've been employed in publishing almost as long as BEA has existed (it used to be called the ABA, American Booksellers Association Convention). My professional interests lean heavily (in a distinctly walrus-like way) toward all things digital.

The show floor has the usual suspects, including all five of the big publishers sued by the U.S. Justice Department for alleged e-book price-fixing with Apple (Hachette, HarperCollins, Macmillan, Penguin, and Simon & Schuster). There are 1,000 other publishers as well: university presses, independent publishers, literary publishers, religious publishers, and--in the one segment of publishing that's growing--e-publishers.

It's a shift that started in the fall of 2007, when Amazon dropped the Kindle 1 on an unsuspecting industry--five BEAs ago. Publishers have been in Chicken-Little mode ever since: "The sky is falling, the sky is falling!" Amazon's presence at the show includes Kindle Direct Publishing (KDP) and CreateSpace, its digital self-publishing platform and tools--all software services, no hardware.

The Business of Selling E-Books to Readers & Writers

I find it telling that neither Barnes & Noble nor its Nook subdivision, Amazon's #1 eBook-selling competitor, will be present. Perhaps this has to do with the rumored Nook buyout by Microsoft. Apple is also absent, which is no surprise, though combined sales of e-books through the iTunes and iBooks stores put Apple at #3 in e-book sales.

Meanwhile, the quiet Canadian, Kobo, is #4, and appears to be the most active of the e-book/e-reading device sellers at BEA. Unlike Amazon, Kobo is a completely digital operation--e-books only! Kobo sells its own e-readers: the Kobo Mini, Kobo Touch, Kobo Glo, Kobo Arc, and its newest, the Kobo Aura HD.

The free Kobo Instant Reader app, available for all major device types and operating systems, is not limited to reading e-books purchased from Kobo. It's an open platform that allows you to import e-books from anywhere--a distinct advantage over the Kindle apps and a good reason to favor the Kobo platform, generally.

Kobo also provides Reading Life and Writing Life from its website. The former is a social reading platform for customers. The latter is Kobo's self-publishing platform. Amazon and others have similar platforms, but not with such openness.

Perhaps the stealthiest (though hardly secret) aspect of Kobo's business model is its international e-book presence, which is more widespread than Amazon's. This is partially due to its Canadian roots, but also to the fact that it is owned by the giant Japanese online retailer (and Amazon wannabe) Rakuten.

Publishers Remain Digitally Wary (and Ignorant)

There aren't many big-name software or hardware companies at BEA, which is a shame. There's a lot of mistrust between the publishing world and the tech world. I'd certainly welcome more cooperation along these lines, but I'm the trusting sort.

At the same time, while e-book consumption may be a consumer activity, BEA isn't a consumer show. The digital publishing service and tool companies present divide into two categories, neither of them particularly huge (though the dollar figures can be large).

Higher-end digital services for publishers:

In the future, we'll talk about some of the companies providing these services. They aren't exactly household names, but a few major players are beginning to emerge, including Aptara, SPi Global, OverDrive, Ingram Content Group, INscribe Digital, Integra Software Services Pvt Ltd, and the Perseus Book Group, the only one of these that is also a publisher.

Publishing remained a 19th century-style manufacturing business until Amazon started eating up all the ink. There are very few publishers with any sort of digital expertise, making this a consultants' paradise. There aren't a lot of companies in this category, but the number is growing rapidly, as are the services available.

What are they servicing? Well, publishers have to worry about things like digital copyright management (very different from the odious DRM copy protection schemes). Digitizing print backlists for electronic distribution is hard enough for most publishers, but figuring out what royalties to pay authors has turned into a logistical (and legal) nightmare. And these business concerns have little to do with the new editorial and production workflows required for dual digital and print operations.

These companies tend to fit into the status quo of publishing, offering help to bring old-fashioned methodologies into the digital age, but without upsetting established business models. Working with these vendors is like hiring IBM to build mobile banking apps: Big Banks don't mind spending big bucks, though I'm sure we all know half a dozen sharp developers that could do the same thing faster, cheaper, and probably better, too! (Contact me on Twitter, @HisWalrus, if you'd like to sign up.)

Lower-end tools and services for self-publishing:

This is the realm of startups, and a number of these have booths at BEA. But BEA isn't a technology show, and as such, there don't seem to be any major technology announcements planned during the expo. Here's a sampling of a few more well-established publishing startups at BEA:

  • Bluefire Productions--Best known for their Bluefire Reader Apps, which may be the most widely distributed free e-reading app available. Bluefire has turned their free app into a white-label e-publishing platform using the Adobe Content Server. (Adobe is not at BEA.) For publishers who want to sell content on mobile devices without a digital middleman, like Adobe or Kobo, Bluefire will build you a custom-branded app.
  • BookBaby--An independent, self-publishing platform for authors who aren't able to set up direct accounts with Kindle, Nook, Apple, or Kobo, or who need an easier way to produce e-publishable content. You can also set up print books for traditional distribution through BookBaby. Essentially, it's publishing without a publisher, but with the benefit of the BookBaby blogging platform, which is quite active.
  • Flipick--A downloadable plug-in for Adobe InDesign that converts InDesign documents into HTML5 with CSS3. You need to follow the Flipick design guides, but it's a fairly powerful tool for creating fixed-layout e-books (as distinct from e-pub formatted content, which reflows to fill your device's screen, regardless of size).
  • Vook--Originally a publisher of "Video Books," but now a full-featured publishing platform for multimedia e-books, aka rich content. In addition to their online tools for assembling e-books, Vook offers a slew of publishing services--Cover Design, Copy Editing, Book Scanning, Proofreading, Creation, Distribution, and Marketing--aimed at individuals and enterprise publishers, a new and potentially lucrative market.

I'm not too thrilled by any of these, with the possible exception of Vook. I don't see any groundbreaking ideas that really take on the big questions. For instance, isn't it time to rethink the basics, like what is a book?! Perhaps that's more of an ivory-tower sort of question to ponder.

Conferencing with the Digitally Aware Publishing Minority

For the technology minded, the conference portion of BEA includes Digital at BEA: IDPF Digital Book Conference and Publishers Launch Conference at BEA. The latter is focused on "near-term practical and strategic solutions and tips to help manage the digital transition." Essentially, the experts speak and the industry insiders learn, which is an important part of moving the entire status quo of publishing to a digital-first model. It's very much a change-from-within approach.

For the uninitiated, it's hard to imagine an industry as tied to the past as publishing is: Think Gutenberg! Writing tools change, typesetting has become automated, presses have become incredibly fast, but publishing business models are still dictated by manufacturing: raw materials turned into a physical product that must be stored and shipped. E-books have none of this legacy of overhead, which is why "the digital transition" is such a challenge.

A lot of publishers still don't know the right questions to ask, which has a lot to do with the preponderance of old codgers in the highest, decision-making positions at the largest publishers--codgers my age who, unlike me, never went digital. What the old guard does get is the bottom line: fewer print books and more e-books sold, and, even more striking, lower print profits and greater e-book profits. (If you want numbers, feel free to get in touch on Twitter @HisWalrus.)

The good folks at Publishers Lunch, organizers of the Launch seminars, do a good job of covering this changing world, particularly Michael Cader, of Cader Publishing, someone I've known a long time. Cader is a pragmatist in a confused publishing world. As a publisher, do we:

  • Publish all new books in both print and digital versions (which raises all sorts of corollary questions)?
  • Convert some or all of the backlist (older and out-of-print titles) for digital distribution?
  • Charge less (or more, for enhanced digital editions) for the same titles in e-book or print?
  • Distribute digital editions first, since they're almost always finished before print books are available?

There are no absolute answers to these questions. It's different for every publisher and even for individual titles within the same publisher. There's still a lot of trial and error associated with "the digital transition," but we needn't fear such experimentation. It's the only way to establish new business models.

Getting Down to Nuts & Bolts with the Digitally Savvy

The IDPF, the International Digital Publishing Forum, is the "Trade and Standards Organization for the Digital Publishing Industry." In fact, it is to the EPUB 3 standard for e-book files what the W3C is to HTML5 and all the other web standards. For software developers interested in publishing, the IDPF is the place to be. Interestingly, the attendees at this portion of BEA also tend toward the younger end of the spectrum.

Publishing has never had a standards organization quite like IDPF; they haven't needed one. Book publishing was, and still is in large measure, about custom crafting. This gets us back to that electrifying moment five+ years ago when Amazon created Kindle, and they saw that it was good, though not too many in the industry were watching.

Amazon purchased a company called Mobipocket, which was responsible for the Mobi e-publishing format. Amazon turned Mobi into a proprietary format (shades of Microsoft Word's .doc), and it remains the native format for Kindles. However, Mobi has fallen out of favor everywhere else, and the rest of the e-reading world, with near unanimity, has lined up behind the EPUB standard. Take a look at the IDPF's membership roster, which is long and very complete, though with a single, glaring omission: Amazon.

We'll talk much more about the IDPF and EPUB as this story expands. For now, it's enough to know that without a strongly and broadly supported EPUB standard, digital publishing could become as easily balkanized as web standards were in the bad old days of the browser wars (are the browser wars over yet?).

For example, there were loud jeers and howls of disapproval when Apple introduced its iBooks Author application. It creates nonstandard EPUB files that can only be read on an iPad. Apple claims, with some justification, that its iPad is the only device that can support all of the multi-touch features in iBooks Author, but even Apple diehards are troubled by this forking of the standard. It's too easy to imagine other manufacturers (did I mention that Microsoft is rumored to be buying Nook?) turning what is supposed to be a device-independent standard into a marketing bludgeon for their own advantage.

I haven't talked about the education market, which is also a large part of BEA and perhaps an even larger part of the digital book market. But many of the controlling factors in educational publishing have little to do with software and a whole lot to do with the way in which textbooks are bought and sold, and it's mostly not individuals who make the decision. We'll save this for future discussion, as well.

For now, we can look to BEA and see if there are any discernible shifts or new directions indicated after the four-day gathering. It's a big show, and most of the focus is on blockbuster authors and titles and the big deals being made for next year's books. In this way, nothing has changed. Professor Walrus wishes it would!

[Burning Book: Inga Dudkina via Shutterstock]

How GitHub Uses "Deprivation Testing" To Hone Product Design

In an oddly furnished room in GitHub's office, I sat down recently with Chrissie Brodigan, a design and UX researcher at GitHub, to talk about how designers and developers can measure which features are most vital by removing them and seeing how upset their users get.

Can you tell us about deprivation studies in theory?

In theory, what you do first is give your users or your customers something new to play with. They get familiar with it and they start to develop patterns around it. You learn how those patterns are working and--if it is an interface change--what they are experiencing in the new interface. How is the usability? What do they like? What don't they like?

How are you using feature deprivation here?

We run it over the course of a few days, because on day one, change is always hard. But after a few days, users start to get used to their new surroundings--and then you take those new surroundings away from them. That last day is what we consider the actual deprivation study: You are putting the old thing that they were used to back in front of them. Then you measure the emotion around those three days of changes. Are they disappointed to have the old thing back? Do they miss the new thing?

What questions are you asking yourself when you design these studies?

We want to know what we can learn about that experience, and about the trajectory of the user over this new thing we introduced. Maybe the feedback goes from “It was hard to use at first,” or “I was frustrated,” to “I got adjusted to it. It was not so shocking,” to “Oh man! Now it’s gone.” What is the pattern that develops there?

How long have you been testing this way?

This is a brand-new program. Actually, our cameras just arrived a couple of weeks ago, so we are going to be doing some of that testing where we are not only capturing what is on screen, but also capturing what's going on with facial expressions.

How do you do deprivation testing without a fancy camera setup?

Through a technique called a diary study, where you roll out the new feature and you create a diary. We actually used our own tools to do this. It turned out to be a really interesting way to work, because I just created one repo and the entire group participated in that workflow. Right away the designer who is working on the product is able to keep up with it in real time as well, which made it fascinating. Usually, when you do a diary study, you have to wait for the person to bring it back to you at the end. This way, we got to gauge the response over a few days as it happened, so that was really fun for us.

Do you test on employees or do you bring in people from the world?

We always pilot internally first. [GitHub] has had some real success building for what we need, and by doing our pilot studies, we still get to keep up with what we need. But you also want to go out there into the world and actually get that contact with people. What we've learned from that is, as GitHub has gotten bigger, we have all of these new types of users, like people who are not coders, for instance. We have them represented in our own workflow here, so we are building for ourselves, but we are also changing internally, and this makes the internal testing really fun and complicated. I would say our empathy levels are increasing.

Where did you learn the concept of a deprivation study?

From Mozilla, actually. I came to GitHub from Mozilla, and working there was the first time I had ever done one. I almost feel like we were raised that way, right? Our parents give us toys and we enjoy them and then something happens and they take them away. Deprivation studies actually happen a lot in real life, right? People are always experimenting with things. You might try, let’s say, a new type of olive oil, and then you run out of it; you might have some other olive oil in your house, but you’re really disappointed, because you really miss that new olive oil. You develop this sensation like “Wow, I really liked that thing; that thing that I enjoyed.” It's the same feeling when you break your phone, right?

What's the underlying principle here?

When you do not have access to the thing that you need, you end up learning a lot about the habits people form and the emotional response around them. Something great about deprivation studies is when you hear from the user, "I did not really miss the thing that you gave me. It did not really matter either way." It might sound disappointing, but maybe it's a good thing. Maybe we were trying to be subtle in the change that we were making, and it was too subtle? Or maybe people didn’t notice that anything changed. Maybe the change is not as provocative as we had thought it would be. People, in their diary studies, would write things like, “I’m so sorry, but I didn’t really notice anything.” That is great feedback.

How long do you usually do these studies for?

With diary studies, I try to run them for three to four days: three days for getting a user to develop a pattern around something, with the fourth day being the deprivation. The fifth day of the week is when I compile all the data into a report that we share.

Does this work better for some sorts of features or what is the best use case?

The best use case is when we have a new feature or a core change to the UI and we want to get that out into your hands. We’ve got to figure out how we include people who are brand new to the service and people who are sort of veteran users of the service, so we can also gauge what a new user thinks about this thing we changed. In the future, after we launch new features or different types of UI, we would want to wait for a short period and then grab people who are brand new to the service and see from a usability standpoint, if they are just learning and they’re not unlearning an old habit.

Is this something that is useful mostly for controls and interactive things, or can you do this with, say, branding too?

You can definitely do it with all kinds of design. One of the things that came up in this recent study is we used different colors. Color turned out to be a real hot button in this study. I am just compiling all the responses now and I am noticing that people universally had a strong reaction to a particular color that got used in this interface change. It’s interesting too because the other problem with “easier methods of testing,” like split testing or whatever, is that you just don’t get any feedback--and so it’s really hard, because you can never compartmentalize and test a billion things at once. You end up with a conclusion like, “People don’t like this, but I don’t know why.” In this case, it is like “Actually, the design is way better, but that color is too strong.” Then at least you hear it in that context.

Are there other tips or tricks that people should know when trying this form of testing?

Yes. I would say make sure that you give a user enough time to develop a usage pattern around the thing. The study that we did was four days, but I actually launched it on a Friday, so that it was less intrusive into a Monday workflow. They had time over the weekend, but I did not count that towards the four days. It let them settle in and actually experience it without feeling a time constraint. It was a four-day study, but we built in seven days, so that they could take their time and did not have to get stressed out about it. But do not let it go on too long: Three days, in my experience, is enough for a pattern. Stay out of the way during those days. When users are giving you feedback in daily diaries, do not get hung up on day one--look forward to what the story is going to tell you at the end and just enjoy the ride. The deprivation part, that’s the gift, the exciting part of this study. Just get excited about what it is going to be like after that.

[Image: Flickr user whyamiKeenan]

Google Is Learning How Smartphones Impact In-Store Shopping

Seventy-nine percent of smartphone owners are what Google calls “smartphone shoppers,” meaning that they use their smartphones at least once a month in stores. That’s a lot of people. If there are 130 million smartphone users in the U.S., that’s about 103 million Americans using their smartphones to prepare for shopping or to look things up while there. It’s not surprising that Google wants to get in all those heads and see what’s going on. There’s money there!

The Google Shopper Marketing Agency Council worked with M/A/R/C Research to assess what people are doing when they use their smartphones to get ready for shopping or while in a store. During Q3 of 2012 they conducted qualitative studies to get a sense of overall trends and then used surveys completed by 1,500 smartphone users to gather data. The report only looked at how smartphones impact in-store purchasing and did not extend to people shopping online on their smartphones.

The most surprising finding was probably that people who consistently use their smartphones as part of their shopping spend 25% more than people who use them for shopping only occasionally--and in the report's example categories, the gap was even wider. In health and beauty, sporadic users bought $30 worth of products on average, while heavier users averaged $45 in purchases. In the appliance category, the former group averaged $250 in spending, while the latter spent $350.

Less surprising were the observations that people largely use their smartphones for price comparison, and that they are increasingly answering product-related questions by looking up information on their smartphones instead of asking salespeople. Shoppers were especially likely to “self-help” when buying appliances and electronics, even though these products tend to be expensive and potentially confusing. Though Google offered no analysis of the observations in the report, it makes sense that customers making major purchase decisions might steer clear of salespeople to avoid pressure or biased advice.

From Google’s perspective, the most important finding was probably that a vast majority of shoppers using smartphones, 82%, said they looked for information through search engines. 62% navigated to store websites and 20% went to deal aggregators. And shoppers preferred mobile sites to apps 65% to 35%.

Google highlights the way these data can inform marketing and strategy decisions, emphasizing a strong online presence for brick-and-mortar stores and the importance of considering showrooming in pricing decisions. These conclusions obviously imply using Google services, like Search, Maps, and Google+, to engage smartphone shoppers, but the data can be valuable no matter what. One concern about the report, though, is that it may not be generalizable to all smartphone users, since an increasing number of consumers now favor online over in-person shopping overall. A customer may be a heavy smartphone user when he or she goes to a store, but may not actually go to stores very often.

[Image: Flickr user lyzadanger]

The Geek-Boy Irony Behind Mark Zuckerberg’s Tech Lobby

It is striking to see the efforts of Mark Zuckerberg and his fellow tech lobbyists juxtaposed with the grim job market that even many college graduates are facing today. Today's young people are the most educated generation this country has ever seen, with record levels of college attainment. Why, then, are hot Silicon Valley corporations struggling to fill attractive jobs?

Code.org, the high-profile industry effort to push for more computer science and programming in K-12 education, correctly highlights the growing need for programmers and the dearth of educational opportunities to learn to code. But the reasons for the high-tech talent gap run deeper than a simple lack of curricular offerings.

The statistics on the code.org site are silent as to issues of diversity and equity, and fail to point out that coders are overwhelmingly white, Asian, and male. If almost all non-Asian minorities and women feel that coding is not for them, we have reduced our potential pool of high-tech talent to a fraction of the population.

Despite a deserved reputation for progressiveness, the tech sector is highly exclusionary to those who don't fit the geek stereotype--and this tendency is getting worse, especially in Silicon Valley. You might have heard, based on 2011 numbers, that only 25% of the U.S. high-tech workforce is female, and the percentages have been in steady decline since the '90s. The numbers for minority women are even more dismal. Hispanic women represent 1% of the high-tech workforce, and African-American women don't fare much better, at 3%. The better the jobs, the lower the proportions are of women and non-Asian minorities. Despite the diversity of the population of the region, Silicon Valley, which boasts the highest salaries among tech regions, fares much worse than the national numbers.

Even without explicit discrimination, we see women and non-Asian minorities absent from even entry-level tech jobs. Harper Reed, CTO of Obama's 2012 re-election campaign, had both a mandate and a strong personal commitment to hire Black, Latino, and women coders. Despite his best efforts, he was not able to fill his targets. “The campaign needed to represent America, and we weren’t able to do that within the engineering team,” he said at the time. “I was able to talk to huge organizations that were involved in getting women and non-Asian minorities into technology, and they weren’t able to help us. It was incredibly frustrating.” The irony is both huge and unmistakable: A geek culture that purports to embrace values of diversity and inclusiveness has not done enough to evangelize to women and non-Asian minorities. Even if efforts like code.org and aggressive recruitment by insiders like Harper Reed are successful, it is unlikely we will see coding expand beyond White and Asian geek-boy culture without a concerted effort to address issues of diversity and equity head-on.

For two decades, I have studied the unique characteristics of geek learning. Unlike learning in more established fields, geek learning is highly dependent on informal, problem-driven, and peer-to-peer social learning. Geeks often have a hostile relationship to formal education. Rather than sit through a pre-programmed curriculum with problems and solutions laid out in advance, geeks like to tinker and hack to solve new problems and innovate. While classes can be a source of important practicums and skill development, true success in the geek world is not conferred through seat time and formal credentials but by a track record of identifying and solving interesting new problems in new ways.

My 12-year-old son is half White and half Asian, growing up in a household with both parents involved in high-tech research. He will not have an opportunity to take a single coding class in school, and yet he is already a competent Minecraft hacker, loves building things in Maya, and has done programming experiments in Scratch and BASIC. His coding and hacking interests have grown mostly through an iron-clad cohort of geek and gamer friends. Unlike his diverse friendships formed through athletics and other interests, all of his geek friends are White boys, with parents who are entrepreneurs or creative professionals.

Recruitment into the life of a coder happens well before kids walk into the classroom. The peer groups that young geeks form are as critical to their learning and development as tech experts. Kids become coders because they are friends with other coders or are born into coder families, which is why the networks can become exclusionary even when there is no explicit racism and sexism involved. It’s about cultural identity and social networks as much as it’s about school offerings or career opportunities. Kids need to play and tinker with computers, have friends who hack and code together, and tackle challenging and new problems that are part of their everyday lives and relationships.

We know that the more diverse the ecosystem of talent, the more innovative are the solutions that result. If we really care about the talent gap in high tech, innovation, and entrepreneurism, we need to do more than look overseas, or push classes and school requirements at kids. We need to build a sense of relevance and social connection into what it means to be a coder for a wide diversity of kids. Groups such as Black Girls Code, Mozilla’s Webmaker Mentors, Urban TxT, a growing network of makerspaces in diverse communities, and the vibrant community of young Scratch programmers point to ways in which girls, and Black and Latino kids can be recruited into coding culture and social networks. The talent is there to be developed if we can diversify our imagination of who belongs to the coding inner circle, and how we might invite them to join.

Mimi Ito is the John D. and Catherine T. MacArthur Foundation Chair in Digital Media and Learning at the University of California, Irvine.

[Image: Flickr user Thomas Leth-Olsen]

Why Apple Needs A Brand Overhaul

As a market leader, Apple finds itself in a position that would have seemed impossible to dedicated Mac-heads two decades ago: a strange parallel universe in which the band that sells the most also happens to produce the best music. However, as streamlined as Apple is for a large company, great success and a certain middle-agedness have crept into its marketing.

Logic would say that this is the moment for other companies to emulate the Apple of 1984: presenting themselves as sledgehammer-wielding freedom fighters seeking to liberate users from a brushed-aluminum elite. As many will have noticed, this tried-and-tested approach is exactly the one Samsung has taken of late. The company is considerably outspending Apple on phone advertising in a concerted iNeedling effort, one strongly reminiscent of the approach Apple itself might have taken in years past. “It doesn’t take a genius,” sniped one of Samsung’s advertisements for the Galaxy S3, lampooning the ill-received “Genius” spots launched (and ditched) by Apple in mid-2012.

It may not be quite the ideological counterpart to the original “1984” Macintosh commercial, but as a B-side to the “Think Different” campaign (which itself satirized IBM’s “Think” slogan) it fit nicely enough. Even more Apple-esque was the Samsung Galaxy commercial which preceded the launch of the iPhone 5. It depicted a long line of drone-like Apple fanboys outsmarted by the hipper, more irreverent Samsung user. It wasn’t steeped in quite the same dystopian imagery as Apple’s Orwellian classic, but it still made the same overriding point.

This isn’t to suggest for a moment that Apple’s advertising post-Steve Jobs has been bad. Zooey Deschanel’s stoned-looking Siri ad might have been mocked in certain corners, but it still nailed Apple’s target demographic and created buzz. What can’t be said, however, is that the advertising has been especially memorable; certainly not the rabble-rousing stuff many expect of a company that has writ its world-changing ambitions so large. What it more closely resembles, in fact, is Microsoft, whose celebrity endorsers used to smack consistently of trying that bit too hard. They've since mostly ditched the celebrities for the man in the street.

The anti-IBM stance Apple took in its early years (and, to an extent, the ongoing battle with Windows) was an extension of the mass-society critique that had been waged for decades before the personal computer came into being. To quote historian (and technology critic) Lewis Mumford, the machine Apple was raging against was the high-tech equivalent of “uniform, unidentifiable houses ... inhabited by people of the same class, the same income, the same age group, witnessing the same television performances, eating the same tasteless pre-fabricated foods, from the same freezers, conforming in every outward and inward respect to the same common mold.”

To Mumford, mass society didn’t just mean that everyone had the same products; it meant that everyone had the same bad products. However, when it comes to personal computing and consumer electronics, that is no longer the case. By creating a category of product more in line with a luxury car or a designer clothing brand, Steve Jobs raised the game for everyone.

Apple has just been named the world’s most valuable brand for the third year in a row, so it would be foolish to suggest that it requires anything so drastic as a wholesale branding overhaul. At the same time, as more rival companies than ever try to think (and act) like Apple, genuine attempts to think, if not differently, then certainly differentially, become more of a challenge. There exists today an entire generation of consumers who have never known Apple as anything other than the electronics company that led the market--whether in MP3 players with the ubiquitous iPod, in smartphones, or in tablets.

Earlier this year, Tim Cook said, “Our core philosophy is to never fear cannibalization. If we don’t do it, someone else will.” He may have been talking about Apple’s product lines, but he was also speaking about a company whose identity has always been one of its most tradeable commodities. It might not require the kind of shake-up Apple needed during its mid-1990s low, but would a well-placed sledgehammer aimed at Apple’s own values really miss?

Because if Apple doesn’t do it, someone else will.

Luke Dormehl is a journalist, author, and award-winning documentary filmmaker. With a particular focus on technology, cinema, and pop culture, his writing has appeared in dozens of online and print publications, while his films have been screened at the Cannes festival and on Channel 4. He is the author of The Apple Revolution: Steve Jobs, The Counterculture, And How The Crazy Ones Took Over The World. You can follow him on Twitter @lukedormehl.

[Image: Flickr user Karen Blaha]

Why Shazam Is A Force For Good In The Music Business

If you’re still paying for tickets to see live shows, pat yourself on the back: You’re a supporter of yet another declining part of the music industry. Even Justin Bieber had to cancel a show in March due to empty seats!

Unfortunately, however, Ticketmaster doesn’t seem to care too much about ticket-buying music fans. Last month, the company settled a $23 million class action lawsuit in federal court over its sketchy “rewards program,” which offered little to no perks and a lot of hidden costs.

Here's what they did: After buying a ticket online between September ‘04 and June ‘09, Ticketmaster customers were immediately enrolled in a rewards program that cost $9 a month--reportedly with no signup notifications. Plaintiffs in the case argued they were unaware of the fees (which were charged to the credit or debit card used for their ticket purchase). Ninety-three percent of those who enrolled in the program didn’t redeem any of the online coupons it offered. When all was said and done, Ticketmaster’s fooled patrons paid about $85 million--or $75.89 each--for the program, and it took the average person eight months to cancel the ongoing payments.

Again, Ticketmaster?

This isn’t the only class action lawsuit that has been filed against Ticketmaster. In 2012, consumers accused the company of profiting from bogus processing fees; class members were entitled to a refund of $1.50 per ticket (for up to 17 ticket orders).

Customers affected by the deceptive rewards program are entitled to a refund…kind of. In the spirit of continuing to screw over musical patrons, Ticketmaster’s refund only returns up to $30 to each customer. And if too many people sign up for this refund, payouts may be reduced even further, because the settlement caps the total at $23 million (including $4 million in legal fees). Let’s do that math, or at least the easy part of it. The average customer is out about $75 and will be returned at most $30--less than half of what they are owed. And since Ticketmaster made roughly $85 million from the scandal while paying out no more than $23 million, it’s still turning a profit.
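Spelled out, with the figures from the settlement coverage above (the split of the cap is my arithmetic, not an official accounting):

```python
# Rough arithmetic on the Ticketmaster rewards-program settlement.
# All inputs come from the reported figures above.
total_collected = 85_000_000   # what enrolled customers paid in fees
avg_paid_each = 75.89          # average paid per enrolled customer
settlement_cap = 23_000_000    # total settlement, legal fees included
legal_fees = 4_000_000
max_refund_each = 30

customers = total_collected / avg_paid_each           # ~1.1 million enrollees
refund_pool = settlement_cap - legal_fees             # ~$19M actually available
full_refunds_covered = refund_pool / max_refund_each  # ~633,000 people

print(f"{customers:,.0f} enrollees; pool covers {full_refunds_covered:,.0f} "
      f"refunds of ${max_refund_each} each")
```

By that math, the pool can make the capped $30 payment to barely more than half of the people who were enrolled--hence the warning that payouts may shrink further.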

Music fans and musicians alike are tired of much-hated, rarely innovative companies like Ticketmaster exploiting and depersonalizing their love for the arts. Buyers despise the hefty service fees and surcharges, while artists resent the monopolization of the market and their inability to obtain contact information for their fans. Unfortunately, the 2010 Ticketmaster/Live Nation merger leaves little alternative. Even StubHub and TicketsNow are owned by giants--TicketsNow by Ticketmaster and StubHub by eBay. It's an ugly, impersonal business.

Shazam, The Model Citizen

The days of hefty fees, overpriced music, and hidden catches might be far from over, but inventive new music companies like Shazam seem to have the right idea. Around the same time Ticketmaster was adding to its infamy, Shazam announced that its music discovery app is responsible for one out of every 14 paid song downloads. Last week, comparing Shazam’s yearly gross of $300 million to worldwide download stats, Digital Music News estimated the company contributed 7.2% of the $4.1 billion global gross of downloaded tunes.

If you’re unfamiliar with Shazam, the app (which ranges in cost from free to $4.99 a year) uses your smartphone’s built-in microphone to record a short sample of any music being played, then compares that sample to a central database to find a match. Once a match is found, information about the artist, song title, and album appears on your screen, along with links to purchase the music.
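Shazam's actual matching pipeline is proprietary, but the general family of techniques it belongs to--landmark or "constellation" audio fingerprinting--can be sketched in a few lines. Everything here is a simplified assumption: the function names are mine, tone() stands in for real decoded audio, and a plain dict stands in for Shazam's central database:

```python
# Toy landmark-style audio fingerprinting: reduce audio to hashes of
# spectral-peak pairs, then match against a precomputed database.
import numpy as np

def fingerprint(samples, frame=4096, hop=2048):
    """Build a set of (peak, peak, gap) hashes from successive frames."""
    peaks = []
    for start in range(0, len(samples) - frame, hop):
        window = samples[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        peaks.append(int(np.argmax(spectrum)))  # strongest frequency bin
    # Pairing each peak with nearby peaks makes the fingerprint robust
    # to where in the song the recorded sample begins.
    return {(f1, f2, gap) for gap in (1, 2, 3)
            for f1, f2 in zip(peaks, peaks[gap:])}

def best_match(sample_print, database):
    """Return the catalog track sharing the most hashes with the sample."""
    return max(database, key=lambda title: len(database[title] & sample_print))

def tone(freq, seconds=2.0, rate=44100):
    """Stand-in for decoded audio: a pure sine wave."""
    t = np.arange(int(seconds * rate)) / rate
    return np.sin(2 * np.pi * freq * t)

# "Central database" built offline; the short clip plays the role of the
# sample recorded through the phone's microphone.
database = {"song_a": fingerprint(tone(440)), "song_b": fingerprint(tone(660))}
print(best_match(fingerprint(tone(660, seconds=1.0)), database))  # -> song_b
```

The real system works on noisy microphone captures, stores millions of tracks, and indexes hashes for fast lookup, but the shape of the problem--sparse features in, candidate votes out--is the same.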

The rate at which the Shazam team is innovating is incredible. On May 23, the company released a feature called "auto-tagging" that lets your iPad listen to and log every song playing around you, whether it's at a bar or on the TV across the room. After songs are identified, they're automatically sent to a queue in the app. The coolest part of this update is its ability to run in the background of whatever you’re doing, regardless of other open apps--it literally captures the soundtrack you're living in.

Shazam EVP David Jones says he expects Shazam's share of song purchase conversions to double every year. Make it easy for fans, give them features and experiences that truly amaze them, don't rip them off, and yes! People will actually pay for music.

Jackie Shuman (@jackieprobably) is a music supervisor and the East Coast Director of Platform Music Group. She specializes in film, television and commercial placements for independent artists.

[Image: Flickr user Jason Empey]

Why Aral Balkan Thinks iOS Can Still Compete

Respected iOS designer and developer Aral Balkan is currently working on several projects, including an experiment with a personal drone that follows you around and helps you out in various ways, a new unobtrusive template engine for JavaScript called Tally, and a Mac app that makes it stupidly simple to curate and share information. You can find out more about him on his website and you can follow him on Twitter @aral. Interested in teaching your kids how to code? Check out Code Club.

Since its introduction in 2007, iOS has widely been considered the shining example of what a mobile OS should be. But we all know how much competition has heated up. With Android moving ahead at a breakneck pace, is iOS still the best?

When iOS was released, it was the best mobile OS on the planet. However, since its inception, things have changed. iOS is no longer just a mobile OS--it is part of a greater ecosystem that includes OS X. And that’s how we should be judging it. Mobile devices do not exist in a vacuum where we use them to carry out a set of dedicated, self‐contained tasks. Instead, they plug into our lives to help us with tasks that can span multiple devices and both the tangible and virtual worlds. My mobile phone might have a great contacts app, for example, but that is useless if it doesn’t also automatically synchronize those contacts seamlessly to my other devices. (Manual sync just doesn’t cut the mustard any longer.) The Twitter app on my tablet may be great but can I continue reading my timeline on my computer or phone afterwards? This is where iOS and OS X really shine and it was the introduction of iCloud that has made this possible. So, is iOS the best mobile OS on the planet today? Yes. But it is more than that: It is the best continuous‐client OS out there today and that is even more important.

Speaking of iCloud: Some developers say it's a nightmare. Is Apple going about the cloud totally wrong? Is someone doing it better? How should it be done?

iCloud is awesome… when it works. Unfortunately, for a service that should provide a seamless continuous-client experience between all your Apple devices, it currently doesn’t work as well as it should. And it is a bitch to develop for. For my own app, I’m building my own seamless synchronization system from scratch because iCloud doesn’t do everything I want it to (while simultaneously doing too much) and because it’s actually easier for me to do that than to use the atrocious APIs that iCloud currently ships with. This is one area in which Apple has a long way to go. I wouldn’t say it’s as bad as the Maps fiasco but it is a somewhat distant second.

But what's the worst thing about developing for iOS?

I really love Xcode and I don’t have any problems with the development tools per se. However, the weakest link for me is the App Store model. For one thing, it needs free previews (or apps that have a preview period). Go ahead, try out my app for an hour or a day and then decide if you want to buy it. You can’t do that today and I feel that that lets users down. I also feel that it pushes users towards free apps, as they don’t want to risk their money on something they haven’t tried. The other thing Apple must do is to allow developers to use in‐app purchases for service subscriptions. The really interesting apps today all have a server‐side component and that’s an ongoing expense for developers. It makes no business sense whatsoever to sell a product at a fixed price if you have ongoing costs to maintain it. Allowing developers to sell service subscriptions via in‐app purchases will fix this. As far as I am aware, Apple is allowing some partners to experiment with this so I hope it won’t be long before we see it opened up to the greater developer community.

Is there anything Apple can learn from Windows Mobile or Android?

Android has a very different business model. Google took Microsoft’s model with Windows and implemented it for mobile and that’s how we got Android. It really is the Microsoft Windows of the mobile world. Google has never had a consistent design vision for Android. As such, I don’t think there’s much that Apple can learn from them on that front. If anything, Apple needs to match the stability of Google’s web services. Windows Mobile--although it brought with it a fresh visual language--is not a great continuous client platform. Your phone doesn’t seamlessly integrate with your other devices as do Apple and Google’s offerings. Even the visual language--although flat design is all the rage recently--has its problems (affordances are lost and landmarks are weak). As such, I don’t think Apple has anything to learn from Windows Mobile.

You've heard about the rumored black and white "flat" overhaul Jony Ive is doing to iOS. If true, is this necessary?

As Steve Jobs famously said, ‘Design is not just what it looks like and feels like. Design is how it works.’ In fact, how something looks and how it works are complementary: How something looks can tell you a lot about how it works. The canonical example is that of a door handle: If it is shaped like a knob or a handle that you can grip, you instinctively know that you should pull it. If it is a plate, you know--without instructions--that you should push it. In other words, the way a thing looks tells you about how it wants to be used. In design, we call this an affordance. Where we have problems is when an object creates an expectation because of its appearance that it does not meet in behavior. We’ve all seen the door with the handle that you instinctively pull but which you have to push to operate. A good example is the calendar app on OS X: It looks like a paper calendar and thus creates the expectation that you should be able to turn the pages in a natural manner. And yet you cannot (you can, however, on the calendar app in iOS). This is also an example of skeuomorphism--making an object mimic another material. I’m saying this because I would be very surprised if Jony Ive is changing the way iOS looks separately from how it works. If anything, he will be evolving the two in tandem. That said, there are some skeuomorphic elements in iOS that create expectations which they do not meet and which, it would be safe to assume, will have their affordances tweaked in the next update.

Looking at iOS, Windows Mobile, Android, and Facebook Home, are there any similarities that can be drawn from them that say, "This is what a mobile OS needs to be successful?”

For a mobile device to be successful today, it must integrate seamlessly into your life--into your existing data, networks, and workflows. It needs to provide a great continuous-client experience. We can no longer judge the value of a mobile device on its own merits alone; it is the holistic experience that matters. So the companies with the greatest control over the whole experience will be in a position to deliver devices that provide the best experiences. Currently, Apple is unique in its design-led approach and control over nearly the whole experience. Google has such control only over the devices that it itself ships, not those that are customized by manufacturers or carriers. To be successful today, you need to "make the whole widget," as Jobs used to say, because the weakest part of a user experience will undoubtedly be the bit that is not under your control. So if you want to know who is going to create a great mobile device, look at the organization: Is it design-led? Does it have control over the whole experience? If it ticks those boxes, you're off to a good start.



Tell me about Breakingthin.gs. What is a "personal reboot narrative"?

Last year, I decided to simplify my life, regain focus, and document the process on a hand-rolled blog that I called Breaking Things. Making the blog itself (from scratch, without using a blogging engine) was part of the process. I called it a reboot because I was, in effect, rebooting (like a computer). When a computer reboots, it clears its memory and gets a fresh start. That's what I tried to do, and it was a great experience. Sometimes you just have to stop, take a look around you, and reevaluate where you are and where you want to be--to look at the world with a fresh set of eyes. In fact, I would argue that you should do this not just once in a while but on a regular basis.

I’ve heard you say before, "We have to realize that a bus information app is not a bus information app; it is an object that affects and alters people’s lives, either for better or for worse." You seem to think that apps have a cosmic significance. Isn't that a bit far-fetched?

Not at all. If you think about it, we all have a tiny bit of time in this world and then we die. And we spend that time having experiences--experiences with people and experiences with things. It's the quality of these experiences that determines the quality of our lives. So consider for a moment just how much of your life today is spent interacting with things as opposed to people. And now think about how those experiences make you feel. Do they empower and delight you, or do you end up getting frustrated and angry at things that either don't work or don't work well? We live in a designed world. As designers--as the people who craft experiences--we have a profound responsibility not to take for granted the limited time that people have in this world. We have a responsibility to make that time as painless, as empowering, as beautiful, and as delightful as we can.

You’re a director of Code Club. What is that?

Code Club is a network of volunteer-led after-school coding clubs in the U.K. that was founded last year by Clare Sutcliffe and Linda Sandvik, and currently we have close to 800 clubs around the U.K. with thousands of kids learning to code. (And we're always looking for new volunteers and new schools, so if you want to help inspire the next generation of makers, do come join us.)

Here in the U.K., a few MPs say that all children should be taught to code--that it's as important as learning to read and write. What do you think about this?

When I was seven years old, my father brought home an IBM PC and a BASIC manual and set them in front of me. Then he told me a very important thing. He said: "Go ahead, play with it, you can't break it." (I did, much later, prove him wrong by spectacularly blowing the computer up after tinkering with it, but that's a different story.) It's because of him that I'm where I am today, doing what I do. I started out making simple games in BASIC--star fields, star ships… The thing is, if you can create your own universe at seven, that's the sort of magical spark that stays with you for your whole life. Today, I try to ignite that same spark in the next generation as one of the directors of Code Club. Coding--along with design--truly is the new literacy.

[Image: Flickr user William Hook]


Did Apple Preview iOS 7 In Their New WWDC App?


The event that is the iOS developer's equivalent of Christmas is less than one week away: Apple's Worldwide Developers Conference, or WWDC. And because this WWDC will see the unveiling of a Jony Ive-designed iOS 7 and OS X 10.9, it's probably safe to say it's the most anticipated developers conference since Apple announced it would allow third-party apps on the iPhone back in 2008. So what do we know about WWDC 2013 so far--and, more importantly, about iOS 7?

At last month's AllThingsD conference, Tim Cook confirmed that this WWDC will show off iOS 7 and OS X 10.9, and admitted that the rumors of a Jony Ive-inspired iOS were true: The next iOS to be shown at WWDC is all his work. "Jony is really key," Cook told AllThingsD's Walt Mossberg when asked about Ive's involvement.

From a company as tight-lipped as Apple, such an acknowledgement was a rare revelation. What Cook didn't elaborate on was whether the rumored “black, white, and flat all over” revamp of iOS 7's design was accurate. That tidbit comes to us from Mark Gurman at 9to5Mac:

Apple Senior Vice President of Industrial Design Jony Ive has been leading a thorough overhaul for iOS 7 that focuses on the look and feel of the iOS device software rather than on several new features.

Sources have described iOS 7 as “black, white, and flat all over.” This refers to the dropping of heavy textures and the addition of several new black and white user interface elements.

And what are we to believe from those rumors? Apple’s release of the official WWDC app today might have given us some clues as to their validity. As Christina Bonnington writes for Wired:

The app features simple black text over a white background and nary a gradient or texture in sight. The app icon is perhaps the biggest change from Apple’s current mobile app lineup. It’s far more simply designed than Apple app icons like Keynote or the Apple Store app.

If I were a betting man, I'd say Christina's contention is correct. Ever since OS X 10.0, the keen-eyed have been able to spot future design trends in the OS by watching small updates to OS X apps. For example, iTunes will often get a tweaked UI element--like a scroll bar--before it's rolled out to the rest of OS X in the next major update. I think it's safe to say the same could be true for the UI element changes found in the WWDC app.

But as cool as a new look to iOS might be for users, it could cause potential headaches for developers. Darrell Etherington of TechCrunch explains why:

That means a new look for app developers trying to achieve a “native” look on the platform, which could actually result in a lot of work for some to bring their apps up to spec. Lately, it seems like a lot of app developers have deviated from strictly copying Apple’s iOS design principles, however, and offered their own take, which seems to involve more and more flattening of visual components. But even slight changes can result in big headaches for designers trying to achieve a certain effect.


This is an ongoing story, so we're adding to it as news rolls in. Read on to learn why we're tracking WWDC. Or skip below to read previous updates.


What This Story Is Tracking

With the iPhone and iPad facing intense competition, this WWDC is particularly make-or-break for Apple. The company needs to hit a home run with iOS 7. Here, we'll track all the news about what the OS will look like, how it will work, and all the new SDK and API goodies open to developers. Be sure to check back often for all your pre- and post-WWDC coverage and discover what the future of iOS development means for you.


Previous Updates

[Photo by Flickr user Yutaka Tsutano]

Tracking: Drones And The Law


11:00 a.m., 06/04/2013

An API For Military Drone Strikes

Application Programming Interfaces (APIs) are tools that, among other things, let websites and apps include information from outside content partners. Josh Begley, a 28-year-old New York University student, just unveiled Dronestream--an API which lets developers analyze, sort, and parse data on all reported U.S. drone attacks in Pakistan, Yemen, and Somalia. "With a simple browser request, designers and developers can build data visualizations about covert war," Begley says on his website.

Data for Dronestream comes from the Bureau of Investigative Journalism, whose military UAV documentation efforts are discussed further down this tracker. In an interview with David Axe of Wired's Danger Room, Begley discussed a few trial programs he built using data from the API. These included a search function for drone fatalities and another that "assembles every covert drone attack on a site, hides them behind numbered blank tiles, and lets you filter through the various years and countries where these attacks happened."
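To make "a simple browser request" concrete, here's a minimal Python sketch of the kind of query Begley describes: pulling the dataset and tallying reported strikes by country. The endpoint URL and JSON field names ("strike", "country") are assumptions for illustration only; Begley's own documentation is the authority on the real schema.

import json
import urllib.request
from collections import Counter

API_URL = "http://api.dronestre.am/data"  # assumed endpoint

# Fetch the full dataset in one request.
with urllib.request.urlopen(API_URL) as response:
    payload = json.load(response)

# Assumed shape: a top-level "strike" list whose items carry a "country" field.
strikes = payload.get("strike", [])
by_country = Counter(s.get("country", "Unknown") for s in strikes)

for country, count in by_country.most_common():
    print(f"{country}: {count} reported strikes")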

12:15 p.m., 05/31/2013

U.N.: Don't Build Killer Robots!

Christof Heyns, the United Nations' special rapporteur on extrajudicial, summary, or arbitrary executions and a noted critic of military drone programs, has made an unusual request to the world's militaries: Don't build killer robots.

In a meeting with the Human Rights Council in Geneva, Heyns called for a special conference on autonomous, independent robots--whether on land, at sea, or in the air--that possess the ability to deliberately select and kill targets without human supervision. The United States, Great Britain, Israel, and South Korea are all working on algorithms and autonomous robots that are widely considered predecessors to actual killer robot prototypes.

"My concern is that we may find ourselves on the other side of a line, and then it is very difficult to go back," Heyns told Nick Cumming-Bruce of the New York Times. These systems are commonly referred to as "lethal autonomous robotics" in academic and military literature.

1:30 p.m., 05/29/2013

Texas Banning (Most) UAV Use

Texas is gearing up to become the first state in the nation to ban most recreational and commercial UAV use. A new bill (PDF) would ban the private use of drones that photograph individuals or private property "with the intent to conduct surveillance." Most popular retail UAVs, such as the Parrot AR.Drone, include camera options.

According to Ars Technica's Joe Mullin, who is tracking the story, the restrictive Texan bill passed the state's legislature and is now awaiting signing by Gov. Rick Perry. The bill gives police the authority to fine drone owners $500 if their quadrocopters are used for photographing someone else's property; the fine can be waived if the photo or video is destroyed.

A number of loopholes, however, exist in the Texas UAV bill--and these indicate where lobbyists have been spending quality time with state lawmakers. Exemptions exist for images made "in connection with oil pipeline safety and rig protection," aerial photographs by real estate agents of property for sale, and use by law enforcement. An additional provision in the bill gives wide latitude for surveillance drone use within 25 miles of the U.S.-Mexico border.

[Image: Flickr user Quadrocopter]

U.S. Drone Strikes In Pakistan Taper Off

In a long New York Times piece by Declan Walsh and Ismail Khan on the drone assassination of a Pakistani Taliban leader on Wednesday, May 29, the reporters buried something interesting deep in the story: The number of fatal U.S. drone attacks in Pakistan has sharply declined in 2013.

According to the Bureau of Investigative Journalism, a left-leaning London group that monitors U.S. drone strikes, there have been only 13 drone attacks within Pakistan in 2013. By comparison, the CIA has carried out about 360 drone strikes within Pakistan since 2004.

American UAV attacks on terrorist targets inside Pakistan have been one of the major issues of Pakistan's elections, which (coincidentally or not) just ended.

[Image: Flickr user Swamibu]

12:15 p.m., 05/29/2013

Why Military Drones Use Old Software

The Navy's MQ-4C drone, a sophisticated unmanned vehicle that can spend 24 hours in the air, runs on Windows XP. But that will soon change: The MQ-4C, which is still in testing, is switching to... Windows 7, sUAS News's Gary Mortimer reports. Changing the operating system on the 47-foot-long unmanned aircraft will cost approximately $15.3 million, with the work being done by Northrop Grumman.

So why is one of the world's most sophisticated aircraft running an operating system that's currently being phased out of the private sector--and using it to replace even older software? The answer has to do with the extended nature of military contracts and the intricate bureaucratic relationships between the Pentagon and contractors. Because contracts and proposals are fulfilled over long periods of time, older versions of operating systems tend to remain in government and military use for years longer than they do in the private sector.

However, that is likely for the best--a military UAV running a newly released, still somewhat buggy operating system isn't the most reassuring idea.

11:30 a.m., 05/24/2013

Mapping American Drone Strikes

There's a reason maps and infographics are so popular in journalism--they make complicated, macro-level topics easy to understand. One such topic is America's sprawling drone warfare program. Writing over at Co.Exist, Zak Stone analyzed a map of American drone strikes assembled by Bloomberg Businessweek, which describes the map as the first-ever "comprehensive compilation of all known lethal U.S. drone attacks."

To date, more than 3,000 deaths from the U.S. drone program worldwide are on the public record. This count does not include some attacks conducted in remote areas where there were no deaths. It is important to note that the casualty count includes both individuals targeted by the United States and civilians with the bad luck to be in the area.

The prevalence of drone warfare in 21st-century American military thinking, along with the unclear oversight and parameters of military and intelligence UAV policy, is creating a tangle of legal questions that legislators are trying to unravel.

6:15 p.m., 05/23/2013

Obama's Drone Policy Speech

In a historic sea change, President Barack Obama redefined the United States' military drone policy and set new parameters for the use of unmanned armed aircraft abroad. Bluntly put, limits were placed on the country's sprawling drone warfare program. Responsibility for most drone attacks will shift from the CIA to the Pentagon, where a somewhat more public level of checks and balances will change the trajectory and timbre of America's drone warfare efforts.

However, one thing was surprising about Obama's remarks: For all of the CIA's massive role in American drone warfare, President Obama did not mention the intelligence agency at all. New York Times reporter Mark Mazzetti, whose The Way of the Knife is the best book yet written on the covert drone program, succinctly summed it up:

It is telling that the President did not mention the C.I.A. at all. It seems quite certain that past operations in Pakistan, Yemen and elsewhere are not going to be declassified anytime soon.

Also, moving operations from the C.I.A. to the Pentagon does not automatically mean that the strikes will be publicly discussed. The Pentagon is carrying out a secret drone program in Yemen right now, and it is very difficult to get information about those operations.

In his speech, President Obama also floated the possibility of a "Drone Court" that would oversee legal questions related to America's military drone program in a secretive, Star Chamber-like manner. The text of the President's speech is available here.

[Image: Wikimedia user Calips]

Leet Cheat Sheet: What To Read On Tuesday, June 4, 2013


Who Is The Tesla Motors Of The Media Industry?

Some suggest that media is going the way of the American automobile. Matthew Ingram explains who's on cruise control, and who's bucking the motor trend.


Finding Good Ideas Through The "McDonald's Theory"

Creative block? Try Jason Jones's own intellectual Drano: Terrible Suggestions.


Why Are Developers Such Cheap Bastards?

Developers notoriously reject paying for necessary technology. In fact, many of them waste weeks writing their own, bug-riddled programs. But they will pay for services, like the cloud. What's the deal?


The Banality Of "Don't Be Evil"

Beneath Google's do-gooder facade lies something more akin to a Heart of Darkness. The tech giant got into bed with Washington, and now they're working together to implement the West's next-generation, imperialist status quo. But don't look, they're watching.


The Straight Dope On United States vs. Apple

The publishing houses have all reached settlements, but Apple's still on the hook. Here's a look at the core issues driving the government's case.


Everything You Know About Kickstarter Is Wrong

The crowd-funding site has never really been about technology, but new requirements make it even harder to raise money for gadgets. Artists aside, it's time to look elsewhere for cash.


A Real Plan To Fix Windows 8

Microsoft's "integrated" operating system never worked well for tablets or PCs. How InfoWorld aims to dissolve this unholy union and salvage what should be a healthy, digital relationship.


Keep reading to see curated long reads from previous days' news.


Why The Hell Does Clear For iOS Use iCloud Sync?

Milen explains why Clear and iCloud make natural bedfellows, and how they fell in with each other in the first place.


Here's What's Missing From iOS Now

FanGirls compiled a miscellaneous iOS wish list for all the good girls (and boys) to see. From Wi-Fi and Bluetooth to file systems and bugs, here are eight reasonable expectations for the future of iOS.


Startups, Growth, and the Rule of 72

David Lee uses Paul Graham's essay "Startup=Growth" as a jumping-off point to explain the metrics of growth. And don't worry if you've lost your mathematical touch--he has too.
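(A quick gloss, in case the rule is unfamiliar: the Rule of 72 says doubling time ≈ 72 / growth rate, with the rate in percent per period. A startup growing 7% per week, for example, doubles in about 72 / 7 ≈ 10 weeks.)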


“Starbucks Of Weed,” Brought To You By An Ex-Microsoft Executive

Andy Cush explains how Diego Pellicer plans to become America's first real marijuana chain. They're looking for $10 million in investments to expand into three new states. They must be high . . .


Super-Cheap 3-D Printer Could Ship This Year

Pirate 3D is bringing the revolution to your doorstep, and for a heck of a lot cheaper than their competitors. Their goal? Get these things out to kids and see what prints.


A Story About The Early Days Of Medium

How do you create Medium and change publishing forever? By first gaining an audience with the man behind Twitter, duh. And a couple of other Obvious ones . . .


Why Google Is Saying Adios To Some Of Its Most Ardent App Developers

Google is laying off its app developers in Argentina on account of a logistical banking nightmare. Really, it's just paperwork. In a related story, interest in Google's Internship remains underwhelming.


This Guy Screencaps Videos Of Malware At Work

Daniel White infects old hardware with contemporary viruses for educational purposes. But don't worry, he's not contagious.


The Rise Of Amateurs Capturing Events

You've met Big Brother, now meet "Little Brother." How the same technological developments advancing institutional surveillance are ushering in a new era of civilian watchdogs.


Three Mistakes Web Designers Make Over And Over Again

Doomed to repeat ourselves? Not so fast. Nathan Kontny shares a short list of things he thinks designers should avoid.


Not So Anonymous: Bitcoin Exchange Mt. Gox Tightens Identity Requirement

Can we see some identification? Mt. Gox announces new verification procedures in response to a recent money-laundering investigation into one of its competitors. And they've got their own legal problems, too.


The Wall Street Journal Plans A Social Network

The Wall Street Journal is working to connect everyone invested in the Dow Jones on a more private, financial network with chat. Suddenly, Bloomberg's got some competition.


Tumblr Adds Sponsored Posts, And The Grumbles Begin

Users are responding poorly to Yahoo adding advertising to Tumblr. Can sponsored stories save the day?


Sci-Fi Short Story, Written As A Twitter Bug Report

Anonymous man's @timebot tweets from the future, past, and present at once. But what can we learn, given Twitter's rate limits? The end is nigh.


Thoughts On Source Code Typography

Developers read code more than anyone. David Bryant Copeland argues for aesthetics in addition to content: the importance of typography and readability in source code.


Marco Arment Sells "The Magazine" To Its Editor

Glenn Fleishman takes the helm of The Magazine as early as Saturday. It's business as usual, but with podcasts.


Mary Meeker’s Internet Trends Report Is 117 Slides Long

The Kleiner Perkins Caufield & Byers partner will release her findings at the upcoming D11 conference. But you get a sneak peek...


Apple’s Block And Tackle Marketing Strategy

Tim Cook explained yesterday why there are a million different iPods but only one iPhone, and stressed the importance of consumers' desires and needs. But will things be different after WWDC?


Why Almost Everyone Gets It Wrong About BYOD

To Brian Katz, BYOD is "about ownership--nothing more and nothing less." Why letting people use their own devices increases the likelihood that they'll use them productively.


Remote Cameras Are Being Used To Enforce Hospital Hand-Washing

Ever wonder if your doctors' hands were clean? So did North Shore University Hospital. New technology sends live video of hospital employees' hand-washing habits . . . to India.


8 Ways To Target Readership For Your Blog

Blog functionality has increased considerably in the last 10 years, but has that overcomplicated things? Here's a list from Matt Gemmell (aka the Irate Scotsman) of ways to simplify. Your readers will thank you for it.


Pricing Your App In Three Tiers: The Challenges Of Channel Conflict

Cost- and value-based pricing may at first appear to be at odds, but they exist for different kinds of consumers. Tomasz Tunguz explains some solutions to justify your pricing model and maximize your profits.


How Google Is Building Internet In The Sky

Google is already using blimp and satellite technology to bring the Internet to the farthest reaches of the planet. What they really want is television's white space, but they've got a fight on their hands.


You Wrote Something Great. Now Where To Post?

The writer's landscape has changed. But with so many new options comes confusion. How do authors with something to say decide where, and to whom, they say it?


Yahoo’s Reinvention: Not Your Grandfather's Search Engine

Yahoo CEO Marissa Mayer is bucking the minimalist trends she once championed at Google. Why the Internet portal may be making a comeback.


What Works On Twitter: How To Grow Your Following

Researchers at Georgia Tech are working to shed light on one of the Internet's unsolved mysteries. Here are 14 statistically significant methods you can use to increase your presence on Twitter.


Financial Times Invents A Twitter Clone For News

With the launch of fastFT, the Financial Times hopes to keep its readers closer than ever by providing a 100-250-word news service. Eight journalists are now tasked with breaking the most important financial stories from all over the world.

[Image: Flickr user Uggboy]

Foursquare's Cofounder Is Going Public (With His Personal API)


On Sunday, Naveen Selvadurai woke up at 8:30. He exercised on the West Side Highway running path and racked up 5,130 Nike Fuel points over the course of the day. Then he shopped at the CB2 furniture store and the Apple Store in SoHo. His weight was 149 pounds, up two pounds from Saturday. (Hope he didn't overindulge when he got a late dinner at Peix Bar de Mariscos in NoLIta.)

Selvadurai's accounts haven't been hacked, and he isn't being tailed by the FBI. All of this data was made publicly available by Selvadurai himself. You'd expect a certain amount of oversharing from the cofounder of Foursquare, but he aspires to an even greater level of transparency: On Friday, he unveiled a "personal API" that shares data gleaned from Foursquare, all of his various fitness tracking devices, and even his bathroom scale.

"I’ve long been a follower of the quantified self," he writes. He's used FitBit, Jawbone up, Nike FuelBand, and Withings, which were all useful for deriving insights about his health. "It's shown me that seasons have a lot to do with weight makeup and that I seem to have a something like a 10-pound range in how much I weigh," he tells Fast Company.

Making all of this health data public seems like a natural next step to him. "We share our tweets and checkins and photos and music habits to a wide audience, so why not other types of behavior and habits as well?" he writes. But these fitness gadgets are all competing with each other, and they don't play well together. Selvadurai longed for something akin to the open APIs (application programming interfaces) that allow other sites to easily pull data from Twitter and Facebook. He claims his personal API yields "a dataset that is a single repository and view of my body as opposed to various silos of data scattered across different services and devices."

Anyone who wants TMI about Selvadurai's personal life just needs to visit api.naveen.com. Provide a Twitter handle, and you can get an access token that lets you pull the raw data on Selvadurai's sleep schedule, weight, number of steps taken, and locations visited. For example, his weigh-in from yesterday looks like this:

{"date": "2013-06-02T13:14:13", "id": "51ac3827d8949f001182cba3", "value": 67.95}

Not pretty to look at, but it'd be relatively simple for anyone with some coding knowledge to create a web page or app that presents the data in any way they like. And that's the point. "I’ve been curious what one might be able to do with such a resource," he writes. "Will any of it be useful for research? Might one create apps on top of me? Or perhaps draw insights that I haven’t yet been able to see myself?"
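For developers, that step is short. Here's a minimal sketch, assuming a hypothetical weights endpoint at api.naveen.com and the token scheme described above; only the JSON shape ({"date", "id", "value"}) comes from the record shown, and the URL and parameter names are placeholders rather than Selvadurai's documented API.

import json
import urllib.request
from datetime import datetime

API_URL = "http://api.naveen.com/weights"  # hypothetical endpoint
TOKEN = "YOUR_ACCESS_TOKEN"  # granted after you provide a Twitter handle

# Pull the list of weigh-in records (assumed to be a JSON array).
with urllib.request.urlopen(f"{API_URL}?token={TOKEN}") as response:
    records = json.load(response)

for record in records:
    when = datetime.strptime(record["date"], "%Y-%m-%dT%H:%M:%S")
    pounds = record["value"] * 2.20462  # "value" appears to be kilograms
    print(f"{when:%B %d, %Y}: {pounds:.1f} lb")  # 67.95 kg prints as ~149.8 lb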

Selvadurai says that he isn't exactly sure what people will do with it. His experience at Foursquare showed him that people make use of public data in surprising ways. (Like the time someone turned his location info into a visualization of a round-the-world trip he took.) Maybe someone will take his personal API and produce a chart proving that his weight spikes every time he eats at Shake Shack (which he last visited on Friday evening). "My hope is that I'll learn something about my thinking or my data that I hadn't seen before," he says.

There's some speculation that Selvadurai, flush with cash from his Foursquare departure, may be thinking about his next startup. Is his personal API a prelude to a company that unifies and quantifies your health data?

"I just put it out there to start a conversation," he insists.

Google Glass: Best Of Both Worlds, At The Same Time, Always


To Luddites, Google Glass looks pretty much like glasses without lenses, but with a screen-thing over one eye that helps you surf the web inside your head. Lens-free glasses are cool because they make you look like a 2011 barista or a 2012 NBA star. But Google Glass is cooler still.

Soon, while I’m doing what I normally do in my super full life--hot air balloon rides, ice sculpting, trapeze work--Google Glass will allow me to do more things I love at the same time, like check flight statuses, discreetly review locker room footage, and host video conferences from the back of my best friend’s Vespa. And I’ll do it all with equal parts cyber-passion, robo-efficiency, and modern style.

Style is particularly important to me because I’m always . . . Alone with my things, and they demand a certain level of reverence. Luckily, Glass comes in five colors: Charcoal, Shale, Sky, Tangerine, and Cotton. Some may say Cotton is just white, but those are the same people who order a club soda in a restaurant and have no idea when the waiter brings them a seltzer.

I plan on buying Glass in all five colors, so I can decide which one to wear depending on my mood and the occasion. Shale if I’m feeling mysterious, and Tangerine, for like, when I go fruit picking with my family of avatars.

It's a relief to know that our pilots will be able to keep one eye on their grandmas' Facebook feeds while landing in storms. And trapped miners, in their dark caves, will never be bored again--for about four hours. This product gives us all the excitement of the digital world on top of the usually boring physical world--and better yet, it's always there. That liberates us from our newest prison, the slight inconvenience. It puts the appetizer, entrée, and dessert all on the same plate, just like we didn't know we always wanted.

Glass unshackles our hands from our smartphones, so we can grab life by the handlebars, while still keeping our brain one foot in an entirely other mental sphere. It frees our doughy bodies from these tyrannical chairs, which have kept us away from so many extreme sports, esoteric hobbies, and tender moments with loved ones that wouldn’t exist if unrecorded--all, while liberating us to accomplish mundane computer-related tasks simultaneously, like checking the weather in irrelevant places.

Google is omnipotent, omnipresent, and omni-fashion-forward. I'm proud to have outsourced my work to my array of gadgets and things, which then source it right back to me. Finally, we are one evolutionary step closer to reaching our full potential: Constantly sort-of working.

Brendan Flaherty is an L.A.-based writer with the tech know-how of a silverback. He has written for trade publications, marketing companies, and McSweeney's--and he is currently working on a novel that blends all that into a slurry of despair. Look for forthcoming pieces in The Los Angeles Review and the New York Observer. Follow him @BrendanFlaherty.

[Image: Flickr user Paul Hart]
