
Why They Won: We Break Down The Finalists From Our Retail Accelerator


The Co.Labs and Target Retail Accelerator challenged entrants to design and build an app that would extend the Target customer experience into new areas, leveraging mobile software--native or web-based--to produce new, pro-social effects in their community, family, school, or social network. Our celebrity judges have selected the finalists, each of whom received $10,000 in seed money and a Target mentor for the next stage: competing for the $75,000 buyout grand prize. Below, we break down each finalist. Keep your fingers crossed for them as they enter final judging this week at Target HQ in Minneapolis; we'll announce the grand prize winner on June 27.


Lookbook, By Team Citizen Made

How do you easily curate products in a flexible, dynamic way, across multiple use cases? That's the problem Lookbook seeks to solve. Sometimes a shopping list is too rigid; people collect lists of products for all kinds of reasons. Whether it's for a wishlist, a color palette, gift ideas, design inspiration, or any other impulse, Target shoppers don't have an easy way of making loose collections of items that they can save and share in sets. Such is the problem this team chose to tackle. Team Citizen Made is comprised of Bryn McCoy, Rachel Brooks, and Karen Lee.


Divvy, By Team Pilot

Why would Target shoppers want Divvy? This social shopping list attacks a nuanced problem: how to make group shopping with an app easier than without one. Real-life obstacles to group shopping--splitting the bill, getting a copy of the receipt, tracking transaction history, earning rewards points, and so on--are the territory of this beautifully designed app. It also solves what we'll call the "Mint problem": Personal finance apps like Mint receive only basic information about each purchase (outlet and price) from the credit card processor, so categorizing purchases becomes an exercise in manual data entry. Such is the problem this team chose to tackle. Team Pilot is comprised of James Skidmore, Chris Kief, Christopher Reardon, Eric Kopicki, Steve White, Juuso Myllyrinne, and Charlton Roberts.


Target Adventure, By Team Ingenious

There is scarcely a more acute problem for parents than keeping their kids entertained while they go about their daily lives--namely, in public places, where boredom sets in and tantrums soon follow. There's rarely anything intrinsically fun about shopping when you're single-digit age, and most of the fun stuff in the store isn't yours, so you're not really allowed to touch it. As a result, shopping with a kid can prove taxing, even torturous. Such is the problem this team chose to tackle. Team Ingenious is comprised of Florence Ng, Sheena Yang, and Jesse Pinuelas.


A/B, By David Chu

The problem that this app, A/B, solves is a subtle one: How do you quickly get friends' opinions on your purchases, and aggregate their feedback in some way that helps you make an informed buying decision? And more importantly, how do you delimit your friends' feedback to only the items you're considering? Collecting opinions from friends is one of the most valuable ways to make informed decisions, but limiting the scope of the conversation can be difficult. Let's say you email a friend asking them for an opinion on a new bike; you're likely to get a reply that contains not just an opinion on the bike ("that bike is great, but...") but also a bunch of other second-guesses and suggestions--have you seen the new public transit line that just opened? Have you considered a cruiser? Do you really need that many speeds? How about a fixed gear? The conversation needs limits. Such is the problem that David Chu chose to tackle.


TargetShare, By Team TargetShare

There were several submissions that solved problems around shopping lists, but TargetShare took a unique tack by focusing on the problems around food shopping--namely, that food shopping involves too much friction to digitize easily. First, you have to decide what you'll make; itemize the recipes for that day or week; produce a shopping list; and then (presumably) enter it into some kind of app. TargetShare removes all these steps, adding zero overhead to the food shopping process and saving a bunch of data entry along the way. Such is the problem this team chose to tackle. Team TargetShare is comprised of Jinal Dalal, Ashutosh Pardeshi, and Vallbhi Parikh.


Target Cares, By Team HYS3

Target does a lot of philanthropy work, but according to these entrants, customers lack a way to contribute, and their mobile device is the natural place to bridge the gap between shoppers and social initiatives. Such is the problem this team chose to tackle. Team HYS3 is comprised of Siyuan Tu, Sangmi Park, Haihong Wang, Shelley Leung, and Yuan Gu.


Matisse, By Team Matisse

This project starts with a little-known data point: Students who study art are four times as likely to be recognized for academic achievement, according to the Education Fund. Yet when schools cut budgets, art and music programs are always the first on the chopping block. With little recourse beyond petitioning local government, there is little an individual family can do to help sustain art education in their community--a solvable problem, considering that the major source of overhead for art programs is supplies, which could conceivably be sourced via donations if channels existed. Such is the problem this team chose to tackle. Team Matisse is comprised of Jed Wood, Antonio Garcia, and Maris Grossman.


Stay tuned in to the Co.Labs and Target Retail Accelerator to find out who wins the $75,000 grand prize on June 27. --The Editors

[Image: Flickr user Daniel Oines]


Breaking Down The iBooks Suit: Was Steve Jobs a Criminal Colluder?


Two Sides to the Charges

Apple is accused of conspiring with publishers to raise e-book prices: "a collective effort to destroy Amazon’s model of selling e-books for a uniform $9.99," according to Justice Department lawyers.

At the time of the alleged Apple conspiracy, many of us were much more concerned about Amazon's increasing monopoly power in book retailing. This market dominance gave Amazon the leverage to set e-book prices at $9.99.

Amazon's apparent abuse of power or Apple's alleged price-fixing conspiracy--which side in this seeming conundrum has the moral upper hand?

From the legal point of view, there are no formal charges against Amazon, so it's not an issue. Yet this Walrus remembers feeling the weight of Amazon pressuring publishers to play their game. Let me give you an anecdotal example.

The large, international publisher I was working for received a new agreement from Amazon with terms more favorable to Amazon. There were no contract negotiations. From what I was told, Amazon would sooner drop our entire catalog of thousands of books than negotiate terms. Amazon had the upper hand, because they pretty much knew that they were our largest single source of revenue.

We heard from friends at other publishers who were being treated the same way. We were all being forced to accept contracts that favored Amazon. But this was not collusion, not a conspiracy--far from it! It felt like an abuse of monopoly power.

The Case Against Apple


If Apple had said to each of the six publishing behemoths, "do it our way or go it alone," that would have been similar to the Amazon tactics I observed. But it seems that Apple was creating a group to act together. Sounds like collusion, but were the intentions of this group illegal?

One could make the point that Apple would do anything to break Amazon's near monopoly in e-book retailing, and it's hard to discount this possibility entirely. But motivation and business objectives are not the same, and Apple's objectives had more to do with changing the retail model for e-book selling.

Think about the differences between Amazon's model for selling goods online and Apple's iTunes model, originally created to sell tunes for 99¢ each. Amazon created a virtual marketplace for physical goods. Apple's virtual marketplace exists primarily for making file transfers across the Internet.

As much as anything, it's this fundamentally different view of the retailing world that gave rise to the federal lawsuit.


Two Ways to Sell Books Online

There are two basic business models for retail book sales, the Discount Model and the Agency Model. One needs to understand how these differ and the historical precedents of their existence in order to judge Apple's motives. Here goes!

The Discount Model is the standard for retail trade book sales. It's simple:

Every book has a list price and it's usually printed right on the cover by the publisher. The publisher sells box-loads of books to distributors at a discount, usually around 50%. So the wholesale price for a $20 book is $10.

Basically, the distributor pays the publisher the wholesale price and is free to sell the books for any price they like, from full-markup to loss-leader.

The Agency Model is so called because the retailer acts as a selling agent for the publisher, which is difficult to manage with physical books but works well for web-based transactions. This is the model of Apple's iTunes Store, and it works like this:

The publisher (or record company or video company) sets the price of the item, which is posted by the seller. The seller collects all money for transactions and tracks it. Now here's the big difference. The seller pays the publisher a percentage of every sale, pocketing what remains.

In the case of the iTunes Store, Apple set a standard 70/30 split, with the larger share paid to the publisher. In earlier days, Apple's 30% went almost entirely to cover overhead.

The iPod's success turned iTunes into a modest profit center and gave Apple the kind of market dominance in music that Amazon now enjoys with book and e-book selling.


Expanding Empires & Border Wars

You can almost feel the inevitability of conflict between the multiple colossuses of the digital world. It's like reading about the period of European history immediately preceding the outbreak of WWI. (How much more civilized to fight with lawyers instead of soldiers!)

Coverage of the Apple price-fixing case, as it's popularly called, makes it seem as though Apple is the aggressor bent on hegemony, while Amazon, the protector of lower, consumer-friendly prices, is the innocent victim.

In fact, the government's suit has nothing to do with Amazon, at least not directly. The U.S. Justice Department is protecting us, the consumers, from an attempt by monster conspirators to manipulate the market, raise prices, and increase profits.


Boiling Down the Essentials

For the case itself, it doesn't matter that Amazon is behaving like a friend of the court, nor that Apple's six alleged co-conspirators have settled with the Justice Department. Apple CEO Tim Cook says there's nothing to settle, because Apple has done nothing wrong.

Neither does it matter that data on e-book pricing is pretty much inconclusive. It's nearly impossible to answer even the simplest questions, like: Are average e-book prices going up or down, and if so, why? More importantly, I doubt we'll ever be able to say whether consumers will be better off if Apple loses this case.

What matters is Apple's intent. Was the motivation greed and increased profits, and were the publishers in collusion to raise prices together towards this end? A lot of people believe it looks this way.

The problem I have with this conclusion is that Apple's behavior is consistent with the way iTunes has been set up and run since its inception. Also, I much prefer the Agency Model supported by iTunes. As a business model, it's much cleaner and more straightforward than the Discount Model, which is more easily subverted if one party is overwhelmingly more powerful in any given transaction.

For consumers, it's hard to say which model favors lower prices--but the lowest price is not always the best or fairest price! Think software pricing, but that's a topic for another day.

Resources:
The New York Times Bits Blog is following the story closely.

[Stack Of Books: R.Martens via Shutterstock]

Human Editors Are Returning To Music


Looking back, it seems Pandora may have been the tipping point--the proof of concept that swayed people and subliminally convinced them that curation could be done by robots and algorithms just as well as by humans. The service did surprisingly well at picking tracks similar to what you suggested it start with, without being too predictable. With machines credited with taking more and more jobs, it seemed inevitable that they would also pick our music, TV, movies, and reading content on their own. A clever algorithm, coded to be fluid and ever-evolving, should be the best recommendation solution, right? Current trends, however, suggest that a constant human element is needed to make media discoverable across all types of consumer content.

The biggest music store on the planet, iTunes, uses both automated means and human power to suggest new content to users. In the form of "Genius," the store provides suggestions based on your past listening habits, among other factors. While the results are fine--usually recommending things people do, or would, like--they don't provide the same wow factor an emotionally involved human can: the feeling you get from a person saying, “I’ve enjoyed this music, I think you will, too.” That may be why Genius results sit halfway down the page and are often overlooked. The highlighted and featured albums in the genre sections--which are handpicked by Apple employees--do often give that wow factor; an iTunes feature even helped break the notable band The Boxer Rebellion.

Also gaining more attention as a music discovery destination, NPR handpicks the features and selections shown and heard across its blogs and radio shows. Amy Schriefer, who’s in charge of "First Listen" and live events, was quoted by the Wall Street Journal saying, "We want to create the feeling you used to get in record stores when somebody would hand you something and say, 'You have to hear this.'" Similarly, the streaming music service Rdio uses social features heavily to introduce users to new music through the same "record store feeling." When you glance at the sidebar on Rdio.com and see one of your friends listening to something you haven’t heard of before, it appeals to your curiosity more than a computer-generated recommendation would. In this way, music is following written content, which has come to thrive more and more on social sharing and distribution and less on robot aggregators and portals.

What Music Can Learn From Publishing And Film

Communities built around high-quality writing, like Medium, rely heavily on the human factor to sift through mounds of articles and essays. The site tackles the problem of suggesting what visitors should read in several different ways. Readers don’t just "Like" articles--they recommend them. Those recommendations get tallied on the writer’s personal analytics page, which also tracks whether a piece of writing was fully read or just skimmed. The process in this case may be automated, but the emphasis is placed on people actually spending time engaging with the whole piece of writing. While clever software helps make the process easier, ultimately Medium has decided that high-quality curated--even commissioned--writing is important enough to hire a director of content, and it is hiring more in-house editors.

Curating the news seems like a job for a computer rather than a human, if only to cut costs in a market that has been devoured by new media. Yet Circa, a fairly new iOS app, actually uses paid editors to pick out news, summarize it, and deliver it through the app. The company thinks human-selected and summarized content makes sense for a mobile generation that reads in small chunks of time between other tasks, rather than wading through too many stories before actually getting to read. Considering the app's recent growth and the hire of Reuters social media editor Anthony De Rosa as Circa's editor-in-chief, it would appear human involvement is key to ventures with a heavy curation aspect.

Bucking the trend, Netflix has famously relied on its computers to spit out suggestions for those holding the remote, desperately looking for the next thing to watch. Not many other companies, after all, are in a position to offer up a million dollars in prize money for writing the code that the service uses to provide movie suggestions. At this scale, though, a company arguably has no option but to employ computers, rather than humans, to do the recommending. Fanhattan thinks otherwise and is trying to sort through Netflix--as well as other video content services--and provide curated choices, getting rid of the "what to watch" problem.

The answer, then, is a careful hybrid. Even a few years ago, it might have seemed like we were headed towards a world where the most sophisticated recommendation engines would be picking the majority of what people watched, read, and listened to. Today, we aren’t without the clever and sophisticated software--we just won’t rely on it the way we thought we would. It seems the world still needs editors after all.


Tyler Hayes contributes to Hypebot.com and does interviews for NoiseTrade's blog. He writes about music and the impact tech is having on that industry; his work can often be found on his personal blog, Liisten.com. Tyler also runs the site Next Big Thing, which ranks user-submitted links for an interesting hub of music-related content.

[Image: Flickr user D. Sinclair Terrasidius]

Finally, Three Ways To Automate iOS App Testing


Agile development has long been all the rage; indeed, in most modern development shops the great agile methodologies are old hat. If you come from a software background like Ruby on Rails, Python, or certain Java niches, you may--until recently--have experienced a small jolt of culture shock when encountering the deep obstacles that agile development practices faced on the iOS platform.

The lengthy iOS app approval and manual user update process runs against the grain of frequent delivery. IDE support for key agile coding practices like refactoring is soft. And more glaring than anything else: The dearth of great tools and established practices for automated testing has made the agile ideal of true test-driven development (TDD) hard to attain.

All this is changing, though, and rapidly. New tool sets are making it easier and easier to engage in genuine agile development on iOS. In particular, true test-driven development--which was formerly a hard, upstream slog on iOS--is becoming increasingly attainable.

This article outlines our experience using TDD to build HowAboutWe Dating for iPad and iPhone. We'll describe the stack of tools we use for testing and continuous integration and how we use them to speed the delivery of quality software.

When we made automated tests a requirement for completing a feature or bug-fix ticket, our QA churn dropped radically; our crash instances plummeted; developer confidence improved because we saw the risk of making changes go down; and we could better predict our release readiness without emergency feature cuts.

Our most important tools are Kiwi for unit testing (what Xcode calls "logic tests") our model and controller logic; KIF for integration testing of user-facing behavior; and CruiseControl.rb for continuous integration to keep us honest. We also have some key practices that guide our use of these tools.

Tool Number One: Kiwi for Unit Testing

If you've ever used RSpec, you're familiar with the likes of:

View on GitHub

Allen Ding's Kiwi is a testing framework for iOS with an RSpec-inspired syntax. It makes slick use of Objective-C blocks and lends itself to readable, contextualized tests.

View on GitHub
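For flavor, here's a minimal sketch of a Kiwi spec--an illustrative example, not code from our app--showing the RSpec resemblance:

    #import "Kiwi.h"

    SPEC_BEGIN(MathSpec)

    describe(@"Math", ^{
        context(@"when adding small numbers", ^{
            it(@"returns the sum", ^{
                // Kiwi wraps scalars in theValue() for its expectations.
                [[theValue(1 + 1) should] equal:theValue(2)];
            });
        });
    });

    SPEC_END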

Kiwi is a very complete framework, with many of the levers and knobs you'd reach for regularly in RSpec, like:

  • Nestable contexts
  • Blocks to call before and after each or all specs in the context
  • A rich set of expectations
  • Mocks and stubs
  • Asynchronous testing

In addition, Kiwi is built on top of OCUnit, which means that it integrates seamlessly with Xcode logic tests and that you can reuse your old OCUnit tests, if you want to do a whole-hog migration to Kiwi. We prefer Kiwi to raw OCUnit, mainly for the elegant syntax--the nested blocks are easy to scan, and the specs are about as smooth to write as you could hope for in Objective-C.
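To illustrate a few of those levers--with invented class and protocol names, purely as a sketch--a spec fragment using a mock and the asynchronous matcher might look like:

    it(@"notifies its delegate when loading finishes", ^{
        // Mocks and stubs: a mock that must receive a given message.
        // LoaderDelegate and loader are hypothetical names for this sketch.
        id delegateMock = [KWMock mockForProtocol:@protocol(LoaderDelegate)];
        [[delegateMock should] receive:@selector(loaderDidFinish)];

        // Asynchronous testing: shouldEventually polls until the
        // expectation passes or a timeout expires.
        __block BOOL finished = NO;
        [loader loadWithCompletion:^{
            finished = YES;
            [delegateMock loaderDidFinish];
        }];
        [[expectFutureValue(theValue(finished)) shouldEventually] beYes];
    });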

How We Use Kiwi

With our models (most of which are subclassed from NSManagedObject), we test all the code not generated for us. This includes parsing JSON from our API into Objective-C instances; all model-level internal logic, such as converting a user's gender and orientation into a complementary set of genders and orientations to search for; and important inter-model interactions, as between messages and message threads.

Helpers and categories are another place where Kiwi and TDD shine. We've test-driven a set of CGRect helper functions that aid us in smart photo cropping; a photo cache; and a category of time- and sanity-saving methods on NSLayoutConstraint.

We've also been driving toward thinning out our view controllers, and a lot of that involves factoring complex code into separate, single-responsibility objects. An example: In our app's messaging module, we offer an Inbox, a Sent messages folder, and an Archive folder. The three boxes have different behaviors (e.g., you can only archive a thread from the Inbox), and an earlier revision of the messaging view controller had a lot of if-inbox-then-do-X-else-if-sent-do-Y-else, plus a lot of code to make sure the correct message folder was loaded and visible, that Sent and Inbox were properly synced but sorted slightly differently, different empty state strings were displayed for each folder, etc.

Fat controllers and repeated if-else chains are both code smells, and we used Kiwi tests to drive out a single solution to both of them: a separate MessageStore object that handled the juggling of messages and threads. The messaging view controller tells the MessageStore when the user switches modes and queries the MessageStore for the contents of the current folder, appropriate loading and empty strings, and for yes/no answers to behavioral questions like, "Should I expose an Archive button?" The controller is slimmed and the chained if-else-if statements are replaced by data structures that will be easily extensible if we decide to add a fourth folder.

Kiwi specs were integral to building the MessageStore with minimalism and correctness. To give you a taste, here are two specs that cover message-archiving behavior:

View on GitHub
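The gist is linked rather than inlined; a sketch of roughly what such a spec looks like--the exact method and mode names below are illustrative, not necessarily those in the linked code:

    describe(@"shouldExposeArchiving", ^{
        it(@"raises if the store is uninitialized", ^{
            MessageStore *store = [[MessageStore alloc] init];
            [[theBlock(^{ [store shouldExposeArchiving]; }) should] raise];
        });

        it(@"answers YES in Inbox mode and NO in Sent mode", ^{
            MessageStore *store = [MessageStore storeWithMode:MessageStoreModeInbox];
            [[theValue([store shouldExposeArchiving]) should] beYes];

            [store switchToMode:MessageStoreModeSent];
            [[theValue([store shouldExposeArchiving]) should] beNo];
        });
    });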

This first spec tells us that if the MessageStore is uninitialized, it should throw an exception when asked whether archiving behavior should be exposed; otherwise, it should give a boolean answer appropriate to its current mode. If the user requests that a thread be archived, the MessageStore handles that, as defined in the second spec:

View on GitHub
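Again, the gist is linked rather than shown; a sketch consistent with the description below (method names are illustrative):

    describe(@"archiveObjectAtIndex:toArchiveIndex:", ^{
        __block MessageStore *store;

        beforeEach(^{
            store = [MessageStore storeWithMode:MessageStoreModeInbox];
            // Dummy NSNumbers stand in for real threads; the store is
            // agnostic about the type of the objects it holds.
            [store loadInbox:@[@1, @2, @3]];
        });

        it(@"moves the object from the Inbox into the archive", ^{
            [store archiveObjectAtIndex:1 toArchiveIndex:0];
            [[[store inboxContents] shouldNot] contain:@2];
            [[[store archiveContents] should] contain:@2];
        });

        it(@"raises for invalid indices", ^{
            [[theBlock(^{ [store archiveObjectAtIndex:99 toArchiveIndex:0]; }) should] raise];
        });

        it(@"raises in modes that disallow archiving", ^{
            [store switchToMode:MessageStoreModeSent];
            [[theBlock(^{ [store archiveObjectAtIndex:0 toArchiveIndex:0]; }) should] raise];
        });
    });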

This spec sets up an Inbox containing message objects (here represented by some dummy NSNumber objects--the MessageStore does not actually care about the type of the objects it is holding) and mimics various requests to pull an object from the Inbox and insert it into a particular place in the archive folder. For modes where the user should not be allowed to archive messages (as defined in the previous spec) or when invalid indices in the Inbox or archive collections are specified, an exception should be thrown; otherwise, the appropriate object should change folders.

The full spec is about 250 LoC, and canonical red-green-refactor TDD drove out an implementation of about 200 LoC. Visible, facile metrics like this scare some people off TDD, because they just see the cost of more code; I see this and know that I've written and tested a well-specified, tight bundle of logic, and I took one of the flakier, harder-to-maintain pieces of our app and broke it into solid, loosely-coupled modules that work reliably. The test-driven MessageStore and the concomitant simplification of the messaging view controller purged a whole class of hard-to-diagnose bugs from our issue tracker. When it comes to stabilizing the most-used parts of your app, 250 lines of straightforward, declarative test code is cheap.

One limitation of Kiwi is that it's not so good for testing UIKit-derived classes or anything that touches them. This is actually a limitation of Xcode logic tests--they don't fire up a UIApplication instance and don't play nicely with UIKit. To test elements of the project that can't be separated from the UI, we use automated integration tests.

Tool Number Two: KIF for Integration Testing

Kiwi helps us keep our lovely abstractions lovely, but what of the user-facing parts of the app? And how do we know that all the pieces work together? For integration tests of user-visible behavior, we use Square's KIF library. It uses the iOS accessibility framework to simulate user interaction with the app.

Testing every facet of the app by automatically driving the app through every possible user action would be insanely costly, and it would rapidly get to the point of diminishing returns. In addition, the fact that the tests run in the simulator by faking user behavior means that the tests run at human-ish speeds, not as fast as the CPU can run through them. Integration testing in the sim requires a number of additional practices and judgment calls to make it sane and valuable.

First, the tests have to be decoupled from the outside world. We've used method swizzling to stub out all our network calls and give back dynamically generated, predictable data to drive the app. To wit:

View on GitHub
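The stubbing code lives in the linked gist; the swizzling pattern itself, sketched with a hypothetical APIClient class, looks something like this:

    #import <objc/runtime.h>

    // APIClient and its methods are hypothetical names for this sketch.
    @implementation APIClient (TestStubs)

    + (void)swizzleNetworkCallsForTesting {
        // Exchange the real network method's implementation for the stub's.
        Method real = class_getInstanceMethod(self, @selector(fetchInboxWithCompletion:));
        Method stub = class_getInstanceMethod(self, @selector(stub_fetchInboxWithCompletion:));
        method_exchangeImplementations(real, stub);
    }

    - (void)stub_fetchInboxWithCompletion:(void (^)(NSDictionary *response))completion {
        // Hand back canned data recorded from a real session; no server involved.
        completion(@{ @"threads": @[ @{ @"id": @1, @"subject": @"Welcome" } ] });
    }

    @end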

Each of those stub methods returns a simulated API response, based on responses recorded in an actual session. This keeps us from having to stand up a web server to test the client app, and puts the inputs to the tests entirely under our control. We frequently have the stubs respond to different inputs by returning different data or exceptions so that we can simulate behaviors like paging data, network failure modes, etc.

Second, the integration tests have to be decoupled from each other. If you run 50 integration tests one after the other and make a change to the fourth test that alters the app's state in a persistent way, you risk breaking the next 46 tests. To mitigate this risk, we bundle the tests into related modules and run steps to log the test user out and clear the database between modules. Where it's important that an intermodule dependency be tested (e.g., a message sent from a user profile should show up in the logged-in user's Sent folder), we write a test for it, but otherwise we try to keep the KIF test scenarios limited to one screen or a small set of related screens, each testing a limited but meaningful set of user behavior.

Third, a lot of judgment needs to be exercised in what gets tested. It is impossible to test every possible user input, but you want to hit all your major error states as well as at least one valid input. It is impossible to test every path through the code, but you want to reasonably simulate the things a user is likely to do and spend a little more effort on the parts of the code that matter most to the user experience.

If you've read about the pros and cons of integration testing, you've heard some version of the issues I've described above (coupling-induced fragility, impossibility of total coverage, etc.) as reasons why integration tests are a bad thing. Certainly, we've found them costlier to write and maintain than unit tests. The way we've applied them, though, has given us far too much value to even consider discarding them: Where we've used unit tests to write quality code, the integration tests have been invaluable in helping us maintain it. They form our "regression firewall," and if the test board is green, then the developers, product managers, and QA all know that none of the big stuff has gone wrong. Bugs still get through, but they tend to be around the edges.

In the rare case something big gets through, we add it to the suite. It happened recently that a release made it into the wild with a 100% reproducible crash when the subscription screen was reached by a certain path. We translated the repro into a set of KIF steps:

View on GitHub
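The repro steps are in the linked gist; in KIF's scenario/step style, such a repro reads along these lines (the screen names and accessibility labels here are invented):

    KIFTestScenario *scenario = [KIFTestScenario scenarioWithDescription:
        @"Reaching the subscription screen from a user profile"];
    // Wait for a known element, interact, then check the resulting state.
    [scenario addStep:[KIFTestStep stepToWaitForViewWithAccessibilityLabel:@"Profile"]];
    [scenario addStep:[KIFTestStep stepToTapViewWithAccessibilityLabel:@"Send Message"]];
    [scenario addStep:[KIFTestStep stepToWaitForViewWithAccessibilityLabel:@"Compose"]];
    [scenario addStep:[KIFTestStep stepToTapViewWithAccessibilityLabel:@"Subscribe"]];
    // In our case, the crash fired on reaching the final screen.
    [scenario addStep:[KIFTestStep stepToWaitForViewWithAccessibilityLabel:@"Subscription Options"]];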

From this short repro, you can get the feel of KIF tests: Check the state of the screen (mostly via the accessibilityLabel and accessibilityValue properties of screen elements), interact with screen elements, check the state, interact, and so on.

Our crash occurred on the last line of the repro above. We added the steps to the suite, ran it, watched it crash; then we fixed the bug, ran the test, watched it pass; then we ran the rest of the upgrade-related test module to make sure we didn't break anything. This is a much longer process than just diving in and fixing the bug as soon as you've diagnosed it, but it boosts confidence among everyone who builds or inspects the app in two ways: They trust that we haven't broken existing functionality (thanks to existing tests), and they trust that the bug being addressed in the new test won't return.

One KIF trick that comes in handy is defining our own one-off steps. Any parameterizable process that gets used more than once gets a factory method in our own category on the KIFTestStep class, but sometimes the code is made more comprehensible when a task that only happens once is defined inline with a block:

View on GitHub
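The gist holds our real example; the pattern, with a made-up draft-checking task, looks like this:

    [scenario addStep:[KIFTestStep stepWithDescription:@"Verify the draft was cleared"
                                        executionBlock:^(KIFTestStep *step, NSError **error) {
        // DraftStore is a hypothetical helper for this sketch.
        BOOL cleared = [[DraftStore sharedStore] isEmpty];
        KIFTestCondition(cleared, error, @"Expected the draft store to be empty");
        return KIFTestStepResultSuccess;
    }]];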

The Cloud Inside the Silver Lining

There are two major downsides to KIF. The first is the syntax--all those addStep: calls are actually building the test suite, not running it, and there's no clean way to set a break point at a particular instance of a step (unless you've defined the step yourself). We tolerate it because KIF is the best thing we've found for this type of testing. We feel it has yet to achieve true maturity, and we've extended it quite a bit for our own purposes, but it largely does what it says it will and has served as one of the pillars of our testing strategy.

The other pain point is the run time of the tests. Our full suite takes nearly 15 minutes to run, which makes it useless for fast-iterating styles of TDD/BDD. Our usual method of handling this goes something like:

  • Run only the test scenario related to the feature or bug being addressed.
  • Once the central test passes, run related test modules to make sure nothing was broken.
  • In cases where confidence is low or a change is far-reaching, run the full suite at the developer bench. Otherwise, merge your change (after review: See below) and be ready to jump back on it if the CI board (again: See below) goes red.

This is another one of those times when developer judgment plays a key role. Running the full suite is a major break in your rhythm, especially when you're making (what feels like) a small change. The value of moving on to the next thing needs to be weighed against the risk inherent in the change being made, and the evaluation of that risk depends on one's intimacy with the code and seasoning as a programmer. In the section on practices below, I go into how we buttress individual judgment with the collective wisdom of the team.

Tool Number Three: CruiseControl.rb for Continuous Integration

CruiseControl.rb is a darling of the Rails community, but it's not just for Rails apps. It's quick to set up--including on a Mac, which is required to run our Xcode-based tests--and can run and extract results from arbitrary build-and-test scripts. CC.rb handles polling our GitHub repositories for changes to projects; our custom scripts do the rest, and CC.rb reports red/green for each project by checking standard Unix return values from the scripts.

First the common library shared by our iPhone and iPad projects gets built, and its Kiwi tests are run:

View on GitHub
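The script itself is in the linked gist; in outline, it's a short shell script whose exit status comes from that final grep (the project and target names below are invented):

    #!/bin/sh
    # Build the shared library's logic-test target and run its Kiwi specs,
    # teeing the console output to a log file.
    xcodebuild -project Common.xcodeproj -target LogicTests \
        -sdk iphonesimulator TEST_AFTER_BUILD=YES build 2>&1 | tee kiwi.log

    # Kiwi runs on OCUnit, so a zero-failure summary line means success;
    # grep's exit status becomes the result CruiseControl.rb reports.
    grep -q "with 0 failures" kiwi.log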

It's that simple; the result of the final grep for Kiwi's success message is the success or failure that CruiseControl.rb reports.

Running the KIF tests for the iPhone and iPad projects is a bit more of a production. We have to take extra steps to build the common library prior to the main project, and we have to use Waxsim (we prefer Jonathan Penn's excellent fork) to run the simulator from the command line, capture console output, and sift through that output for success or failure messages. The end result is the same, though: The return value of the script is reported as the test outcome by CC.rb.

Continuous Integration, of course, is only as good as the speed with which it gives you feedback. We have our CI set up on a Mac Mini with Screen Sharing enabled and the CC.rb dashboard exposed on a convenient port. CruiseControl.rb can be set up to send email, but our inboxes are plenty cluttered already. We get the results through two main channels. On the developer desktop, CCMenu keeps the latest test results an eye flick away.

To broadcast status to management and the wider team, we keep CruiseControl.rb's web dashboard on a large screen mounted on a wall overlooking the developers' corner of the shop.

The public display of results is an important driver of good testing habits. As soon as a team gets used to meeting high expectations for test reliability, a red test suite on display for all to see becomes a distracting irritant. While we don't generally suggest introducing distracting irritants into technical workflows, in this case the irritation is confluent with the worthy goal of maintaining a reliable test suite that consistently inspires confidence in everyone who builds or depends on the software.

Our Best Practices: The Rules to Guide the Tools

Tools are only valuable when they are used well. We surround our tools with processes to get the most out of them, and tune those processes as we go in response to real-world feedback. Here are three simple rules that guide our use of the tools described above:

A: TDD

The core rhythm of TDD (and its cousins like BDD) is often described as "red-green-refactor":

  1. Red: You write a test describing the behavior you want, run it, and watch it fail.
  2. Green: You nudge your code into a state where the test passes.
  3. Refactor: You inspect your code (and your tests) for duplication and other issues, and remove them. The tests keep you from breaking anything.

The last step is probably the least-understood, most-skipped one in the process. People carry a lot of weird, fuzzy definitions in their heads for the word refactor, often having to do with larger re-architecture of code. In the TDD context, it has a very specific meaning: Refactoring is changing code without changing behavior. An important sidebar to this is that you have to verify the constancy of the behavior, or you're not really refactoring--you're just changing stuff. Automated tests are one (relatively cheap) way to do this--you can have confidence that the behavior being tested hasn't changed.

Skipping the refactoring step is a sure path to technical debt. The refactoring step is doing the dishes after dinner, pouring water on the ashes of your campfire, cleaning your rifle after you've fired it. After you've added or altered code, any duplicated bits should be factored out into methods (and tests run again); any ugly or slow bits should be reworked (and tests run again); any dead or commented code should be removed... You get the picture. Professional software development isn't a game of seeing how quickly we can deliver working code; there is the cost of future change to consider, too, and leaving your code clean makes life better for the next person who touches it (and that will as often as not be you).

All that to say: The automated tests don't increase your product quality. They provide the support and confidence for you to apply your skill and judgment toward improving the quality of your code.

B: Tests Should Always Be Green.

Tests do not inspire confidence when they're failing. Above, I mentioned the social incentive built into making our tests results public, but the practical value is important, too. The tests exist to convince us that we haven't broken anything. The moment you allow yourself to get comfortable with broken or inconsistent tests, you've lost sight of why you built them to begin with. There may be situations where you allow them to be red for a short time, even a few days during a large re-architecture, but these should be extreme situations, and you should not get comfortable with them.

We foster a culture of personal responsibility around the tests. If your name is on the failing commit, it's yours to fix. In the event that the committer can't address it right then, the next person with a free hand is responsible for investigating the issue.

C: 2 > 1

A minimum of two people must look at every piece of our code before it gets merged into the master branch. We have two ways to meet this requirement: pair programming and GitHub pull requests. In both cases, the goal is to have a second active collaborator devoting attention to the problem, to overcome the individual tendency to code to the happy path, and to bring different skills and perspectives to the question of the best way to implement something.

In the case of pull requests, the programmer inspecting the code is expected to run both automated and manual tests, as well as apply a critical eye to the code. With both practices (pairing and pull requests), both parties are expected to be active collaborators; no one should be rubber-stamping someone else's decisions.

Conclusion

iOS culture, even in many large organizations with skilled engineers, is behind on up-to-date testing practices. Aggressive mobile strategies up against lengthy App Store release cycles and manual user app updates create pressure to jettison code and best practices that might be seen as "extras."

It’s ironic that iOS development--the catalyst of the consumer web explosion of the past few years--has been a reluctant latecomer to TDD, perhaps the most cherished methodology of the agile web development culture that is building the consumer Internet.

Over the platform's short history, agile methodology and TDD have been at odds in iOS development culture. The agile desire for speed has taken precedence over other concerns due to a past dearth of high-powered automated testing frameworks, and the results have often been high crash rates, long QA cycles, and a whole series of tribulations that the modern developer associates with the antiquities of waterfall development.

Our experience building HowAboutWe Dating for iPhone and iPad has shown that TDD and CI on iOS are well worth the effort. The tools are young but rapidly maturing. It is possible! Our move to a genuine culture of TDD on iOS has transformed the quality of our software and how quickly and predictably we can deliver it. So we're believers that any organization not already employing these practices should dive in and measure the results for themselves.


HowAboutWe is the modern love company and has launched a series of products designed to help people fall in love and stay in love. Aaron Schildkrout is co-founder and co-CEO of HowAboutWe, where he runs product.

Brad Heintz is the Lead iOS Developer at HowAboutWe. When he's not bringing people better love through TDD, he's tinkering, painting, or playing the Chapman Stick.

James Paolantonio is a mobile engineer at HowAboutWe, specializing in iOS applications. He has been developing mobile apps since the launch of the first iPhone SDK. Besides coding, James enjoys watching sports, going to the beach, and scuba diving.

[Image: Flickr user Greg Westfall]

Why Frustrated Apple Developers Need AltWWDC


In 2010, tickets to WWDC sold out in eight days. In 2011, tickets sold out in under 12 hours. 2012? Two hours. And this year when the $1,599 tickets went on sale on April 25th, they sold out in just two minutes.

The five-day event, which begins on June 10, is a boon to developers who want not only a front-row seat for the unveiling of the future of iOS and OS X, but also the chance to connect with Apple engineers and learn the latest in hands-on labs. Unfortunately, the realities of physical space, the limited number of Apple engineers (and, let’s be honest, the cost) prevent many developers who want to attend from going.

We’re tracking developer news from Apple’s official WWDC here.

That’s where AltWWDC comes in. As the name would suggest, the event is an alternative WWDC, free and open to all who want to attend. Like the actual WWDC, AltWWDC runs June 10-14 and takes place just blocks from the Moscone Center. It offers talks, labs, and the chance to network with other devs. The event is the collaborative brainchild of Josh Michaels, Judy Chen, Kyle Kinkade, and Rob Elkin.

“We originally started AltWWDC last year,” Rob Elkin, one of the event’s organizers explains. “I spent some time in Amsterdam in March of 2011 when another conference was happening, mdevcon. I was there for a week, but never ended up getting a ticket for the conference, and basically worked from the Appsterdam co-working space, where I got to catch up with old friends and meet new ones who had come to the city for the conference. When I went home, it was about a month before the WWDC tickets were to go on sale, and I had to make a decision on if I wanted to go or not. I wanted to be in the city for the conference, but with Apple releasing videos of the talks a week later, I didn't see a compelling reason for me to go to WWDC. With a desire to be in the city, in need of a place to work, and with WWDC tickets set to sell out in record time, I figured the logical thing to do was to put on a co-working space.”

Elkin then contacted Judy Chen, COO of the non-profit Appsterdam, which brings together developers of all platforms from all over the world to work together and learn from each other. Then, Elkin says, “It quickly went from being a co-working space to having speakers, lunch, and snacks. And that was just year one. We've got way more planned this year.”

“It's no longer possible for Apple to accommodate demand for WWDC tickets, as there are too many developers and not enough engineers on hand within Apple,” Victor Agreda, Jr., editor-in-chief of The Unofficial Apple Weblog and one of this year’s AltWWDC’s speakers, tells me. “WWDC has always been taxing -- employees must pause projects to participate for a week in San Francisco. AltWWDC provides an overflow solution, even though it is outside Apple's purview. Developers like to get together and share tips, advice, and more. There's an esprit de corps around events like this, and it's only natural that side events would pick up the overflow demand for community around Apple's biggest live event of the year.”

And it’s that “esprit de corps” that makes AltWWDC so compelling. It wasn’t created to compete with the real WWDC nor to protest against it. It was created out of what I like to think makes up the pneuma of a good developer: a desire not only to learn, but to share their knowledge with others.

“We've seen a massive outpouring of support from the community for the event, and that is the real reason we love putting this on. It's a chance to give something back to an amazing bunch of people, have a good time, and maybe learn something,” Elkin explains. “You could kind of sum it up as leading by example. We are showing people how awesome the community can be when people work together. We couldn't do it without the help we've been getting from sponsors, volunteers, and the people that want to come. Everyone plays a small part in helping us put this thing together, and it is going to be way more than the sum of its parts.”

AltWWDC runs June 10-14 at the San Francisco State University Downtown Campus at 835 Market St. on the 6th Floor. Stay tuned as we’ll have more coverage of the event as it happens.

The Web Is Getting Better So Fast That Apple's Brand New OSes Look Broken


The Mac and iOS developer community is perhaps the most brilliant single group of technologists in the business. Which is why it's so damn painful to watch web development run circles around what used to be the most advanced platform on the planet. Here are seven ways that Apple fails to do its developers justice.

People expect updates faster than Apple can deliver.

The explosion of high-quality web apps is changing users' (and developers') expectations for how software evolves. In the early days of Facebook (which I'll use as an example of a prevalent web app), users were outraged when the layout and functionality changed. Now Facebook alters the look and function of dozens of features every month, and not only have users learned to roll with the punches, but they've come to expect this as a natural part of improving the Facebook product. The idea that Apple has to install more code on your phone through a software update just to extend its abilities is becoming tiresome.

Want examples? Look at this excellent laundry list of iOS, Mac OS, and iCloud complaints from FanGirls. Almost all of these complaints will have to wait for an OS update to be remedied, and even a minor OS update is a major hassle in iOS and Mac OS X, forcing users to stop what they're doing, download the update, and leave their device functionless for a little while as it fixes itself. The web has made us realize that these updates should be pushed in the background without hassling the user. Sure, this isn't Apple's fault--the nature of C languages is that you have to add code to add functionality. But there are ways around that: Read on.

Apple can't run experiments, so features rarely evolve.

Because Apple can't push remote updates, they can't do A/B testing the way web developers can. As a result, features take months or years of internal testing to improve, where some data-driven testing would probably speed the process exponentially. What's the big deal, you ask? Well, at least one company--Philadelphia-based Artisan Software--is working on a framework that lets iOS developers push changes to their native apps without software updates. The ramifications of that will be huge as Artisan (and presumably, some competitors) bring native Mac/iOS app development cycles up to web-like velocity. The OS itself, and Apple's own apps, will start to seem dinosaur slow when the third-party apps on their own platform are blowing by them in project velocity.

An ancillary problem is automated testing, something that web developers have benefitted from for years now. Mac and iOS developers are just figuring out ways to hack together automated testing, as this tutorial from Brooklyn-based startup HowAboutWe demonstrates.

API and SDK updates are tied to software product updates and launches.

There's no reason to couple the release of developer tools and endpoints with consumer-facing products. Developer tools need to evolve faster, yet Apple has gotten in the habit of tying improvements in its platform to product events. This is frustrating, especially for developers who have worked on the web, where discovering and fixing a problem in the platform often happens the same day the problem is reported. As Gus Mueller reports on his blog:

This was the very first thing that set off alarms in my head when Apple introduced the iCloud APIs. It's baked into Foundation. Not "at the foundation of the OS" marketing talk, but the APIs are exposed to us via Foundation.h. This is a very, very bad code smell. Things on the web and in sync land move fast, and if we have to wait 12-18 months for something to be fixed or updated, why in the world would a developer adopt it? Oh, you say it's all fixed on 10.9? That's great. How about all those users who aren't updating to that OS for whatever reason? Oh, they are screwed? Great. That's what I wanted to hear... .Mac syncing sucked--but it had a downloadable SDK and it was the sane way to do things. You could download an update which fixed bugs and it was possible for Apple to add new APIs without having to do it in sync with the OS. Just because an idea had a bad implementation doesn't mean the idea is wrong.

Apple rarely uses only its documented APIs.

Sure, Facebook sort of blew it when it tried to make an OS--even forgetting the app dock until last week. But you can't blame them, because developing a product in secret makes testing that much harder; you have to really know how to dogfood your product correctly. But one thing that Facebook deserves credit for, and which future OS-makers will no doubt be expected to replicate, is its reliance on its own well-documented APIs to build its products. Gus Mueller again, on his blog:

When Apple's consumer apps use the exact same APIs we are given, it might be time to seriously look into using iCloud for doc syncing. When iCloud was introduced, Pages on iOS certainly wasn't using the same APIs Apple gave us for syncing. Why not? I guess because it was broken and/or unreliable. Something third party developers had to find out on their own.

The file system is completely screwed up.

The stuff you store on Macs and iOS devices today is hopelessly fragmented across apps, local drives, and tiny, expensive cloud repositories. From FanGirls again:

Okay this is a huge debate with the geek crowd. That crew is heavy into the idea that iOS needs to have a 'Finder'. But I disagree. I don't think that users need to see into the core system in such a way. But what is needed is a single system where all data lives. Doesn't mean that every app can use everything. But at least have common file types in a place where every app that can use it has a single copy to play with. Say like PDFs. Sucks that if I want to see it in iBooks, PDF pen and GoodReader that means I have 3 copies of the file filling up space. And then there's that clunky 'Save to'/'Open from' file sharing system.

Some Apple apps suck, leaving a void in the platform.

Apple has chosen to make software products for certain applications in their OSes, which causes third-party developers to eschew competition there--particularly where Apple is using undocumented APIs and has an unfair advantage. As a result, the Apple apps which suck (and there are a few) leave a gaping hole in the platform where innovation should be happening. One example is Contacts, which could use a huge organizational and social-networking overhaul--but you have to have guts (looking at you, Brewster) to compete here, so a lot of developers go where they know Apple most likely won't--categories like Games, which is arguably the only one where Apple acts like a trustworthy platform. As Ben Thompson says on his blog:

An app can afford to be prescriptive about the user experience and means of interaction; in fact, the best apps have a point of view on how the user ought to use their service. Platforms, on the other hand, are just that: a stage for actors (i.e., apps) of the user’s choosing to create a wholly unique experience that is particular for every individual user.


Apple needs to figure out where it's a platform and where it's an app developer, draw the line, and rarely (if ever) move it. Otherwise, developers will always live in fear of solving problems for users which are too close to Apple's interests, and where they might be shut down or trod over by a new Apple product.

Apple ignores small-time developers.

What used to be a scrappy event for a tight-knit community has become an exclusive gala event. The glad-handing might be enough to satisfy prominent developers, but the rest of us are stuck needing to form our own unofficial WWDC just to feel included. Surely Apple could find a way to make their tent a little bigger. We're tracking AltWWDC with interest--it's a free and open Apple developer conference happening this week in San Francisco, concurrent with the official event.

[Image: Flickr user Boston Public Library]

How Proper "Dogfooding" Might Have Saved Facebook Home


One of the biggest misconceptions in the world of programming is that "eating your own dogfood" is a magic cure-all. Sure, developers should use their own apps: It brings the team up to speed and gives an organization context with which to judge the app's worthiness. But dogfooding is no substitute for user-based design and testing.

Dogfooding is particularly effective in a handful of development scenarios. But it can also be a trap. Here are three scenarios in which almost any dev team can benefit substantially from eating their own dogfood, and an example of how it can go wrong.

[Comic: geek-and-poke.com]

Three Scenarios Where Dogfooding Rules

  1. When you have to learn a new platform. You can't build a great app experience for a platform that you’re unfamiliar with. Are you mainly an Android gal or an iPhone guy developing an app for a Microsoft phone? To do your job well, you need to become a real user of the target platform. You need a minimum of four weeks with that platform as your primary platform, and much longer for bigger or more heavily used platforms. This is true not just for developers, but for managers, designers, QA testers, and the like. Anyone who is part of the creative process, development, or decision-making team for the app had better be fluent with the platform.

  2. When you're testing pre-release software. Eating your own dogfood is vital when you're working on experimental features, which carry the greatest risk of developing major problems or breaking another part of your app. Your dev team will need to catch problems long before the public gets their grubby little mitts on it--not sit back and wait for the complaints to roll in. Remember, if no one in your company uses your app on a given platform, the folks finding bugs in the wild will be customers, and those customers might be justifiably upset. And let’s face it: No matter which app store you're selling through, reviews have the power to totally sink even the best piece of software. You want your app to be as close to 5-star-worthy as possible before anyone with the power to review it gets a look.

  3. When you want to create accountability. Dogfooding gives you a way to solicit immediate and brutally honest feedback that would be almost impossible to obtain anywhere else. Focus groups? People are too consensus-driven. User diaries? People are too kind, knowing that you and your team probably worked hard on these features. If you want the truth, then you want coworkers to give you shit if you’re doing something stupid or just not useful. Fail early. Fail often. But fail internally.

All that said, however, it’s important to remember that dogfooding is not a replacement for traditional (or non-traditional) user testing. You still need good feedback loops with real customers outside your company so you don't succumb to hive-mind.

When Dogfooding Goes Wrong: Facebook Home

It’s neither kind nor entirely fair for the industry to pick on Facebook Home any more than it already has; the potential of Home is yet unrealized, but that doesn't mean it will always be so. Still, Home's tepid rates of adoption hold important lessons about how dogfooding does and doesn't work.

For one thing, dogfooding a pseudo-OS layer like Home would take much, much longer than for an app, and it seems Facebook didn't respect this formidable adjustment period. This is doubly true when employees are building such a complex layer over an already-foreign, heavily segmented platform. According to TechCrunch, Facebook has been "Droidfooding" since late last year, encouraging employees to switch to Android so that a larger proportion of the engineers have a firsthand grasp of the overall Android experience. (In fact, the company went so far as to install various pieces of propaganda around the Facebook campus highlighting Android's explosive growth rate as an impetus to start using it.) But the fact remains: Facebook is an iOS-dominant shop. Says TechCrunch:

While the default choice for what phone employees got used to be an iPhone, a Facebook spokesperson tells me that now “We don’t encourage one device over another. We let employees choose.” When I asked what the breakdown of iOS to Android users is in the company, Facebook’s spokesperson admitted, “I don’t have a ratio but with the early focus on our iPhone app and the multi-year cycle of carrier contracts we do have more iPhones deployed.”

Android's heavily segmented device pool means users aren't true platform natives until they've owned more than one device, over the course of several contract periods. Using several Android phones gives one an idea of the range of usability, and provides an opportunity to see how each OEM handles core tasks and design issues slightly differently. Rather than "Droidfood" for two to four years, Facebook did so for only about six months before launching Home in May. Adoption of the Home app has been tepid, and the HTC First, the flagship device for Facebook Home, was almost immediately discontinued due to poor sales.

Another problem with Home dogfooding: Secrecy. Now that Facebook is a public company, leaks about black-box projects can have very real impacts on their market cap, which was likely the impetus for majority-internal testing. In our informal tests, Facebook Home has been fairly stable, which tells me they found enough internal users to squash any technical bugs that showed up--this obviously wasn't a failure of testing writ large. In fact, we know from this FastCo.Labs interview with Facebook UX Researcher Marco De Sa that Facebook tested extensively with about 60 individuals--a huge group for in-depth UX testing by most companies' measure--but with few exceptions, all the testers were Facebook employees.

The problem is presumably that Facebook employees love Facebook, and, well, love is blind. No doubt these employees are heavier-than-average users, possibly even by mandate. Normal people love Facebook, sure, but they also want their weather and calendar and other critical data. It's possible the Home team didn’t realize that a device’s “home screen” experience shouldn’t focus on one app, but should straddle all commonly used apps and interactions. In our interview with De Sa, we asked about this very issue. Here was his reply:

Co.Labs: So all these studies were with Facebook employees? Is it always a good idea to test on your most expert users?

De Sa: Oh, we definitely try to test with external users as much as we can and for Facebook Home, at the beginning of the study, we actually showed some external users; but for some of these interactions Facebook employees are users just like any other. Trying to see how they use gestures and things like that, or how they learn or discover new gestures is something that you can actually do with Facebook employees as you would do with another user. Of course, Facebook employees seem to be more proficient with using Facebook, but again these interactions aren't necessarily something that is Facebook dependent. They're just navigating content and things like that.

Co.Labs: Is there really adequate diversity of opinion and experience within the Facebook corporation for testing something that will be used by people all over?

De Sa: We try to talk to Facebook employees who were not involved in design, were not engineers, we try to get a broad sample of people with different levels of expertise, recent employees, older employees. So, we try to recruit considering all of those differences. Yes, we're designing for a billion people.

Lack of a large-scale beta program also meant that employees weren't able to evaluate Home of their own volition. John Gruber says it well:

There is a dogfooding lesson here, though. Does Mark Zuckerberg carry an HTC First, or any other Android phone with Facebook Home installed? Does Mike Matas? (Doesn’t look like it, judging by the “via Twitter for iPhone” metadata on his recent tweets.) Why not?... Turn Facebook Home into an interface that Facebook designers and engineers want to use, not merely feel obligated to use, and then they’ll have something. But if it remains something that even Facebook’s own designers and engineers do not prefer over the iPhone (or stock Android, or any other platform), if it remains something that the company needs propaganda posters to promote even among its own employees, then Facebook Home will remain what it is now. A dud.

Is it any wonder they didn’t anticipate the lukewarm reception from everyday users, and backlash from Android fans?

[Image: Flickr user Derek Gavey]

Four Android App Design Guidelines You Should Break

While Google’s Android Developer Guidelines do help developers create better apps, there are a number of places where usability and common sense should win out over standards. Of course, before you set about breaking the rules, you'll need to learn them, so by all means: Don't proceed until you've familiarized yourself with Google’s standards. They are important and useful. Now here are four scenarios where you can totally disregard them.


Google Design Standards You Should Probably Break

1) The app logo in the Action bar. Disregard!

If your app looks like lots of other generic apps, then putting your logo on the far left of the main action bar (as the Google standard prescribes) will give users much-needed context. However, most major Android apps do not do this. Why? They have a visual brand that's strong enough, and a user interface that's unique enough, to give users the visual clues they need to make the connection.

Removing the logo has huge advantages on a screen as small as a smartphone's--that screen real estate is better used providing contextual information or navigation controls for users. Below, Google follows their own guidelines and puts the Gmail logo on the top left, but Facebook uses the action bar to provide context and navigation.

2) Unlabeled icons are okay. Disregard!

In their guidelines, Google shows a Gmail example for the use of icons in the action bar. But the example sets a bad precedent, because unlabeled controls are really only the privilege of highly recognizable apps. For lesser-known app developers (which is to say, nearly everyone not named Google), unlabeled controls just confuse users. People don't know what your little "filing cabinet" icon is supposed to do in the context of your app. A simple word under each of these buttons greatly improves usability, especially for the first-time user experience.

Icon ambiguity, even in a technology as familiar as a mail program (which tends to use very standard icons), reminds us that creating good icons can be hard. Don’t beat yourself up trying to come up with icons for complex or specialized actions which don’t already have a well-known icon. Instead, either put text under your icon, or just use a text button. Icons are great tools for usability when they’re exactly that--iconic--but otherwise, they’re just confusing.

3) The navigation drawer is a natural choice. Disregard!

The drawer is a handy navigation solution for apps that have many categories or top-level actions. I would caution developers not to use this as an excuse for having too much in the navigation. Great apps are focused. In fact, some of the most powerful and useful apps in the app store get by with just two to five navigation items and no slide-out drawer. Before you run off and implement it, honestly ask yourself if you can get away without it.

4) You should use non-obvious interface elements. Disregard!

Just because something is available in the OS doesn't make it a good idea to use it. Don't rely on long touches, the hardware menu button, or nested menus. Instead, always have some sort of link on screen for users to get to the data they want. Also, stay away from multi-pane layouts, since they are often unattractive, visually confusing, and clunky to use.

An app with more panes than can fit on screen creates what’s known in the biz as “mystery meat navigation.” This sort of layout lacks the visual clues needed by users to know they must perform an action to see more information. If you have more categories or tabs than you can fit on the screen, consider getting rid of some or using a slide-out drawer instead. Here's an example of what not to do:


What's Missing From The Google Android Design Guidelines

Keep a visual separation between buttons.

Text buttons should be visually separated from other icons with a line. It's a subtle touch, but an effective one from a UI point of view. Tumblr (top) has a short vertical line separating its text from the rest of its action bar, which reads as visually clearer and more grounded than the Yahoo Mail app (at bottom).

Use pull to refresh (the right way).

In many applications, pull to refresh is just a better (and more elegant) way to refresh than a refresh button, and users just get it. You should consider using it instead of, or in addition to, a refresh button--but only if you don't need the real estate above your table cells for something else. In some apps, this may be the most logical place for a search field, filter options, or metadata. Obviously, some apps require more frequent refreshes than others--if the app doesn't need to pull in data at least once per session, then consider using the space for something more heavily used.

Consider whether you need an on-screen back button.

If you're developing an app for an Android device without a hardware back button (such as a Kindle), you absolutely need a back button on screen, or your app could become unusable. In fact, unless your audience is composed of only highly technical users, consider having this button on screen at all times. Many less experienced users are confused by Android's default "back" behavior, so the more options you give them to go back, the better.

It should also become a best practice to make your on-screen “back” button take your users back only one screen, instead of all the way back to your main menu as the Google guidelines suggest. Why dump people out of your app when they're just trying to complete a different task in your app?

It doesn't matter if your icon has rounded corners.

When I speak at a conference, or am consulting with a client, I'm often asked, "Should app icons have rounded corners or square corners for Android?" My answer? Let it go. It doesn't matter. Maybe in a year or two we'll see a clear winner start to dictate Android app design conventions for the rest of us, but right now, 95% of your users really don't expect one over the other. Make the rest of these choices yourself--but on this one, I suggest using it as an opportunity to defer to the HIPPO (Highest Paid Person's Opinion).

Screenshot images from Google's design guidelines.

[Image: Flickr user Eric]


How I Learned To Build Products People Care About

On August 23rd, 2011, I was sitting on the concrete in the Y Combinator parking lot, trying to find some space to be alone and call my wife. It was Demo Day, and the crowd was full of elite and celebrity investors. We even had a private chat with Ashton Kutcher. But there I was, calling my wife to tell her how terrible this day had been. It was a startup's worst fear, realized: Nobody cared what we were working on.

Now, almost two years later, I'm building Draft, a new way to help people write better. It has things people need, like version control, professional editing, importing/exporting, transcription tools, and social analytics--and the response from users has been incredible. Here's what I learned in the intervening two years that led me to a project that people care about.


Early in 2011, my first business, Inkling, had matured enough that I could make some large bets with my time. I decided to create a new project. I was fascinated with online gaming, and wanted to see if I could build a business helping companies advertise through games. So my partner and I built a neat gaming platform which a business could customize with their own images. For example: Instead of Bejeweled with jewels, images of Gap sweaters could appear in a version of the game sponsored by the Gap.

Within a few months, we had 10,000 people playing these branded games for over two hours a day. As far as the games went, it was a success. But when we talked with advertisers about solving a problem with their marketing using in-game ads, the conversations were painful--we couldn't articulate how this method of marketing was any better than what they were currently doing.

It took us a few months, but finally we decided we didn't have a future with what we were building. We had focused on cool software first, and problems second. And now we were paying the penalty.


After a few more failed attempts at figuring out what I could do with the remnants of this company, I decided to take a six-month break and work on the technology team of the Obama re-election campaign. It was nice just being told what to build for once. But the time also offered me a chance to pound back into my brain the importance of starting your product with a problem.

I stumbled upon this book, Something Really New: Three Simple Steps to Creating Truly Innovative Products, by Denis J. Hauptly, which sums it up very nicely. In order to create a truly innovative product:

  1. Study the tasks people use a product for.
  2. Turn those tasks into a series of steps the person follows to get the task done.
  3. Finally, start eliminating steps.

That's it. Innovative products eliminate the friction of doing a task.

When the election was over, I forced myself to look at life through that lens. And that's when I thought of Draft.


During the last 18 months, I've been blogging a lot on the SVBTLE blog network. It's important to me. And it's at least a weekly task to get a new post published to NinjasAndRobots. So I explored the steps I took to publish a post.

One thing that stood out was version control. I'd write a really terrible first draft, then save a copy into Evernote. Then I'd edit ruthlessly. Version 2 into Evernote. I'd have a dozen copy and pasted versions in Evernote before I was ready to hit publish. There had to be a better way.

I was tempted to use Git, a very popular version control system for software developers. But as much as I love using Git for software, the steps to commit and push a new version of a blog post with Git were still too much friction.

So I created something to help me do simple version control. I could create a plain-text (or Markdown) document. Have it auto-save. But when I wanted to mark a major new version: one click, and I was done.

And I could see how my document changed over time.

That's how Draft started. And then I began exploring more tasks I had.

I'd send a blog post to my wife for her feedback. But instead of just emailing me back edits, she'd paste the work into Microsoft Word to track changes, and then I'd have to use Word to fold her feedback back into my blog post. To make this task easier, I removed a bunch of those steps. This is Draft's screen to accept changes my wife made on a document. No more emailing Word documents:

I kept finding tasks I had as a writer where I could remove steps. That's the experiment you see at Draft. My attempt at making myself a better writer, by making the process easier. Today, Draft is evolving to help people write better using simple version control and collaboration. It's far too early to call Draft a "success," but so far I'm insanely grateful and blown away to have a lot of people enjoying the product. It's been a bumpy road getting here.

Here's one of my favorite examples of the impact Draft has had. I created a Chrome extension. All it does is make it easier to get the text of a Draft document into a text field you're writing in on the web. So if you're writing a blog comment, and you want to use Draft, you can use the Chrome extension to open Draft, write your blog comment, and then click a shortcut to switch back to the comment with the text filled in. It saves a few keystrokes. No big deal. I even hesitated to release it. Would anyone really care?

But the response has been amazing:

For just "saving a few keystrokes."

Draft keeps giving me evidence that focusing innovation on removing steps from tasks is a great way to build a product.


Is success or failure brought by chance or through one's own actions? Do we owe it to luck or intuition? I think a great deal of where I'm at with Draft is about luck.

I got lucky that a writer with a lot of clout took interest in my project--interest that started because he had worked on a similar project seven years prior, so he understood the challenges I was addressing.

I was lucky to hire a guy at my previous company, who introduced me to a new friend. That friend randomly recommended Draft to his friend, who in turn wrote a very affectionate article on Draft that helped bring me a lot of new attention.

I was lucky Ashton didn't care about branded games that day. If he had, I might not have shifted my focus.

And I'm insanely lucky to have parents who taught me to persevere when I fell on my face. I've done that a lot getting here. But see, that's the thing about luck. It has a funny way of showing up when you keep showing up.

[Building Blocks: Titov Dmitriy via Shutterstock]

The Kind Of App Experiment You’d Only Find In NYC

Life in New York can be a lot like the life of one of those trees that attempts to grow out of a cliff face. Unlike the Bay Area, with its downy-soft incubators, people here invent a lot of things based purely on creative desperation. One such early-stage web experiment is this: "Help Jeremy Find An Apartment."

To anyone who has YouTubed "lean product," this should look familiar: It's an extremely simple way to test a concept. The idea here is that paid referrals--not some fancy listing aggregator like Trulia or Airbnb--are the way to find a place to stay.

Here’s how it works:

My landlord is raising my rent. So I'm looking for a new apartment: a studio within a 45-minute public-transport commute of Rockefeller Center, for less than $1,400/month, ideally available July 1. I'm checking out Craigslist and StreetEasy. But I figure that, somewhere out there, a friend of a friend probably knows of a much better apartment than I'd ever find online. So I'd like to try an experiment:

  • I'll give you $60 if you point me to a great apartment that I eventually rent.
  • I'll give you $30 if you point me to someone who points me to that apartment.
  • And I'll give you $10 if you point me to someone who points me to someone who points to that apartment.
  • Or the charity of your choosing.

Jeremy Singer-Vine, the creator of this little app, is a reporter and computer programmer at the Wall Street Journal. (The apartment bounty project is a personal side project.) He’s also a co-organizer of Hacks/Hackers NYC. Here’s how (and why) he built it:

It's a really simple Python/Flask/Postgres app running on Heroku's free tier. I mostly built it over the weekend while, yes, becoming a little desperate in my apartment search. I've lived in 4 apartments in my 3.5 years in New York, and was really hoping that this might be the first year without a move. Alas.

If it actually works, it might be worth expanding into a web app anyone can use. But for now, Singer-Vine says, he's keeping it as barebones as possible.

I didn't add any sort of login/user system, in part because that would have been more work, in part because I loathe unnecessary signups, and in part because I wanted to make it as easy to use as possible. I also wanted to make sure that people could remain as anonymous as they'd like. Somewhat relatedly, the Flask-SSLify module was super handy in making sure that connections to the site go over HTTPS.
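For the curious, here's roughly what the core of such an app might look like. To be clear, this is a hedged sketch, not Singer-Vine's actual code: the route names and fields are hypothetical, an in-memory dict stands in for his Postgres database, and the bounty tiers come from the list above. The one detail borrowed directly from his stack is Flask-SSLify.

    # A minimal sketch of a referral-bounty app in Flask (hypothetical
    # names throughout; not Singer-Vine's actual implementation).
    from flask import Flask, jsonify, request
    from flask_sslify import SSLify  # the module he credits for forcing HTTPS

    app = Flask(__name__)
    SSLify(app)  # redirects plain-HTTP requests to HTTPS

    tips = {}  # tip_id -> {"listing": ..., "referrer": tip_id or None}
    BOUNTIES = [60, 30, 10]  # dollars at depths 0, 1, and 2 of the chain

    @app.route("/tips", methods=["POST"])
    def add_tip():
        data = request.get_json()
        tip_id = len(tips) + 1  # no accounts, no logins: a tip is just an ID
        tips[tip_id] = {"listing": data["listing"],
                        "referrer": data.get("referrer")}
        return jsonify({"id": tip_id}), 201

    @app.route("/tips/<int:tip_id>/bounties")
    def bounties(tip_id):
        # Walk up the referral chain: $60 to the tipper who found the
        # apartment, $30 and $10 to the people who pointed the way.
        payouts, current = [], tip_id
        for amount in BOUNTIES:
            if current is None:
                break
            payouts.append({"tip": current, "amount": amount})
            current = tips[current]["referrer"]
        return jsonify({"payouts": payouts})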

As New York’s technology scene grows, along with other fast-growing spots like L.A., we’re going to see all kinds of experiments that solve those cities’ most acute problems, but which also have wider relevance. If the core concept works for apartments, it might work for anything, which is always a promising sign in an Internet tool--it indicates you could scale it to enormous sizes. Singer-Vine says he’s particularly interested in how it could work for information:

And though I built the site specifically for my apartment search, I'd been thinking about this sort of referral system for a few years now -- though more in the journalism/reporting context. (Rather than pay sources, you might give them formal thanks at the end of a piece, in the way Slate's Explainer column does.)

He says he's tinkering with a more generalized version of the referral app now. Follow him on Twitter at @jsvine for more updates, or on GitHub.

[Image: Flickr user Horia Varlan]

I'm Beating The NSA To The Punch By Spying On Myself

Last week, we learned that the NSA has been secretly collecting billions of phone records from major U.S. providers and mining the data, ostensibly to look for terrorists and other threats to national security. To justify these programs, the government is pointing to the fact that they don't collect the contents of these calls and text messages, just "metadata," and that to associate this data with real people, they need a warrant.

Here's the catch: there appears to be nothing that says the government can't use full, non-anonymous datasets to mine this metadata for pure gold. We've been covering data science in business at Co.Labs, but if you need a refresher, here's how basic data-mining typically works: you take a set of data that contains examples of the types of patterns you're looking for, and use it to train a computer to look for similar patterns in another set of data.
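To make that concrete, here's a toy sketch (ours, not the NSA's, and every number and label in it is invented) of that two-phase pattern, using scikit-learn:

    # Phase one: fit a model on labeled examples of the pattern you want.
    # Phase two: ask it to flag similar patterns in data it has never seen.
    from sklearn.tree import DecisionTreeClassifier

    # Each record is [hour_of_day, call_minutes], with a label per record.
    X_train = [[9, 2], [14, 35], [22, 1], [11, 40], [23, 2], [10, 30]]
    y_train = ["routine", "flagged", "routine", "flagged", "routine", "flagged"]
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Score fresh, unlabeled records and surface the lookalikes.
    X_new = [[13, 38], [21, 3]]
    print(model.predict(X_new))  # likely ['flagged' 'routine']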

These techniques are now so widespread that performing simple data-mining on an individual level is becoming much easier, thanks to numerous prediction libraries available in just about any programming language and powerful cloud-based tools like Google's Prediction API. To understand exactly what the government can do with this metadata, I decided to beat the NSA at its game by spying on my own data.

Unfortunately, getting access to it proved difficult. Ironically, although my cellphone provider, Verizon, will willingly hand all of it over to the government, according to this support thread it does not let customers export call data, and it only provides 30 days of logs online. Luckily for me, I've been using Google Voice for most of my calls and text messages for the last three years, and Google provides a data export service called Google Takeout that includes everything the government has except device serial numbers and location.

The Takeout data wasn't perfectly formatted for data-mining, so I wrote a quick and delightfully inefficient Ruby script that creates a CSV with the data; I'll be sharing it on GitHub shortly. Once I had the data in a workable format, the question became what to look for. My life probably isn't all that interesting to the NSA (I hope, anyway), so hunting for signs of terrorism in my own data seemed like a dead end. Plus, I'm new to data-mining, and friends who aren't convinced me to start simply. I decided to ask a basic question: Can a computer tell, based only on the time of day and duration of a call, whether a given caller is male or female?

I randomly chose 20 phone numbers from my metadata, looked up the gender of the owners of those numbers, and marked all of their records as male or female. Then I fed that set of 861 training examples to the Google Prediction API, and waited. As I'm sure any true data scientists reading this article are screaming right now, there are numerous caveats to be made here. First, 861 examples is a very small sample, and isn't likely to produce a good result. Second, I'm only looking at a couple of variables, meaning any patterns the model finds won't be very strong. Third, because I only have my own data to play with and my call patterns are unique to me, any results I get from this experiment will probably only apply to me. Finally, randomly picking 20 numbers is a bad way to choose a sample population.
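If you'd rather not wait on Google's API, the same experiment runs locally. Here's a hedged sketch with pandas and scikit-learn (the CSV column names are hypothetical) that mirrors the setup: two features in, one gender label out, with a holdout slice to keep the accuracy honest.

    # Not the Google Prediction API call described above, but an
    # equivalent local sketch of the same experiment.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    calls = pd.read_csv("calls.csv")             # the 861 hand-labeled records
    X = calls[["hour", "duration_seconds"]]      # time of day and call length
    y = calls["gender"]                          # the label looked up by hand

    # Hold out a quarter of the records; scoring the model on its own
    # training data would flatter the result.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.0%}")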

Nonetheless, when Google's API was done training my model, it reported that it could predict the gender of a caller with 67% confidence. That's a bad confidence level for any production model (only 17 percentage points better than guessing), but testing it on other calls in my history, and even friends', I found it surprisingly good at determining a caller's gender. Moreover, we don't know what the NSA considers good enough to seek out a warrant, but the evidence suggests the threshold is fairly low: According to leaker Edward Snowden, an analyst at the NSA only needs to be 51% confident that their target isn't a U.S. citizen.

Most importantly, if that's what I can do with a limited set of my own data, imagine what the NSA can do with the datasets it has access to. If you don't think determining an anonymous caller's gender is particularly useful, think about the other things you might find out from a better set of data and more precise algorithms, like which callers are likely to be related to one another (I'm going to try that one on myself next), or with location data, where they're likely to be at any given time. Once you start combining these questions and running these algorithms on multiple people's sets of data, you start to see how you can build up a fairly complete picture of just about anyone's life without truly knowing anything about them at all.

What's next for this experiment? I'd love to hear your suggestions on what to look for in my own data. I'll also be cleaning up my scripts and posting them to GitHub so that others can mine their Google Voice data, and I'm working on exposing my models to the public online so that anyone can plug in their own data. In the meantime, let me know if you have any suggestions on Twitter.


Hacking The Newsroom: What Is This All About?

Co.Labs is an experimental publishing division, and we're letting you into the kitchen. This post tracks all of our internal experiments (tracking stories being another one of our experiments), from big ideas that may turn into long-term projects to simple editorial tricks we're using internally.


Previous Experiments

In mid-April, we went live with a half dozen articles which we call "stubs." The idea here is to plant a flag in a story right away with a short post--a "stub"--and then build the article as the story develops over time, rather than just cranking out short, discrete posts every time something new breaks. One of our writers refers to this aptly as a "slow live blog." Here's what we learned.


To celebrate Pac-Man's birthday, Fast Company developer Harry Guillermo built his own version using JavaScript. Just for fun, Fast Company's CTO, Matt Mankins, used Guillermo's work to build a social version of Pac-Man.


You have two choices: You can build an app that might save a day of work for everyone in your company but might also end up wasting more of your time than it saves. Or, you can spend three hours doing mindless manual tasks and make everyone else who touches your work miserable. We chose to spend the time building a quick Chrome extension that helped save nearly a day of our judges' time on the Target/Co.Labs Retail Accelerator.


If you visit our home page, you’ll see a reverse-chronological list of our latest stories. It’s useful if you’re one of our core readers who knows who we are and wants to read every article. If you’re not one of those people (and the chances are pretty good you’re not), you may benefit more from something like the new Co.Labs Back Page.

[Image: Flickr user Ralphbijker]

Google Reader’s Death Will Vastly Improve The Way You Read

June 13, 2013

Prepare to see reading apps improve like never before.

There's no shortage of feed-reading solutions either underway or already on the market. Each has its devotees, but last-minute Google Reader refugees will be looking for a long-familiar blend of cross-platform support, ease of use, and simplicity. And thanks to Google's failure to pursue a business model for Reader, those users will likely cringe at the thought of paying to keep their news feed addictions satiated.

We've been told that Digg is working on its own replacement, but it has yet to surface. Flipboard was quick to pounce on the opportunity and started ingesting users' Google Reader feeds into its wildly popular mobile reading app. When I asked via Twitter where Reader users are going, I got mostly expected responses, save for one: Feedbin.me was recommended by technology writer Jon Mitchell, who said, "Since I use it behind the same client, my reading experience hasn't changed." Feedbin.me indeed plugs into Reeder for iOS and a host of other popular apps. But there's a catch: It costs $2 per month.

Will people pay to read their RSS feeds? “Not after Google gave it away for free,” Mitchell added. “The onus is on client developers to expand people’s minds about what reading is like.” Expect some radically new reading UIs to emerge as news aggregator apps attempt to prove their value.

Feedly, a cleanly designed client for RSS feed readers, is generally considered one of the most promising replacements, and it’s the one I’m now using. It bears a striking resemblance to Google Reader’s core layout and functionality, but layers on enough of its own UX elegance and extra features to feel like something new. The Feedly team started pushing their own native feed management engine--codenamed “Normandy”--into production several weeks back and recently managed to finish it off without disrupting their users one bit.


What This Story Is Tracking

It’s been three months since Google ruffled many a feather by announcing the demise of its RSS feed-reading service. After the wave of outrage from Google Reader’s most passionate users subsided, the Internet quietly went back to its normal business as RSS’s most dedicated devotees started looking for a replacement. Eyeing an opportunity in the void left in the giant’s wake, a handful of companies and developers began working on their own Google Reader substitutes. So where do things stand?

Doomsday is almost here. In a few weeks, Google Reader will no longer be accessible to its longtime users, who are routinely reminded to back up their data. Where is everybody going? Which developers will most effectively rise to the occasion? We're reviving this news tracking story to document the final days of Google Reader and give readers a crystal clear picture of what's on the horizon for RSS and online reading.


Read on for previous updates


March 26, 2013

What’s RSS Worth, Anyway?

Marco Arment explains how RSS keeps homegrown blogs alive, and why you’re probably using it wrong. And the increasingly likely possibility of losing FeedBurner loomed large at CSS Tricks, where they explain why it’s poised to get shuttered by Google, and what FeedBurner users can do to jump ship. In the spirit of piling on Google, The Financial Brand blog points out that Google Alerts is another broken, forgotten product that many people still love.


March 20, 2013

Which Google Project Holds The Smoking Gun?

People have blamed Google Plus for outmoding Reader in the eyes of the company brass. Further agonizing Reader fans, Google rather abruptly announces Keep, an Evernote copycat which syncs with a new Notes app that will come stock on Android phones.


March 14 - 19, 2013

Alternatives

After coming to terms with losing Reader, the focus of the conversation shifts to alternatives. Just one day after the announcement, social sharing platform Digg says it is building a reader app. Popular Google Reader client Feedly similarly assures users that they had long prepared for Reader's demise and had already built a new backend for users that does not rely on Google. Slate runs an interactive graphic of Google's product graveyard.

Et Tu, Google?

One of the most widespread criticisms of Google's announcement was the lack of transparency about the reasoning for shutting down the service. Ex-Microsoft Windows chief Steve Sinofsky criticizes the announcement for citing usage statistics without putting them in context. Robert Evatt of the Tulsa World ties the Reader discontinuation to rumors that Google is planning to launch a newsstand service to compete with Apple.


March 13, 2013

Reader Gets The Axe

Citing "a loyal following" but noting that "over the years usage has declined," Google announces the closure of Reader in a post to its official blog. Early reactions are immediate and mostly negative, although some high-profile figures such as RSS pioneer Dave Winer write that the withdrawal of the market leader will lead to new opportunities in what was a mostly static space. CNet catalogs five good Reader alternatives.

[Image: Flickr user Kevin Dooley]

Is Rubber Cement Seriously The NSA’s Anti-Thumbdrive Strategy?

In November 2008, the NSA experienced a major network security breach at the hands of a miscreant with a thumbdrive. The deputy secretary of defense at the time, William Lynn III, responded by having all the computers on NSA bases collected and their ports sealed with rubber cement. That’s according to the New Yorker, which goes on to say:

Lynn termed it a “wakeup call” and a “turning point in U.S. cyber defense strategy.” He compared the present moment to the day in 1939 when President Franklin D. Roosevelt got a letter from Albert Einstein about the possibility of atomic warfare.

First of all, rubber cement? Are you guys serious? There are simpler, more scalable ways to disable USB ports. Two seconds of Googling turns up a help doc about how to prevent users from connecting USB storage devices using Microsoft Windows Group Policy. Let's hope they're at least using stronger rubber cement than the stuff that brought down a $4.6 million drone in February of this year, when a tacked-on chip came unglued.
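For a sense of what that help doc automates: the Group Policy setting ultimately toggles the Windows driver that mounts USB storage. Here's a minimal sketch of the same lever in Python (run as administrator; fleet-wide, you'd push it through Group Policy rather than a script):

    # Disable or re-enable the USB mass-storage driver on Windows.
    # Requires admin rights; a sketch, not a deployment tool.
    import winreg

    USBSTOR = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    def set_usb_storage(enabled: bool) -> None:
        # Start=3 loads the usbstor driver on demand; Start=4 disables it.
        value = 3 if enabled else 4
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, value)

    if __name__ == "__main__":
        set_usb_storage(False)  # seal the ports, no rubber cement required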

Further, this "atomic age" analogy is too binary to capture the actual risk here. The atom bomb marked a major paradigm shift in defense. But advances in computing spur paradigm shifts in government and business practically every year. This week Intel showed off a new Thunderbolt thumb drive that can transfer data at speeds of around 10 Gbps, or about double that of USB 3.0. Let's assume that NSA computers are of an earlier vintage than most consumer machines and run USB 2.0--and the latest version of USB is about 10x faster than 2.0.

That means this new thumb drive can, most likely, move data about 20 times as fast as the one that Edward Snowden used in Hawaii, and probably with much greater capacity. (Then again, Snowden apparently "studied computing" in college, so perhaps he was toting a newer high-capacity drive for geek cred, though he is said to have never completed the coursework.)

Today, most cheap thumb drives hold around 16GB, or about one-eighth the capacity of Intel's new drive. And Intel just announced Thunderbolt 2, which will double the speed of today's Thunderbolt ports.

In another six to nine months, the sheer data capacity available to a leaker will increase hugely--but the speed at which they can steal stuff will go up by a factor of about 40. That means that stealing large files--map tiles, video, high-res imagery, audio recordings, and massive document dumps--will be even easier.
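The back-of-envelope math, using nominal interface speeds (real-world throughput runs lower, but the ratios hold roughly):

    # Nominal speeds in gigabits per second, and the time to move a
    # 128 GB dump, roughly the capacity of Intel's new drive noted above.
    SPEEDS_GBPS = {"USB 2.0": 0.48, "USB 3.0": 5.0,
                   "Thunderbolt": 10.0, "Thunderbolt 2": 20.0}

    baseline = SPEEDS_GBPS["USB 2.0"]
    for name, gbps in SPEEDS_GBPS.items():
        minutes = (128 * 8) / gbps / 60  # gigabits over gigabits-per-second
        print(f"{name}: ~{gbps / baseline:.0f}x USB 2.0, "
              f"128 GB in ~{minutes:.1f} min")

    # USB 2.0 takes about 36 minutes; Thunderbolt 2 takes under a minute.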

If the NSA plans to keep its operation airtight in that kind of consumer technology environment, it will need a lot more rubber cement.

[Image: Flickr user Robootb]

Why The Boom In Enterprise Tech Will Happen In New York

A year ago at this time, as I was preparing to be thrown out of my Greenpoint loft for Airbnb'ing too much, I began to realize what can happen when you let things get too complicated.

People like to talk about simplicity, the type you associate with Apple products or fine cuisine--luxurious, elegant, unadorned. But there's another kind of simplicity--the back-against-the-wall sort, where you're forced to slash and burn, leaving the things you thought you needed shoved off to the side of a hallway. In that way, bootstrapping a software product is a lot like getting evicted. Simplicity isn't so much a design choice as a point of survival.

I'm writing the draft of this article in a real-time web app called Writebot: A simple, web-based productivity tool that two friends and I have designed and built, and for the past six months, tested here at FastCo.Labs. It's used at about a dozen publishers, banks, and hedge funds here in New York. It is not the prettiest--the entire visual design is grayscale--and it doesn't integrate with Facebook or Twitter. It's simply a web-based repository that lets you quickly ball up a bunch of raw materials--files, links, notes, chats--into a single page and start a project. It looks like this.

It's almost shameful how long it took my partners--Ananth Muniyappa and Jon James--and me to arrive at something this constrained. And much of what I learned designing Writebot last year got poured into FastCo.Labs this spring. This site is about exactly this process--teaching yourself how to think about software. To that end, here's what we learned about simplicity while building Writebot--and why New York is the best (and scariest) place to solve big business problems with software.

Start with a shamefully small part of the problem.

When I drew the first wireframe for Writebot, it was in 15 minutes--after literally thousands of other wireframes and specs of much more ambitious tools. As a journalist and consultant, I had seen first-hand how messy most companies' internal workflows were. With a couple of iOS development books under my belt, I thought I could design a mobile app that would totally streamline the way people managed projects. Ha.

It took almost a year of research, YouTube learning, burning cash, and frying brain cells before I got humble enough to just draw a simple web app that would just save me, and people like me, a precious few minutes. The first drawing looked like this:

The task flow above: Search for news, drag relevant links into a "bucket," and take some notes on what you've collected. Then share that page via a single URL. That's it.

The more I read (and write) the stories of products' early days, the clearer it becomes that even the most complex systems and products begin with something that is almost shamefully reductive. They grow with the people that use them. You can't plan out any of it on paper. The metaphor of a product "road map" is all wrong. It's more like genetics--you don't know exactly how the thing will look, but there are certain core traits you know it will exhibit.

As you get users, they start to ask for things, and you start to find adjacent problems that you could solve. Complexity comes naturally and fast. If the base product is already complex anyway, the concept will turn to dust when you start adding features, and you'll be eaten alive by scope creep. Below, we almost immediately had to add tabs to the real-time Notes area because people kept asking for them, even though it wasn't in our dev pipeline.

Like the Terminator, your product should be built to protect people.

Left to their own devices, computing systems (like the rest of the universe) will grow in complexity. The more time they save us in one way, the more they require in another. Call it the “law of thermodynamics of bitch work.” Somehow the more "tools" I had in my toolkit, the less time I felt I had to think and write. For me, Writebot was designed to protect what I value--my creative energy.

The cloud--everything, everywhere!--sounds great until you realize you have duplicate files all over the place and no clue where anything's stored when you need it. There’s stuff archived in Dropbox or Box, versions frozen in email, common docs on the corporate intranet, images, videos, and voice recordings on your phone, and a whole pile of downloads on your desktop. Below, a file bucket exists on every Writebot project.

By 2011, when the idea for Writebot started germinating, news sites like ours had begun to run lots of rich media. But before you see finished articles like this one, the raw materials live scattered across the hard drives of reporters, editors, videographers, photo editors, producers, and developers. Every time you start a big article, you feel like Sisyphus grabbing the boulder: You forage for info, you share things, you brainstorm--and you almost immediately start to lose track of your raw materials.

Someone else probably feels your pain.

When I abstracted out my frustration with journalism, I felt as though I spent half my day digging around my computer to copy and paste things. It sucked, sure, but it wasn’t exactly world hunger. The original Writebot concept was so basic--just a simple way to manage big groups of links--that it was tough to think of use cases outside of, well, myself.

It wasn't until Ananth, who is a former hedge fund analyst and trader, started to consider his own species of bitch work that we realized other businesses (in the capital markets) were burning time the same way. We thought people in media and finance might be experiencing similar problems. And people in both industries put a premium on speed, so they really, really hate losing time. We started to think of Writebot as a tool that could work for both.

Find the easy way in.

In New York, people generally treat jobs like spouses. If you're going to intercede in someone's work/marriage with a software product, it has to be workflow-agnostic and modular. People need to be able to drop it right in and see if there's an improvement. And if not, rip it out.

Last month we told the pivot story of a Y-Combinator company called AnyPerk, which sells employee benefits packages to other startup companies. Theirs is an ingenious business because they were able to sell to other startups who can change policy quickly. But in New York, where lots of businesses have existed unchanged for decades, there are routines. There are best practices. There is IT policy. There is IE8.

On my trips out to San Francisco, I had met with all sorts of startups building amazing consumer apps. But then I'd go back to my hotel and lose precious ounces of sanity and time to this mundane problem of starting an article. When it came time to improve the process, I just started at the part of the article process that was most unstructured and also most painful--the beginning.

Solve a problem that feels like it’s killing you (or someone you care about).

The start of the writing process made me stressed, anxious, late on deadlines, error-prone, and generally grumpy. It literally cost me income, enough that I took on the opportunity cost of trying to solve it.

You see the same approach in projects like the experimental apartment finder we covered yesterday, built by WSJ reporter and programmer Jeremy Singer-Vine. In Silicon Valley, where things are cheap and space is plentiful, minds wander toward solving inconveniences or putting the Internet to a greater good. A side project in NYC is more like trying to plant a vegetable garden next to a highway--there's often a shred of desperation.

To stick with the project, you don't just need a core problem that's killing you, but a message of hope about how to solve it. You need to value something that is being threatened by that problem, and your product has to exhibit a hypothesis about how you can protect that thing you value. With Jeremy's app, the theory is that human referral beats algorithms for certain high-value items (like an apartment). The hypothesis behind Writebot is that projects maintain greater momentum when they get going fast.

Even when you make huge pivots like we did--starting with a consumer iOS app and ending up with a real-time web app for enterprises--you'll never feel lost if you are always circling in on the problem. As Gentry Underwood from Mailbox (now, Dropbox) told me last month:

There's this sense that you can take the parts of [your startup] that are working and reapply those parts to a new problem--that you end up having these two degrees of freedom. . . . I think when you have that much freedom, it's really easy to get lost. It's easy to lose your bearings in terms of why you're doing what you're doing. With that goes, I think, a lot of the motivation to suffer through all the hard parts that are inevitable, no matter how close you are to an actual solution. In our case, we pivoted from Orchestra to Mailbox, but we pivoted in a way that it was pretty different in a sense that we never let go of the "why" that we were trying to solve when we started. . . . The problem was always the same consistently throughout. I think that's central to why we were able to finally find product-market fit--because we tried a lot of different stuff, but it was always attached to the same problem. It wasn't like we were truly floating along the way.

Don’t ignore the mess made by the last generation of software tools.

Business used to move at the pace of its systems. You compiled your report, mailed it off, and had a cocktail while waiting for the Chicago office to read it two days later. But work happens so fast, and we all manage so many systems, that the time it takes you to dig around and copy-paste stuff is actually costing money.

Complex businesses like the ones that are mainstays here in New York--banking, media, fashion, consulting--need simplicity more than ever. The tools that were boons a decade ago are sucking the life out of them now. What costs one employee a few wasted minutes scales to enormous waste across an enterprise. You can’t even think about solving this schema of problems unless you acknowledge and work within legacy systems.

Financial institutions love Bloomberg because it's a super-efficient market data tool. But it's just a data feed--it's not efficient at putting together projects. There's chat and email-like functionality, but no content or idea production. In the decades since Bloomberg launched, the reports, models, research, and pivot tables that analysts, traders, and sales people create every day have become scattered across email, chat, intranet, and local hard drives in all sorts of formats--PowerPoint, Word, PDF, email, pivot tables, screenshots.

When it comes time for work groups to manage clients, stuff gets lost, and in a digital cloud-based world, customers don't tolerate that excuse anymore. If the startup-minded kids at Stanford had any idea how neolithic the processes were inside big business today, they wouldn't build dating apps.

[Image: Flickr user Doc Searls]

Inventor Of Oculus Rift: The Future Of Virtual Reality Is Social Networking

Palmer Luckey is a California native with an affinity for computer hardware and for games, which led him to think about the bigger picture for this tech. “I had been building computers for years, spending all of my money on new graphics cards, new monitors, new input devices. So I started thinking, where will this actually be going in the long term? What would be the end game for gaming?” says Luckey. “It’s probably virtual reality, something like the Matrix: You plug in and you are inside the game. I started to look at old VR to see how they tried to do this in the past. How did they fail? What did they do right?”

So Luckey began building prototypes out of cell phone parts and other off-the-shelf computer components. As we have relayed before, those prototypes caught the attention of game developers, then game journalists. Soon Luckey and a few others founded Oculus VR and raised millions through Kickstarter. In March, they delivered development kits so game makers could start putting games together.

Luckey says, “The dev kit right now is not a perfect gaming experience, but it is a great tool for people to make VR content. We have great, high-precision, high-speed head tracking. We have a very wide field of view. And those are really two key components to start with. And then we have an SDK that makes it easy for people to integrate and make VR games. In the past, they all had to figure it out on their own, over and over again.”

But he is the first to admit that this version of the Oculus Rift headset isn't perfect. “The things I’m not so happy with? The low resolution of the current screen—that’s what was available then. It’s only just now that better screens are starting to become available. Screens are going to continue to improve as time goes on. It will fill out those last few bad parts of virtual reality tech as it stands today,” says Luckey. The company has already begun to show off a prototype with a higher-resolution screen, something the consumer version will have when it is released, probably next year.

So what is the future for the Oculus Rift and virtual-reality tech? “A unified VR platform that allows people to interact with the virtual world in the exact same way that they do with the real world. I don’t think we will get all the way there, but I think we will get much closer than we are today,” says Luckey. “Right now, gaming revolves around using a controller to control an abstract representation of somebody on a flat screen. Once you can trick your brain into thinking you are inside your game, using input and output schemes that mimic real life, give you tactile feedback and stimulate more senses, we can start moving toward having perfect virtual environments.”

An integral part of that future is the input schemes he mentioned. “One of the things about VR is that it’s very natural for people to look around. Even normal people who haven’t learned to use a controller, they have the muscle memory to look around and track stuff,” says Luckey. “So if we can make interfaces that are the same as reality, that means anybody can use this technology and they can use it with a higher level of precision than a controller. As long as you are controlling something else on a screen instead of actually feeling like it’s your arm moving in space, and having all of those instincts on how you need to move, it’s not going to be the same thing.”

Once the VR tech is cheap and widely available, and the natural interfaces exist, the sky is the limit for virtual reality. And the potential is for more than games. Luckey says, “People have been using VR, even in its high-end expensive state, to treat phobias. Also people are using it for post-traumatic stress treatment, trying to help people who come back from war. There are a lot of people who use it for data visualizations, trying to make sense of huge data sets that are very hard to interpret or comprehend the scale of. And people have been experimenting with it for pre-visualization in movies, having actors acting on a green screen being able to visualize what should actually be happening, so they can react better.”

But what Palmer Luckey is most looking forward to is how Oculus Rift and other VR tech could change social interaction. “Right now you have very abstract social networks. So it will be really interesting to see what happens if virtual reality ever progresses to the point where you can have a very realistic way of interacting,” says Luckey. “The only difference is that you can be whoever you want to be, instead of whatever cards you got dealt in real life. It’s the stuff of science fiction, but we are not too far away. People already spend hours a day on Facebook. What if it was truly engaging and immersive, rather than a filtered version of your real self?”

[Image: Flickr user JD the Photog]


Does Exercising With Apps Actually Get You Fitter?

Here at FastCo.Labs, we've been pondering how to cover wellness--you know, fitness, nutrition, stress management--in a way that makes sense for people on technical teams. The idea got me wondering whether all these "quantified self" apps were actually a useful way for people to get into fitness, or whether they were just toys.

So I asked Jason Jacobs, founder of the super-popular fitness app Runkeeper, if he thought there might be some evidence that using an app can actually help you get more fit than training without one.

As it turns out, he hadn't looked at the data this way, but his reply was encouraging: "I bet if I went back to the database team and had them run some queries to try to look at kind of trends in the aggregate," he said, "we could figure out whether their usage patterns remain consistent, versus improving either in frequency or in distance or in pace." It's not a controlled experiment, to be sure, but it could be interesting.

If users jumped in fitness and then flatlined, then the app probably wasn't helping their fitness goals. If they continued to improve, then at least it was evidence something was working. Even without a control group of non-app-using runners, we could at least get an early peek at whether apps like Runkeeper are actually conducive to continued improvement.

A few days later, I got an email from Runkeeper Director of Analytics Sandeep Hazarika, with some encouraging results. The performance metrics that he analyzed: average total distance tracked per week, average longest distance tracked per week, and average pace per week.

This analysis looks at the January 2013 registered cohort: users who had at least one trip in each of the months of February, March, and April. (Note that three-consecutive-month trippers used to be tracked as "Engaged" users.)
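As a flavor of the kind of query involved, here's a hedged pandas sketch (hypothetical trip-log schema; Runkeeper's actual pipeline is of course their own) that would produce numbers like the ones below:

    # Average total distance tracked per user per week, for the cohort
    # described above. Column names are hypothetical.
    import pandas as pd

    trips = pd.read_csv("trips.csv", parse_dates=["started_at"])
    trips = trips[trips["registered_month"] == "2013-01"]  # January cohort
    trips["week"] = (trips["started_at"] - trips["started_at"].min()).dt.days // 7

    # Keep "engaged" users: at least one trip in each of Feb, Mar, and Apr.
    engaged = trips.groupby("user_id").filter(
        lambda t: {2, 3, 4} <= set(t["started_at"].dt.month))

    # Sum each user's weekly mileage, then average across users per week.
    weekly = engaged.groupby(["user_id", "week"])["distance_miles"].sum()
    trend = weekly.groupby(level="week").mean()
    print(f"week 0 -> week 11: {trend.loc[11] / trend.loc[0] - 1:.0%}")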

Average Total Distance: The improvement from week 0 to week 11 was 26% (from 3.2 miles to 4.1 miles).

Here's the data in graph form:

Average Longest Distance: The percent improvement from week 0 to week 11 was 20% (from 3.8 miles to 4.6 miles). It is interesting to note that the improvement in longest distance plateaus around week 9, and there is a slight decline after that.

Average Pace: The percent improvement from week 0 to week 11 was 11% (from 12.93 minutes per mile to 11.53 minutes per mile).

Jacobs says that up until this point, he had only looked at anecdotal evidence about which features were motivating people--not the overall success of the app itself, versus training without the app.

We know what kinds of things motivate different people. We know that some people are more motivated by guidance, other people are more motivated by things like social accountability, or some people are motivated by free stuff, so by tying in with rewards systems, we take advantage of that. That's really motivational to them. For example, we know that once you have X number of friends, you are going to be far more engaged than if you didn't.

But back to the original question: Are there commonalities between iterating upon your software project and iterating on your body? Mike Oliver, Runkeeper's lead iOS engineer, thinks there might be--at least as far as milestone thinking goes:

As engineers, we have a very specific mind-set. We're scientists at heart, and we look for specific evidence to prove something and then use that evidence to do something. When I go out and run, I expect to run a very specific set of miles for a very specific goal and purpose. I don't know that's necessarily true of non-engineering mind-sets. Somewhere in the back of my mind, the engineer screams "this number of miles is equivalent to exactly this number of potato chips or cookies you can eat later on!" If you just push past an extra street, you can visualize the cookie right there. I think a lot of people visualize like that, but engineers even more so.

Are you an engineer and also a fitness buff? We'd love to hear your thoughts about the topic. Tweet me @chrisdannen and let me know.

[Image: Flickr user Ed Yourdon]

What Happens When Robots Eliminate All Our Jobs?

In his weekly column for the New York Times, Paul Krugman talks about a problem that is likely (or at least should be) on the minds of many technologists: Companies are making more money today on the back of fewer laborers, and it’s not just low-skilled workers who are being displaced:

...some of the victims of disruption will be workers who are currently considered highly skilled, and who invested a lot of time and money in acquiring those skills. For example, the report suggests that we’re going to be seeing a lot of “automation of knowledge work,” with software doing things that used to require college graduates.

More alarmingly, unlike in the past, displaced workers aren’t finding new jobs. In fact, given America’s prolonged unemployment problem, it now seems likely that those jobs will never come back. They’ve been replaced: Sometimes by cheaper workforces overseas, but increasingly by technology.

So what do we do about it? Krugman’s solution is to create a new kind of welfare that guarantees a basic level of income for everyone, paid for by taxing corporate profits that are increasingly being driven by automated workforces. That might make sense in the short term, but it’s incredibly myopic.

A more interesting suggestion for temporarily mitigating the problem is to decrease the length of the workweek from 40 to 30 hours, which was successfully implemented by Kellogg’s, the cereal-maker, in the middle of last century. In 2000, France reduced its workweek from 39 to 35 hours with few ill effects, and President Nicolas Sarkozy’s reinstatement of the 39-hour week did nothing to help the French economy when it tanked during the 2009 fiscal crisis.

Reducing the length of the workweek has two effects: It brings more people into the workforce by creating new jobs to replace the lost hours, and it incentivizes workers to choose leisure over consumption, which is exactly the opposite of what most people choose today. In fact, an article in Orion magazine argues that working long hours is actually less efficient, and driven by the need to consume excess production:

By 1991 the amount of goods and services produced for each hour of labor was double what it had been in 1948. By 2006 that figure had risen another 30 percent. In other words, if as a society we made a collective decision to get by on the amount we produced and consumed seventeen years ago, we could cut back from the standard forty-hour week to 5.3 hours per day—or 2.7 hours if we were willing to return to the 1948 level.

If more people start to choose leisure, it could help wean the country off of the notion that everyone needs to be maximally employed for the economy to function. Getting used to that idea is the remedy that most economists worried about the issue suggest as a long-term fix:

(Columbia University Economist Jeffrey) Sachs and (Harvard Labor Economist Lawrence) Katz are somewhat more hopeful, but their optimism is based on the politically problematic proposition that the United States can adopt wage and income policies similar to those in Scandinavian countries.

But those solutions still don’t address the root of the problem: we simply don’t need as many people to work anymore. Why? Mostly, because jobs and profit are not inherently connected. Humans have traditionally been the most efficient way to produce goods, but with increased automation, that’s starting to change, and that’s what Krugman is really writing about.

What happens when humans no longer need to work for the economy to function? That depends on whether or not you believe scarcity is inevitable. Most economists who believe it is propose solutions similar to Katz and Sachs’ aggressive wage policies, arguing for a best-case scenario in which we are all partially employed. But a rising number of people believe scarcity might not be permanent.

Izabella Kaminska of FT’s Alphaville argues that, eventually, technology will advance to the point that we will be able to produce goods virtually for free. Labor will be removed from the system altogether, and everyone will be provided with a baseline of goods and services to keep them alive and allow them to pursue what makes them happy. In his book Makers, Cory Doctorow envisions a society where small teams of hackers invent new economic systems as they go, using new accessible production tools like 3D printers and easy-to-assemble microprocessors.

In these scenarios (Makers' rough parts notwithstanding), the future is literally up to us to invent, and for technologists, that idea ought to be the most intriguing of them all.

[Photo by Flickr User Alden Jewell]

How This Ad Agency Is Measuring The Value Of Awesome


"We like proving that the stuff we think is awesome is worthwhile," explains Charlie McKittrick, co-head of strategy at Mother, the ad agency. "My greatest fear is of being full of shit, or selling vapor product,” he tells me in his midtown office. “We wish we had a measure for success in experiential marketing."

What kind of experiences are we talking about? Imagine meeting Little Marina, the fashion blogger from Varese, Italy, who hyped Missoni's 2011 collaboration with Target. Marina flew out to New York's Fashion Week on Target's dime, and turned out to be an enormous marionette. She handed out huge business cards listing her Twitter handle and phone number for text-messaging, and held real-time chats with everyone from Harper's Bazaar editors to supermodels to crowds of kids. She carried a free, open Wi-Fi hotspot. News outlets covered her globally--"2.8 billion media impressions to be precise," as Mother's publicity folks say, "which made her the biggest blogger Fashion Week had ever seen."

So Marina had an impact. But was she worth the millions of dollars she cost Target?

Circle Media: Measuring the Unmeasured

Circle Media, the venture McKittrick pitched at SXSW in March, aims to measure brand engagement at events. Who comes? How do they respond? What do they say, share, or photograph? Do they change their opinions, how they shop, or what they buy?

Ideally, Circle Media wants to boil all these factors down into an index. Similar to the ratings Nielsen produces for conventional advertising, this easy-to-read number, like IQ, could tell ad-makers and their clients what degree of cultural splash a particular event creates.

“We bring the measure to unmeasured media,” the company's CEO, Mark Piening, explained at SXSW, where his company placed second in the Austin Startup Pitch Competition. “We want to help brands understand what audiences want,” he said.

The Datastream Around A Product Experience

The key questions, as McKittrick sees it, are: "Did I get the right people? Did they share what they saw? Did they hear the story I wanted to tell?"

What you see when you look at the Circle Media software's dashboard is a reflection of these questions. The bars are called “Scorecards”: The first score, "Audience," tracks ticketing data, as well as aliases gathered from attendees' email addresses or Twitter handles, and their professional affiliations--e.g., news media, tech-developer, fashion figure--inferred from public Twitter self-descriptions. The second scorecard, "Sharing," aggregates volumetrics on Twitter, Facebook, Instagram and other media-sharing platforms, and distills these into one metric of social buzz. Finally, the third, "Preference," addresses the content, not just numbers: what people said about the event, and what they paid attention to.

This last metric relies on semantic analysis software to classify statements like "Dude you gotta see this," or "I hate Apple" into positive, negative, and neutral-brand statements. Joe McCann, the creative technology director, is also working on visual-image recognition software: algorithms to identify every time a company's logo or product is photographed on Instagram. Training an algorithm like this is a tricky machine-learning problem, so he's using the crowdsourcing service Amazon Mechanical Turk: Using Turk, he can pay strangers a few cents apiece to label photos that contain, for example, the image of a Microsoft computer, or a bottle of Coke. As his database of labeled photos accumulates, his algorithm becomes better and better at trawling Instagram or Facebook and tabulating every snapshot of a brand's label or product taken at an event.
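To make that pipeline concrete, here is a minimal, purely illustrative TypeScript sketch of lexicon-based sentiment bucketing. The word list, weights, and function names are our own inventions; Circle Media's actual semantic analysis software is far more sophisticated and is not shown here.

```typescript
// Toy lexicon-based sentiment bucketing, in the spirit of the "Preference"
// scorecard described above. The word list and weights are invented for
// illustration; real semantic-analysis software uses trained models.
type Sentiment = "positive" | "negative" | "neutral";

const LEXICON: Record<string, number> = {
  love: 2, great: 2, awesome: 2, gotta: 1,
  hate: -2, ugly: -2, boring: -1,
};

function classify(statement: string): Sentiment {
  const score = statement
    .toLowerCase()
    .split(/\W+/)
    .reduce((sum, word) => sum + (LEXICON[word] ?? 0), 0);
  if (score > 0) return "positive";
  if (score < 0) return "negative";
  return "neutral";
}

console.log(classify("Dude you gotta see this")); // "positive"
console.log(classify("I hate Apple"));            // "negative"
```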

Yet another component of the data stream we used to call experience.

Down The Rabbit Hole: Experiential Marketing

Little Marina and Microsoft’s Microtropolis are two examples of experiential marketing: massively expensive, grandiosely creative installation-style ad campaigns piloted by Mother.

Such projects are attended by few people, but targeted at influential audiences. They depend on spontaneous social and media buzz--Twitter, YouTube, Instagram, news, and industry chatter (e.g., among the fashion or tech worlds)--to spread the message. This is "soft power" marketing--less intrusive, more friendly than conventional ads. But how well does it work?

The companies that hire new-generation advertisers want to know how much the sponsored events actually affect people's thinking, not to mention sales. They don't want hand-waving or pretty pictures. They want numbers. This isn't just a fringe trend, either: The future of advertising looks to be not print or TV, but experience.

Yup: Traditional Advertising Is Pretty Much Dead

When you imagine advertising, what is your first association? Mad Men, maybe: The dated office culture of a sleazy business. Or, those lame clips of ex-golfers with arthritis who interrupt the YouTube clip you're watching to prattle about a medicine that's irrelevant to you. Or the list of corporate sponsors you fast-forward past at the beginning of your podcasts.

Even if you recall the ads later, your first impression of the brands is hostile: We don't grant advertisers the right to interrupt our news or entertainment anymore, especially not to sell us stuff we don't care about. We want ad people to know us better, and charm us with less predictable pickup lines, less intrusively. We're forcing marketers to get more creative, and to pay more attention to us: to what we expose about ourselves by how we behave, online and off.

What this means is marketing is becoming more Big Brother, but also more fun. The lines between advertising, entertainment, social media, and journalism are getting blurrier, as the tools for publishing content, attracting buzz, and judging response, are becoming universal. A niche is opening up, meanwhile: How to measure the impact of "Experiential Marketing."

What The Future Of Advertising Might Look Like

Mother New York's office looks like a mix of an art studio and a restaurant: The red British phone booth by the entry connects to Mother's home office in London; there's a taxidermied bear on the other side of the door. The lighting and mood of the place is cheerful, dress is hipster-casual. Plenty of Converse or Vans sneakers, tattoos, and colorful shorts, not unlike the vibe of a magazine. The bar serves Stella Artois to visitors.

I'm sitting under a mosaic of photos of mothers of Mother employees, one in a swimsuit with a pregnant belly, perhaps containing a future Mother ad person. Another mother, I'm told, is a Playboy bunny. Mother employees' business cards also have their mothers on the back.

Charlie McKittrick and Joe McCann continue to tell me how they imagine the future of ads: Advertising through events like Little Marina or Microtropolis, McKittrick points out, is much, much longer exposure than a minute-long TV spot, radio commercial, or magazine page.

"The stimulus has more data in it," says McKittrick, whose Cognitive Science studies at Boston University with philosopher Daniel Dennet may partly explain his geek's enthusiasm for hunting patterns in social datasets. "The brand has behavior and personality" expressed over time.

"Imagine Governor's Ball," says McCann, the developer who moved from Austin to Mother's New York office in September to help build Circle Media. He’s talking about the New York City music festival. "People's Instagrams are all muddy sneakers. You assume they're in a bad mood--"

"Unless they're hashtagged '#baller'," McKittrick interjects from the other couch, here in Mother's New York headquarters.

Each moment of an event--a music festival, say, or a fashion show--is packed with data on people, the two tell me. This data is readable from mobile devices: position; mood (from Twitter messages or Facebook posts); attention (what you're Instagramming or sharing about). One can imagine a graph of an event's social buzz, @mentions, # hashtags, Instagram shots, et cetera over time, in terms of any number of brands. Or, for that matter, bands. What are people saying about the Flaming Lips? Or MGMT? High-influence attendees could be tracked by their Twitter handles, to know whether the important people are paying attention.
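As a rough illustration of the data behind such a graph, here is a hypothetical TypeScript sketch that buckets timestamped brand mentions into per-minute counts. The data shapes and names are assumptions, not Circle Media's schema.

```typescript
// Hypothetical sketch: bucket timestamped brand (or band) mentions into
// per-minute counts, the raw series behind a "buzz over time" graph.
// The Mention shape and function name are assumptions, not Circle Media's.
interface Mention {
  brand: string;     // e.g., "Flaming Lips" or "MGMT"
  timestamp: number; // Unix epoch milliseconds
}

// Maps each brand to a map of minute-bucket start time -> mention count.
function buzzSeries(
  mentions: Mention[],
  binMs: number = 60_000,
): Map<string, Map<number, number>> {
  const series = new Map<string, Map<number, number>>();
  for (const m of mentions) {
    const bin = Math.floor(m.timestamp / binMs) * binMs;
    const perBrand = series.get(m.brand) ?? new Map<number, number>();
    perBrand.set(bin, (perBrand.get(bin) ?? 0) + 1);
    series.set(m.brand, perBrand);
  }
  return series;
}
```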

That’s the future according to Circle Media, anyway.

[Image by MitsyRetro on Flickr]


Data Analytics 2.0: Why We're Covering This Story

Data science is impacting media as the technology for classifying brain states and tracking people’s interests advances. Tweets, Facebook “likes,” eye movements, brain waves, and blood flow in the cortex: All inform scientists about what you’re feeling about what you’re reading right now. We’re here to report how this data is shaping the future of media, and how it may help long-form narrative survive. Armed with equal parts curiosity and skepticism, we’ll tell you what’s real and what’s over-hyped bunk.


Previous Updates


The Airtight Filter Bubble: Inside The Science Of Morality Mapping

July 2, 2013

What if you could measure the moral slant of media?

You'd dip a digital thermometer into the script of, say, a Woody Allen movie, a broadcast of Rush Limbaugh or Jon Stewart, or a Michael Moore documentary, and see a readout of where it fits on the spectrum of values.

Morality metrics could judge spin automatically. Censorship apps could be made for parents to protect their kids, or viewers to screen for their own worldview. Policing apps could be made to detect ethical bias in programming. And, as John Voiklis is planning in his new position at Brown University, "ethical robots" may be designed, sensitive to the ethical tone of language: Bots programmed to take offense at insult, or to fight for good.

This moral "coloring" of words is what Voiklis studies for a living. On The Ripple Effect blog, the online outlet for the social impact research firm Harmony Institute, Voiklis reported this week the results of a study he's been doing on the moral dimensions of network television. His coding system, coloring words in six dimensions, expands work on Moral Foundations Theory by psychologist John Haidt at the University of Virginia.

Voiklis' moral dictionary now consists of around six thousand words--approximately 10% of an educated person's vocabulary, Voiklis says--which three judges have spent hours labeling for moral coloring. Armed with this word database, Voiklis' algorithm takes in a TV script and outputs what proportion of its words fall into each of the moral bins: this "moral fingerprint" can be compared to that of the average liberal, moderate, conservative or libertarian to determine a show's or network's political leaning.
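As a rough illustration of how such a fingerprint could be computed, here is a toy TypeScript sketch under stated assumptions: the dictionary entries are invented, and the six bins anticipate the dimensions described in the next section.

```typescript
// Illustrative sketch of the binning step. The tiny dictionary below is
// invented; Voiklis' actual lexicon is thousands of hand-labeled words.
// The six bins follow the dimensions described in the next section.
type Dimension =
  | "security" | "justice" | "community"
  | "authority" | "purity" | "autonomy";

const MORAL_DICTIONARY: Record<string, Dimension> = {
  kind: "security", cruel: "security",
  fair: "justice", cheat: "justice",
  loyal: "community", betray: "community",
  respect: "authority", insult: "authority",
  pure: "purity", corrupt: "purity",
  free: "autonomy", coerce: "autonomy",
};

// Proportion of a script's dictionary hits that falls in each moral bin.
function moralFingerprint(script: string): Record<Dimension, number> {
  const counts: Record<Dimension, number> = {
    security: 0, justice: 0, community: 0,
    authority: 0, purity: 0, autonomy: 0,
  };
  let hits = 0;
  for (const word of script.toLowerCase().split(/\W+/)) {
    const dim = MORAL_DICTIONARY[word];
    if (dim) {
      counts[dim]++;
      hits++;
    }
  }
  for (const dim of Object.keys(counts) as Dimension[]) {
    counts[dim] = hits ? counts[dim] / hits : 0;
  }
  return counts;
}
```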

Your Values In 3-D

UVA's Haidt has found that moral language aligns along six axes: 1) Security (Kindness vs. Cruelty); 2) Justice (Fairness vs. Prejudice); 3) Community (In-group Loyalty vs. Betrayal); 4) Authority (Respect vs. Affront); 5) Purity (Innocence/Sanctity vs. Corruption); and 6) Autonomy (Freedom vs. Coercion).

"Morality space" can be mapped in these six dimensions--like Manhattan's length and width, Voiklis says, plus four more--and the distance between two TV shows, movies, networks, or writer's worldviews, can be visualized, almost like an address, in this space.

You can determine your own moral bias through the online survey YourMorals.com. The questions, each targeting a different dimension of the model, are like those on a personality test, or an online dating profile. You rate statements like "whether or not someone showed lack of respect for authority" for their "moral relevance" to you, and statements like "Respect for authority is something all children must learn" for how much you "agree," both on a scale from 0 to 5. Each statement applies to a moral dimension, and overall the survey determines which values you prioritize in your moral outlook.

Are Values Innate--And Can We Teach Them To Machines?

Jonathan Haidt believes the dimensions of morality are innate. But the relative importance of each dimension varies across cultures, and across political ideologies.

What his original Moral Foundations Theory study showed is the ethical coloring of political orientations. People who filled out the YourMorals questionnaire also gave their ideological sympathies--liberal, moderate, conservative, or libertarian. What Haidt found is that, generally, liberals value "Security" and "Justice", while conservatives' moral systems are biased toward "Purity", "Authority", and "Community."

Voiklis's new study for Harmony Institute has extended the political mappings onto TV content. His main finding: Contemporary television language differs most from conservative viewers' values. This lends some quantitative support to Republican complaints that the media is "liberal-biased." But on the other hand, conservatives were moral outliers: The pattern of their moral outlook differed not only from liberals', but from moderates' and libertarians' as well. This can be seen as quantitative support for the impression made by last week's Supreme Court rulings on gay marriage and immigration: that the conservative worldview has lost touch with modern times.

Can A Machine Understand What “Moral” Talk Sounds Like?

All this theory is very abstract, though. What does moral talk sound like? Let's hear some hate speech for a taste. Here's fashion designer John Galliano in his racist meltdown from February, 2011 in Paris, as quoted by Vanity Fair in its interview with him this month:

"I love Hitler. People like you would be dead. Your mothers and forefathers would be f***ing gassed and f***king dead today... You're ugly." To an Asian-Jewish couple at the same cafe, he said: "Dirty Jewish face, you should be dead" and "F***king Asian bastard, I'll kill you." As Galliano explained on June 13 in his first sober television interview, he was not only wasted, but deep in the throes of blackout alcoholism and addiction to benzos at the time. He'd been researching an anti-semitic figure whose epithets he says his drunken mind randomly free-associated to.

The morality-robots Voiklis is building might one day be able to show, from a record of all of Galliano's public speech, that his drunken rants were a moral outlier--a color uncharacteristic of his personality and beliefs--or not.

How Would Machines Ever Understand Kanye West?

Voiklis concedes that his approach to reducing linguistic meaning to numbers has its detractors. "In the humanities, there still persists this idea that 'we are not measurable. If you try to measure me, you're just being reductive.'" In fact, one of the judges Voiklis used to determine the ethical valence of 10,000 words from the dictionary is a humanist who is openly hostile to his approach. "But," Voiklis adds, laughing, "she poured in two weeks of work in her free time for me. It's always good to take in that skepticism from that circle of people, so you can show that you're taking a multidimensional approach."

Even if you're open to a social scientific perspective on media, you can see how reducing the ethics of language to numbers runs into trouble.

Take, for instance, Kanye West, riffing about modern-day racism on his new album, Yeezus, in the song "New Slaves":

"My mama was raised in the era when/ clean water was only served to the fairer skin/Doing clothes, you woulda thought I had help/But I wasn't satisfied unless I picked the cotton myself/You see, it's 'broke nigga' racism,/that's that 'Don't touch anything in the store'/and this 'rich-nigga' racism/that's that 'Come in, please buy more./What you want? A Bentley? Fur coat? A diamond chain?/All you blacks want all the same things."/Used to only be niggas, now everybody playing/Spending everything on Alexander Wang/New Slaves."

These lyrics are packed with plays on words--sarcasm, cultural references and evocations typical of the way people speak. How could a machine understand the statement "clean water was only served to the fairer skin" without knowing the historical context of Jim Crow? How would it get the bite of "I wasn't satisfied unless I picked the cotton myself" without knowing about the cotton trade and slavery? How would Voiklis' ethical robots deal with something like this?

We'll be following Voiklis' work on moral robots at Brown, along with Harmony Institute's latest in measuring the impact of movies, TV, art, and news. Stay tuned for more as we update this story.


Why Globalized Apps Need “Magnets Of Meaning”


June 20, 2013

I've been a foreigner a lot. Japan was home for three years, France for six months in college. The rush of trying to stay on top of a fast-moving cultural wave is addictive. Channeling a foreign frequency--the alternative rhythms of a place that's not your own--I've found transformational. You feel another culture changing you, your personality evolving, when you become a code-switcher. You develop new sets of symbols, new associations and priorities, new ways of seeing and expressing your self.

So I was fascinated to read about a new study, co-authored by Chinese-American Shu Zhang, a Columbia Business School student, and social psychologist Michael Morris, also at Columbia, which showed how a bilingual person's brain gets hijacked by symbols, tripping up her two linguistic selves and slowing her down. Native Chinese speakers are 11% less fluent in English (in words per minute) when talking to a Chinese-American face named Michael Lee than to a Caucasian one with the same name, and produce 16% more words describing American symbols in English (White House, Marilyn Monroe, Superman) than Chinese ones (Great Wall, dragons, yin-yang). When looking at China-evocative pictures, they are also 85% more likely to use a literal translation of the Chinese word for an object rather than the English.

Cultural symbols here are like corporate logos: Picture Apple's icon, the McDonald's arches, Nike's swoosh, Lacoste's alligator, Playboy's bunny, the Twitter bird, Starbucks' lady, MTV's letters or the red Solo cup you drank all that booze from in college. Doesn't each trigger a flood of personal memories? All those papers you wrote at Starbucks; the preppy frat guys you knew (or were) in popped-collar polo shirts; the shame you still feel about those magazines you took from dad's underwear drawer in fifth grade... The brands that invented the logos want them to sell more products. But, this study suggests, depending on the audience, they may be distracting.

The discovery that cultural cues affect language fluency so dramatically has significance beyond second-language speakers. We are all producers and consumers of speech, writing, and images. We're all trying to tell succinct stories, whether to convey news, to entertain, to educate, or to sell something--and we often don't think enough about the associations the images we use might have, or about how many different places our words and images might transport our audiences, depending on where they're coming from.

Japanese images are time-portals for me now. When I walk through the East Village past red-paper lanterns, I remember the outdoor onsen volcanic baths where my Japanese friends used to take us--a regular Japanese tourist destination, where families bathe in the river at night, by lantern light. The Ippudo ramen near NYU sends me back to the Momotaro-dori, or "Peach-Boy Street," the main avenue in my old prefecture's capital city, Okayama, where we used to go for noodles. Peach-Boy's image itself is evocative: whenever I see pictures of him in sushi restaurants, walking with his dog, bird, and monkey, my mind is flooded with memories of meeting Okayama friends around the Momotaro statue in front of the eki--I mean, train station.

But to you, assuming you're not Japanese and haven't lived there, Momotaro's image is just a naked Asian boy. What the new study suggests is that that blip of recognition--that jarring culture-clash of a symbol out of context--may basically clutter my communication channel with noise. These visual associative cues can communicate rich webs of content fast, but they can also interrupt fluency if they trigger the wrong chain.

"Understanding how these subtle cultural cues affect language fluency could help employers design better job interviews," ScienceNOW reporter Emily Underwood explains. "For example, taking a Japanese job candidate out for sushi, although a well-meaning gesture, might not be the best way to help them shine." The same lesson applies to media: displaying Japanese imagery, for example, to a Japanese viewer or reader of English will likely create unanticipated hurdles to comprehension. Picking the right chain of associations, on the other hand, in the pictures we use, might promote meaning.

Language-teachers and -learners have long known that immersion is the best and most efficient way to learn a foreign language. The Rassias Foundation at Dartmouth, where I learned my first words of Japanese in a two-week bootcamp where English was outlawed, and the Middlebury Language Schools (motto: "No English Spoken Here") have earned their reputations on this principle. The fact that English in Japan does not tend to be taught this way, but rather with lessons on grammar and vocabulary taught in Japanese, supplemented by visits by native-English-speaking assistant teachers, may be why Japan ranks only 22nd of 54 countries in English proficiency, according to Education First.

Immersion has the obvious advantage of heavy exposure to the sounds of a language, and forced engagement. But the new study, published in the Proceedings of the National Academy of Sciences, suggests that there's something deeper--and not strictly linguistic--about immersion abroad which facilitates fluency in a new language.

Speaking a foreign language is more social than linguistic; more like swimming or riding a bike than memorizing multiplication tables. Motivation and expressivity are key: having something you want to say, and a person you want to tell. You learn a language fast when you're dating a foreigner, or when you're one of two western men in a rice-paddy village, and want to make new friends. What you're doing in those situations is marinating in associations--soaking in experience.

You talk in new ways, not just new words, when you live immersed. You make friends differently, date differently, work differently. I'd never bowed to my boss before I met the mayor of Yakage. I'd never been called "sensei" or bowed to every morning, until I met my elementary-school students. I'll always remember my Japanese friend telling me to pay more attention to the beer in my dinner companions' glasses, so I could refill them (no Japanese person would refill her own glass, considering it rude), and since then I've been sensitive to others' bi-ru when I'm in Japan, but not in New York. The same beer glass in Osaka or New York is a different beer glass. Or as Basho put it: Even in Kyoto/ when I hear the cuckoo sing/ I long for Kyoto.

Images, whether in pictures or in words, are potent evokers of meaning. Webs of associations dependent on culture, they can animate or distract from our message--a crucial lesson for us culture-makers to keep in mind.


[Image: Flickr user N1NJ4]

Announcing Divvy: The App That Won The Co.Labs And Target Retail Accelerator


The Co.Labs and Target Retail Accelerator challenged entrants to design and build an app that would extend the Target customer experience into new areas, leveraging mobile software--native or web-based--to produce new and pro-social effects in their community, family, school, or social network. Our celebrity judges selected their finalists, who each received $10,000 seed money and a Target mentor for the next stage. But it was an app called Divvy, designed and built by a distributed seven-person team calling themselves Team Pilot, that won the $75,000 grand prize.

The field was extremely competitive: Our Retail Accelerator drew more than 350 registrants and 76 fully spec'd app entries between March and April 2013. Seven finalists were selected in May by a panel of industry experts including Target’s head of multi-channel Casey Carl; Matt Mankins, CTO of Fast Company; Tom Preston-Werner, CEO of GitHub, the industry-standard code repository host; Ruchi Sanghvi, VP of operations at Dropbox; and Suhail Doshi, CEO of the cutting-edge analytics company Mixpanel.

For our finalists, Target opened up its entire e-commerce API for the first time, allowing complete programmatic access to the same endpoints used by Target.com and the official Target apps. Armed with access to Target data--product descriptions, SKUs, store locations, and other foundational information--competitors were challenged to design and build apps which would use the Target APIs in new creative ways, mix them with other extant APIs, or create a new kind of user experience on top of Target infrastructure. All in less than 90 days.
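For a sense of what programmatic access like this might look like, here is a deliberately hypothetical TypeScript sketch of a product lookup; since the API was private, the endpoint, parameters, and response shape below are entirely invented.

```typescript
// Deliberately hypothetical sketch of a product lookup. The host, path,
// query parameter, and response shape are all invented for illustration;
// the real Target API was private and never publicly documented.
interface Product {
  sku: string;
  title: string;
  price: number;
  storeIds: string[];
}

async function lookupProduct(sku: string, apiKey: string): Promise<Product> {
  // fetch() is available in browsers and in Node.js 18+.
  const res = await fetch(
    `https://api.retailer.example/v1/products/${sku}?key=${apiKey}`,
  );
  if (!res.ok) throw new Error(`Product lookup failed: ${res.status}`);
  return (await res.json()) as Product;
}
```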


All About Divvy: How The Winning App Works

Divvy is a social shopping list that solves a nuanced problem: How to make group shopping with an app easy. But not just easy, easier than it would be without any app at all. Real-life obstacles to group shopping such as splitting the bill, sharing copies of the receipt, maintaining shared transaction history, and earning rewards points are difficult to solve programmatically. But they’re even harder to squeeze into a mobile interface in a way that feels frictionless.

Divvy won over the judges by attacking that friction head-on: It solves what we’ll call the "Mint problem": the necessity of manually tracking items you buy inside the app. The sobriquet refers to the personal finance app Mint, which only receives basic information about your purchases from the credit card processor and asks the user to categorize them by hand. Mint is a fantastic utility owned by a publicly traded personal finance company, Intuit.

Goal Of The App

Divvy's reason for being is to remove all the minor inconveniences and deal-breakers from group shopping--to make it practical and rewarding enough that people will actually do it. The upshot is obvious: Fewer trips to the store, less time wasted shopping, and a more transparent budget and expenditure picture for your family, group, or team.

Divvy also makes sure that users don't lose out on any of the perks of shopping solo. They still get their receipt, they still can connect their purchase history to a Mint account, they can still accrue rewards points on purchases, and they can still have easy access to repurchasing items quickly, as they would on their own discrete Target.com account.

The Use Cases

One of the beauties of this app is its flexibility. Group buying is organized smartly around shopping lists, not permanent user groups, making it easy to form ad-hoc groups.

Imagine a family going about their daily activities. One family member decides to take a trip to Target. She can add other family members to the shopping list, allowing them to contribute items. Or she could add a friend just as easily, without having to permanently add that newcomer to any kind of closed user group. The family member taking the trip to Target collects and buys the items, and the other family members or friends can settle up in-app, right away, along with receiving a copy of the itemized receipt and appropriately distributed rewards points.

Divvy’s Clever Twist

Divvy was one of the best developed concepts we received, and it held several surprises. One notable twist was a well-placed invitation system which uses email as a call to action to download the app and participate in shopping lists. Few if any of the other apps had such a well-considered distribution strategy. Other impressive features include the use of QR codes, which the app uses for returns and discounts, as well as small visual details, like the perforated edges along the perimeter of the table cells in certain list views.

The Judges' Early Feedback On Divvy

While we received a variety of shopping-list app proposals, this one was the best implementation, according to our judges. Several of them noted there is even more potential for features than the team included, which suggests this team knows how to build an MVP without getting carried away with excessive features (even though options abound). Their mockups and their team roster show they can execute on their designs. Another nice feature of their entry: clear descriptions of functionality and features. "A Divvy member is a friend that has accepted your invitation to join your Divvy [shopping] list," the opening slide of their walkthrough declares. That's the kind of clarity you need when presenting a new concept to users, and we hope it comes through in the final app.

The Obstacles Team Pilot Overcame As A Finalist

Some judges were skeptical that, despite the clearly well-designed and well-considered execution here, in real life this app just wouldn't hold up to everyday use. People are already fairly set in their ways, and it just may be that a paper list or ad-hoc text message exchange can take care of this problem for all but the most avid Target shoppers.

But after a development sprint in the post-finalist stage, aided by the Target mentors assigned to each finalist, Team Pilot was able to complete the app to such a degree that work is already underway to release it to the public under the official Target brand. It may become Target’s fourth mobile application; the others are the official Target e-commerce app; Target Ticket, a streaming service providing instant access to new movie releases, classic films, and next-day TV (currently available only to Target team members while it's being tested); and Cartwheel, which allows shoppers to curate discounts and organize planned purchases.



How Divvy Came To Be The Winning App

This seven-person team is an amalgam of developers and designers led by Chris Reardon, a user experience expert with 15 years’ experience and the team’s strategist. Reardon conceived the initial idea for Divvy, but the core concept was developed fully through group ideation.

The team came together from disparate parts of one advertising agency. Chris Reardon, Eric Kopicki, and James Skidmore work at TBWA-Chiat here in New York. Juuso Myllyrinne, Charlton Roberts, and Chris Kief work at Pilot, a NYC-based product development outfit owned by TBWA but operated separately. Steve White works for the TBWA-owned firm Integer in Colorado.

This wasn’t a pre-existing team, although each of the members has worked with one another on projects piecemeal. It was Chris Reardon who pulled Team Pilot together specifically for this project. (They borrowed the name "Team Pilot" from the Pilot shop where Myllyrinne, Roberts, and Kopicki ply their day jobs.)

Make no mistake: This is a team with specialized individual skillsets and a deep knowledge of their fields; not only that, but our interviews revealed a real creative cohesiveness which comes across saliently in the designs for the app itself.


Team Breakdown: How Each Member Made Divvy A Winner

The team that built Divvy had worked together before in fits and starts, but never all at once in any official capacity. Here we’ll get to know each of the team members, their contributions, quirks and--most vitally--their philosophy on usability.

Chris Reardon

Chris is a user experience designer with over 15 years of experience, nearly a decade of which he spent as a creative director for print publishers. He acted as project lead and creative director on Divvy, designing the wireframes--a kind of blueprint--and planning out the app’s task flow: the path users would take through the app to achieve the desired result. He led brainstorming and ideation sessions with the other six members of Team Pilot, where they rapidly generated ideas, tossing most away and keeping only what was necessary. Their model: Kill ideas quickly once it was clear they would fail. It wasn’t until the third major round of ideation that the team solidified their concept, Reardon told FastCo.Labs by phone; the team wanted to build something that wouldn’t require users to change their shopping behavior, but instead extended it in a way that solved significant problems.

Selling the idea to the judges required matching the Divvy prototype to Target’s branding and meshing with the retailer’s existing digital properties. His side projects include a “digital receptionist” app for unmanned buildings and other tools he says can help people form new habits or break bad ones.

Chris Kief

Chris Kief was the technical lead on Divvy and responsible for the team’s overall technical strategy. Kief worked heavily with Target’s API, which wasn’t publicly available, and told FastCo.Labs he was amazed by all the product information that could be accessed through it. He also worked to smoothly integrate Divvy with REDcard, Target’s debit and credit card program.

Kief was responsible for one of the most difficult strategic decisions the team made: Build Divvy as a native iOS application, or an HTML5 web app that could be loaded into a web view? With HTML5 they were able to prototype much more rapidly and build a more feature-complete prototype, all without necessarily losing App Store distribution--by building a simple container application, Divvy can still look and feel like a native app downloadable through iTunes, but its contents are all stored remotely and served up over a wireless connection.

One major trade-off of building the app in an HTML5 web view would be the inability to access contacts in iOS, since Apple doesn’t make contacts data available via the browser. It’s the kind of decision that seems counterintuitive: Social connectivity is at the core of what Divvy is. But in the end, they were able to just mock up the contacts functionality--which is easy to understand--and develop a more robust app to show off Divvy’s innovative functionality. When he’s not coding, Kief is an avid mountain biker and is partial to small California-based bike builders.
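A minimal sketch, in TypeScript rather than the team's actual code, of what mocking contacts behind an interface might look like; every name and type here is an assumption for illustration.

```typescript
// Hypothetical sketch of the workaround: hide contacts behind an interface
// and swap in canned data where the browser can't reach iOS contacts.
// Every name and type here is invented for illustration.
interface Contact {
  name: string;
  email: string;
}

interface ContactsProvider {
  list(): Promise<Contact[]>;
}

// Stand-in implementation for the HTML5 prototype.
class MockContactsProvider implements ContactsProvider {
  async list(): Promise<Contact[]> {
    return [
      { name: "Alex", email: "alex@example.com" },
      { name: "Sam", email: "sam@example.com" },
    ];
  }
}
```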

Eric Kopicki

Eric was the team’s digital designer. Chris Reardon showed Kopicki sketches for the UI and Eric transformed them into full-fledged designs for the prototype. Eric realized that design was key to making Divvy feel consistent with Target’s existing digital presence, and strove to make the look and feel of Divvy match Target’s existing app, from color scheme to textures.

Because of the need to rapidly prototype within the short timeframe of the Accelerator contest, he and Reardon shunned formal prototyping tools, opting instead for old-fashioned pen and paper for most of the designs. In the end, Kopicki put together the polished designs in Photoshop. One of his major tasks was to figure out how to visually represent different elements within Divvy to intuitively distinguish different lists from each other. He likes to get away from the digital creative world by sketching and painting watercolors.

Juuso Myllyrinne

Myllyrinne was the product strategist for Divvy. He did a lot of research into the retail world to see what’s out there already and what’s succeeding or falling flat and why. He also took a long hard look at Target to try to discern their internal strategy and what kind of product would be appealing to them.

This research formed the basis of the creative ideation process led by Chris Reardon. One of Myllyrinne’s main insights was that Target would be attracted to an app that creates more shopping opportunities for its customers. This was realized in Divvy, which lets you leverage your real-world social network so that any time one person shops at Target they’re not shopping only for themselves--essentially extending the in-store shopping experience to people located physically outside the store. Myllyrinne is a veteran of the digital and mobile world, but prior to this project didn’t have much experience in retail.

He did, however, know a lot about what didn’t work in mobile--he was the head of planning for Nokia’s N-series, their so-called iPhone killers. One thing he was quick to emphasize was that a phone app can’t just be a website squeezed into a smaller screen. It has to have a more fine-tuned purpose. Myllyrinne likes to go jogging but finds New York summer heat too oppressive for that activity (at least at this moment in July 2013--fellow New Yorkers, you know what we mean). Instead he’s been spending his time rounding out his skillset by learning to program with jQuery.

James Skidmore

Skidmore was Divvy’s producer and product manager. Although he’s only been in advertising for four years, he started selling ads in college and clearly has a knack for it. On Team Pilot, he had the difficult task of wrangling “seven really busy guys” outside of their normal work hours to build something that felt an awful lot like work. Skidmore also contributed heavily to research into the retail sector, which led to the concept of “social shopping” that eventually became Divvy.

Skidmore wanted to build an app that could be used with only minimal, if any, instruction. He pushed the team to fight their impulse to add more features into Divvy as development progressed. Instead, Skidmore argued that a solid app prototype should do one thing and do it really well. With that in mind, the team polished the core Divvy user journey into something that they could really show off to Target.

Steve White

White was Divvy’s account director and the only team member not based in New York. His job was to educate Team Pilot’s more technically minded members on the language of marketing and advertising so that they could really sell Target on the concept during the finalist stage. He pegged his own value to the team as being the guy who’s “always asking really hard questions.”

At one point White completely reversed course on Divvy, telling the rest of the team that it was a terrible idea. Rather than getting everyone down and pushing them to scrap the project, he instead turned it into an opportunity to strengthen their concept. He made everyone re-pitch the idea and convince him that it was worth pressing on. The result was a more refined and purposeful prototype.

With a background in industrial engineering, White’s career has gone through many different iterations within the advertising world, from offset printing to being a partner in a digital agency from 1997 to 2003. Like Chris Kief, he’s also an outdoor enthusiast, but is more of a hiker than a biker. What interests him most? Where digital financial transactions are going. His hunch is that, in the future of retail, how we transact around consumer items will be more important than the actual outlet where the consumer is buying.

Charlton Roberts

Roberts is a software developer and one of the team members who presented to Target in Minneapolis. Recruited by Team Pilot fresh out of college, he brought the superlative mobile web development skills that (combined with the rest of the team’s dearth of native iOS app dev experience) influenced Reardon and Kief’s decision to build Divvy as an HTML5 web app.

Under the hood, Roberts used the real-time web frameworks Node.js and Socket.io for concurrency, letting multiple users view and interact with data in real time. This was his first experience working with Socket.io, but he had begun working with Node.js about six months earlier, even though he spends most of his time doing front-end development. These days, front-end work means lots of experience with JavaScript, so the fit with Node.js--which runs JavaScript on the server side as well as the front end--was a natural adjacency. Roberts said he focused his efforts on building a web-based experience that would feel the same as a native app. He originally wanted to be an actor and went to college for acting, but added to his theater major another degree in computer science.
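Here is a minimal TypeScript sketch of that real-time pattern with Socket.IO: one shared in-memory list, with every addition broadcast to all connected clients. The event names and data shape are invented; this is not Divvy's code.

```typescript
// Minimal Socket.IO sketch of the real-time pattern described above: one
// shared in-memory list, with every addition broadcast to all connected
// clients. Event names and the list shape are invented, not Divvy's code.
import { Server } from "socket.io";

const io = new Server(3000); // listen on port 3000
const list: string[] = [];   // one shared shopping list, held in memory

io.on("connection", (socket) => {
  socket.emit("list:state", list); // sync the newcomer immediately

  socket.on("item:add", (item: string) => {
    list.push(item);
    io.emit("list:state", list); // push the update to every client
  });
});
```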


A Final Note Of Thanks To Our Entrants

When we embarked on the Target and Co.Labs retail accelerator we had no idea what to expect. We had put out a challenge to our newly minted readership, our site only weeks old, asking them to meet or exceed the abilities of a major corporation with renowned design cred and an obviously capable team of iOS and web developers. What we saw in the 76 completed entries was a remarkable diversity of strategic thinking, impeccable design, programmatic cleverness and--above all--originality. We continue to be humbled by our readers, and we thank you for sharing your ideas, sweat, pixels and code with us.--The Editors

With additional reporting by Jay Cassano.

This Simple Toy Shows Why Girls Hate Engineering


Growing up in Ireland, my three siblings and I had a favorite game: We called it James Bond. One of us would play the coveted role of secret agent, and the remaining siblings tried to stop them from snatching some top secret papers.

The twist? We’re all sisters--not a James in the bunch.

As children, nobody ever told us that it was strange for four girls to impersonate James Bond plots. Yet we girls do get the message early and often that engineering is not something for us. The CEO of website builder Moonfruit, Wendy Tan White, recently described in the Guardian how she speaks at schools about careers in technology:

"Raise your hand if you want to work in technology," I ask students. Predictably, but sadly, no hands go up. But when I ask girls to raise their hands if they like Facebook, every arm in the room reaches for the sky. The "geeky" label is still attached to technology in schools, so it's little wonder that students can be indifferent to the subject: it's not presented in a way that's appealing.There needs to be a greater focus on showing what technology allows you to do: cross geographical boundaries; make stuff; unleash your creative side; talk to friends; and share your latest musical creation.

That brings us back to GoldieBlox, a construction kit for girls from the age of six up. CEO Debbie Sterling is herself a Stanford engineering graduate, and after talking to young girls about the toys they love most, she came to a realization: Girls love to read because they love stories. My sisters and I were all voracious readers. “Most construction and engineering kits, which are touted as ‘technical and numerical toys,’ don’t include the storytelling that appeals to many girls,” reports Forbes.

So Sterling designed a kit to be used in conjunction with a storybook starring a girl inventor called Goldie, who builds machines in order to solve problems--in the first book, a spinning machine to help her dog chase his tail.

With the Spinning Machine, Sterling introduces girls to the idea of a belt drive and the concept of tension by using a plastic pegboard, spools and ribbon to teach them how to turn one and then multiple wheels as part of a story involving Goldie’s dog Nacho and several other characters.

GoldieBlox reached its $150,000 funding target in the first four days of a Kickstarter campaign last year (It eventually raised $285,811) and Toys ‘R’ Us will stock the $29.99 “GoldieBlox and the Spinning Machine” in more than 600 stores.

This is a generalization, of course--but girls are often more interested in machines and technical systems when they are placed in a larger context, where there’s a problem to be solved or an obvious benefit to society. It’s no coincidence that women study medicine in much higher numbers than engineering, even though both tracks are technical; it’s obvious that doctors help people.

Girls don’t just want to have fun--they want to know why.


Previous Updates


Why Aren’t All Executives Female?

June 25, 2013

Last month we took a statistical look at how job titles break down by gender. This month we’re looking at why women are not represented at the highest levels of their work sectors. (Read back through our previous updates below if you need to get caught up.)

A study published in the May issue of the Personality and Social Psychology Bulletin may help. It seems to suggest that women don’t take as much credit for their work as their male counterparts, undervaluing their contributions to a project when working with men. From the article’s abstract:

Women gave more credit to their male teammates and took less credit themselves unless their role in bringing about the performance outcome was irrefutably clear (Studies 1 and 2), or they were given explicit information about their likely task competence (Study 4). However, women did not credit themselves less when their teammate was female (Study 3).

The full study is unfortunately behind the ivory tower academic paywall, but Wired U.K. has more details on the study and noted that “teamwork is an essential component to most professional roles, so if women repeatedly undervalue themselves in group situations, in front of coworkers and employers, it could be extremely detrimental to overall job progression.”

This study offers a strong, plausible reason for why women are not as likely to be recognized as leaders in their workplaces: You often have to speak up for your accomplishments in order to advance in your career.

Another bit of research, put out last week in the Journal of Evolutionary Biology, reveals that female scientists (evolutionary biologists, in the case of this study) don’t present their work at conferences as much as their male colleagues. Women are underrepresented even relative to the gender gap that already exists in science fields. In other words, the percentage of female conference presenters is even lower than the percentage of female scientists.

Apparently, one of the main causes of this underrepresentation was that women turned down conference speaking invitations at nearly twice the rate (50 percent) as men (26 percent). One of the study’s main authors, Dr. Hannah Dugdale, elaborated on the implications of the study's findings:

“It’s important that we understand why this is happening and what we can do to address it--high-quality science by women has low exposure at the international level, and this is constraining evolutionary biology from reaching its full potential. We’re currently investigating the reasons behind this lower acceptance rate--it could relate to child-care requirements, lower perception of scientific ability, being uncomfortable with self-promotion--there are many potential contributing factors.”

It could also be related to the social psychology study above: If women don’t feel as confident in their accomplishments, then they may feel underqualified to speak at international conferences.

Obviously, neither of these studies looks directly at gender dynamics in the software development space. But looking at both of them, it seems like some aspects might apply to software while others might not as much. For instance, it’s definitely true that women coders are underrepresented in conference keynotes. And the observation of Dugdale’s coauthor, Dr. Julia Schroeder, that “[f]ewer women in top positions mean fewer female role models for students who aspire to be scientists” certainly rings true in the software world as well.

On the other hand, depending on what type of developer someone is, they might work on their own a large part of the time, possibly even freelancing from home. In that case, the social psychology of attributing success to male colleagues isn’t as relevant. Of course, many developers work in corporate office jobs where that dynamic could very much still be at play.

In other news, the fact that half of NASA’s eight newest astronaut trainees are women, selected from a pool of over 6,100 candidates, is a good sign. It shows that some progress is being made in STEM fields more generally, especially considering that until now only 10.7% of the people who have been to space have been women. With NASA astronauts being the elite of their fields, not to mention role models for every third grader in the country, having more women in space certainly bodes well for the prospect of more role models for women interested in STEM careers.



Minding The Gap: How Your Company Can Woo Female Coders

The software industry has a gender problem. Men far outnumber women, and while most of those men like (dare we say delight in?) having women around the office, the cool-bro rock star nerd culture makes it harder to attract, hire, retain and--most important--listen to women engineers. We'll be tracking successes, conflicts, and visionaries in this vein, and narrate as the status quo changes. We won't stop tracking this story until there are as many women working in software as men.



Why Don't Women In Tech Speak Up?

We’re not the only journalists tracking women’s roles in technology. Laura Sydell, a longtime technology reporter for NPR, covers the intersection of technology and culture, and we caught her story a few weeks ago about the changing lives of female programmers. We asked her to give us the behind-the-scenes scoop on her recent piece profiling prominent developer Sarah Allen, who led the team that created Flash video and now runs a mobile app design firm. Sydell has seen the reality of ingrained sexism and thinks that building momentum is the only way to undo industry habits.

“My take is that it’s about visibility,” Sydell says. “I mean who do you hear about in the news? Who do you see in the news? Twenty percent of programmers are women—that’s a significant number,” Sydell says. But where is the coverage?

One obstacle is that women in tech are sometimes reluctant to talk about sexism (“like it’s a disease they might catch,” says Sydell). She speculates that pointing out a gender disparity at their jobs may not feel like it will ultimately benefit their personal situation. “This doesn’t mean they don’t experience sexism,” Sydell says. “They just want to fit in and they’re working hard to get ahead.”

If her sources are mum about office sexism, Sydell says, they’re even less open about the flaws they see in hiring practices. “I have had some off the record conversations where people are like, ‘well I’m afraid to hire a woman if she’s around childbearing age because we can’t afford for somebody in a startup to take maternity leave.’ But nobody says, ‘I don’t want to hire a man of childbearing age.’”

Some Invisible Factors At Play

It makes sense that one obstacle to women’s advancement has been a lack of computer science exposure in childhood, which can leave women feeling they are at an insurmountable disadvantage once they start college. Expanding curriculum options and entry-level college courses, efforts being tested at schools like Harvey Mudd in California, may be one solution for leveling the playing field.

“You know unfortunately my take is that a lot of people who get into computers and programming start before college,” Sydell says, “which often does turn out to be young guys and so the women end up feeling intimidated.”

And it seems like computer science and engineering may currently be taught in a way that caters to how men think and conceptualize problems. “I remember people saying that for some reason guys are much more willing to work in the abstract for longer,” Sydell says. “I don’t know why this is, but women like to see pretty quickly that something they’re building is having an effect.”

This perspective could ultimately be a strength that draws women to coding, though, if other barriers are addressed. “It’s not that they can’t do the abstract,” Sydell says, “but once they see that programming can have this immediate effect they get more interested in it.”

Getting Private Views Out There

While reporting for her recent piece, Sydell attended a 25-person mentorship event with Sarah Allen for young entrepreneurs. After the event, the only three women in attendance came over to Allen and started chatting. “None of them talked about discrimination really,” Sydell says, but “they did talk about how they sometimes felt isolated. They all mentioned that in school they sought out a female colleague for support.” Yet even this small and understandable measure, they feared, could have unintended consequences. “They also debated whether it was possible to do too much networking with other women,” explains Sydell. “The problem is that the men have the larger networks and so you don’t want to limit your connections.”

Sydell has seen progress as an increasing number of hard working and qualified women enter tech, but she has also concluded that only a sustained, concerted effort will continue to draw women into the field. “I think one of the most important things that Sarah Allen said is find an industry where there isn’t sexism. If you get up to the higher echelons of anything the world is sexist. And the more money that’s involved, the more it seems to be guys. And what’s up with that?”

What It Feels Like To Be A Woman Programmer

We don’t hear from the women who are actually working in software often enough. Ellen Ullman, a former software engineer, recently penned an opinion piece in the New York Times called “How to be a ‘Woman Programmer’.” It’s an important firsthand account of what it actually feels like to be a woman working in technology--invaluable for men like me who will never subjectively know that actual experience.

I looked around and wondered, “Where are all the other women?” We women found ourselves nearly alone, outsiders in a culture that was sometimes boyishly puerile, sometimes rigorously hierarchical, occasionally friendly and welcoming. This strange illness meanwhile left the female survivors with an odd glow that made them too visible, scrutinized too closely, held to higher standards. It placed upon them the terrible burden of being not only good but the best.

Other parts of her article resonate with what we recently found in the gender gap by job title breakdown from Bright Labs: namely, that the more technical a job within the tech sector is, the wider the gender gap tends to be.

We get stalled at marketing and customer support, writing scripts for Web pages. Yet coding, looking into the algorithmic depths, getting close to the machine, is the driver of technology; and technology, in turn, is driving fundamental changes in personal, social and political life.

But perhaps the biggest takeaway for me and other male allies of women working in software is this: It’s important to talk about the challenges facing women in software, but it’s just as important to recognize the achievements of women engineers as programmers, not merely as trailblazers. Ullman writes:

But none of it [experience as a programmer] qualified me as extraordinary in the great programmer scheme of things. What seems to have distinguished me is the fact that I was a “woman programmer.” The questions I am often asked about my career tend to concentrate not on how one learns to code but how a woman does.


Hard Numbers: The Actual Percentages Of Women In Tech Roles

Bright Labs has released new research to Co.Labs about which roles are most male-dominated, and some patterns begin to emerge.

This is one of the most complete snapshots of the gender gap in technology employment we’ve seen so far. Co.Labs readers have been eating up the slices of data on the gender gap we’ve been dishing out. It’s clear that "women in software" is a topic that begs for more coverage. So we got in touch with our friends at Bright Labs to provide us with some previously unreleased numbers on what the actual gender breakdown is by job title.

The first thing to keep in mind with these numbers is that job titles can be pretty arbitrary and may not actually reflect the kind of work being done by any given individual. With that said, there are a couple of interesting trends worth highlighting here. But first, the stats:

Let’s break down these numbers. First of all, tech support positions tend to be the biggest dudefest: IT support, computer technician, network technician, and desktop support technician are all more than 90% male. Does this mean corporate suits feel more comfortable talking to a male IT geek about their problems with Outlook than a female IT worker? Or is the IT help desk a particularly unfriendly place for women to integrate? Either way, it’s important to note that these numbers are domestic; it would be interesting to see the gender breakdown in outsourced IT, or internationally.

On the other hand, “analyst” positions like data analyst, help desk analyst, and senior programmer analyst tend to be the least male-dominated--though still significantly so--floating between 53.8% and 75% male. With these numbers, a clearer picture starts to emerge: The less a job deals with the back end of a development environment or network infrastructure, the more open (for whatever reason) it is to women working in that role.

One final interesting data point to note is that senior software developers are 89.5% male, while plain old software developers are only (“only”) 78.1% male.
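
Curious how your own shop stacks up? Here’s a minimal sketch of how you might compute a similar breakdown yourself, assuming a hypothetical employees.csv with job_title and gender columns--this is purely illustrative and has nothing to do with Bright Labs’ actual methodology:

```python
import csv
from collections import Counter, defaultdict

# Tally gender by job title from a hypothetical employees.csv with
# "job_title" and "gender" columns -- illustrative only, not Bright
# Labs' methodology.
counts = defaultdict(Counter)
with open("employees.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["job_title"]][row["gender"].lower()] += 1

# Print the percentage of men per title, with the sample size.
for title, tally in sorted(counts.items()):
    total = sum(tally.values())
    pct_male = 100.0 * tally["male"] / total
    print(f"{title}: {pct_male:.1f}% male (n={total})")
```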

What’s the gender breakdown for these positions like in your company? What do you make of these numbers? Do you have your own research you’d like to share? Tweet @jcassano and @FastCoLabs with your facts, insights, and opinions.


Why The Developing World Needs Women To Be Online

Want to improve economic conditions in developing countries? As usual, the best approach is to focus on women.

If women can’t get online, then there’s no chance they’ll get a job in software. Here at Co.Labs we’ve been on a number-crunching kick when it comes to women in software. So far we’ve taken a look at two important slices of data: perspectives on the obstacles to getting more women into tech, and how new tech jobs are mostly going to men. Now we take a look at the third piece of the puzzle: the gender gap in accessing the Internet.

Earlier this year, Intel released a massive study crammed full of useful research. It’s a lot to digest, so we’ve pulled out some of the most provocative trends.

The report focuses on women’s access to the World Wide Web, particularly in developing countries. One consistent but unsurprising pattern is that the less economically well-off a country or region is, the wider the digital divide between women and men tends to be.

On average across the developing world, nearly 25 percent fewer women than men have access to the Internet, and the gender gap soars to nearly 45 percent in regions like sub-Saharan Africa. Even in rapidly growing economies, the gap is enormous. Nearly 35 percent fewer women than men in South Asia, the Middle East and North Africa have Internet access, and nearly 30 percent in parts of Europe and across Central Asia. In most higher-income countries, women’s Internet access only minimally lags that of men’s, and in countries such as France and the United States, women's access, in fact, exceeds men's.

Intel stresses that this is bad for business for two main reasons: first, the loss of revenue from online transactions and second, the reduction in economic opportunity for women who might use the Internet to find work. According to the report, there will organically be 450 million new women online by 2016--the report’s main recommendation is to boost this number by another 150 million in that time period. This will reportedly open up market opportunities of at least $50 billion.

Intel’s researchers also home in on the fact that 30% of women with reliable Internet access have used it to search for jobs or otherwise improve their economic standing. A lot of efforts to overcome the digital divide work narrowly on just getting more people online. That’s great, to be sure. But in a section called “not all access is equally empowering,” the authors write:

The Internet can convey numerous benefits to women, but unlocking these benefits depends on how deeply women engage online. “Fully engaging” on the Internet requires feeling conversant--knowing what to look for, how to search, and how to leverage networks, knowledge and services--as well as having fast, unrestricted, reliable access.

Our study showed that the longer a woman had been using the Internet, the more likely she was to report concrete benefits such as earning additional income, applying for jobs, and helping with her studies. Users with multiple platforms to access the Internet were also more likely to report these concrete benefits than users of either computers or mobiles only.

The report also features an interesting breakdown of the different demographic groups and how they are likely to access the Internet: computer-only, mobile-only, or multi-platform. In general, mobile-only users are younger women who use the Internet daily but are unlikely to use it to apply for a job. Computer-only users (laptop or desktop) tend to be middle-income female homemakers who often use the Internet for education and study. Multi-platform users, naturally, tend to be wealthier women who use the Internet daily and are likely to use it for education and shopping.

How one accesses the Internet also affects one’s attitudes about it. Women who access the Internet through both mobile and computers, for instance, hold the strongest belief that Internet access is a fundamental human right. This suggests that there’s a positive feedback loop at work: The more regularly women access the web, the more they begin to see it as an integral piece of social fabric--something that everyone needs to be a part of.

This is good to know because if we’re serious about overcoming the gender gap in software, the first job needs to be getting more women around the world online. Computer programming is a skill that any individual, with enough access, can learn on their own to improve their economic standing. This is true even--and perhaps especially--in the developing world. If the next wave of new computer programmers is going to come from outside developed countries, then it’s imperative to get more women online now so that they can enter the job market on equal footing.


Is The Tech Gender Gap Widening?

Despite all of the increased attention the gender gap is receiving, new data suggests that it might be widening rather than shrinking. Spoiler alert: We need more women engineers.

The data doesn’t lie. For all the talk about tech becoming a less male-dominated space, women remain vastly underrepresented in the industry. In fact, recent data from Bright.com suggests that the gender gap is widening--at least for the moment.

We recently covered a survey by the freelancing site Elance, an online marketplace for self-employment. That survey mostly focused on the attitudes of men and women freelancers towards how tech can become more open to women. A new survey from job search platform Bright tackles the nitty-gritty details of who’s actually snagging new tech jobs.

The number of jobs in the technology sector grew a substantial 3.8% nationwide in just the first four months of 2013 (compared to the last four months of 2012). In April 2013, some of the best-known tech geographies were among the fastest-growing regions in tech, including San Jose, Austin, San Francisco, Boston, and Seattle; however, other areas less well-known for their tech jobs also displayed strong growth, including Oklahoma City, Kansas City, Tucson, and Indianapolis.

Nothing new there. We know that tech is one of the country’s fastest-growing sectors, and that it tends to grow the most in traditional geographic hotbeds like the Bay Area. It is interesting to see that New York’s much-touted “Silicon Alley” didn’t make the growth cut while Kansas City continues to explode under the influence of Google Fiber.

The real question we’re interested in is, who are companies hiring to fill all these new jobs? The report tackled this question head on:

These jobs are trending to favor male job seekers. While the tech sector is predominantly male overall, an estimated 71% male, the titles displaying the largest increase in available jobs have also trended towards male-dominated roles, including Systems Administrators (89.7% male) and Senior Software Engineers (77.1% male).

Let’s take a moment to unpack these numbers. We know that men account for about three out of every four people working in tech right now. On top of that, the job areas that grew the most in the first four months of 2013 tended to favor men by an even larger percentage than the industry as a whole. If this trend keeps up, the gender gap may end up widening rather than shrinking, despite heightened awareness of the issue.

To be clear, we know the problem is probably even worse than it seems because a lot of the women who are counted as working in the tech sector often work in PR, HR, or marketing. The answer shouldn’t be to just keep hiring women in those roles. According to Bright Labs, the most in-demand job titles in April 2013 were all technical positions. Companies need to hire women engineers if they want the gender gap to shrink.


The war for engineering talent is so hot that companies are trying everything to lure top candidates. Sometimes these perks and bonus packages are a great way of finding talent who will fit in with the team. Other times, the tactics become so gimmicky and specific that they’re almost guaranteed to screen out a diverse set of candidates.

Take, for example, Saatchi and Saatchi Tel Aviv’s recent decision to screen candidates for a software engineering position by conducting interviews inside Diablo III.

The idea of testing skills like teamwork and thinking under pressure with a video game is worth exploring. The U.S. Army, for example, uses video games to train soldiers to distinguish friendly people from insurgents disguised as civilians in Iraq and Afghanistan.

Choosing a specific game with no relation to the job other than the CEO’s preference, however, was probably not the way to go if the company wanted any shot at hiring a woman. Technology is already a male-dominated field, especially in Israel. Moreover, Diablo III’s player base is 69% male, meaning that by choosing the game you’ve already narrowed the pool down to an incredibly homogeneous group.

Even if you did find a qualified woman gamer-developer, there’s another problem with conducting in-game interviews that most men would never even think about. Due to a combination of their relative scarcity and the anonymous nature of online gaming, women who play Internet games and identify themselves as such face a constant barrage of sexist trash-talking from their male counterparts. The problem is so severe that some women have created entire sites to document the misogyny they face playing online on a regular basis. Given the stigma attached to female gamers, it wouldn’t be shocking if women didn’t want to participate in an interview where their potential boss was giving them orders over the same system so many jerks use to berate them.

Although it’s tempting to search for new and inventive ways to find candidates, companies have to be careful not to automatically weed out too many qualified candidates just by the interview criteria. There’s a fine line between offering perks that help you find someone who will fit well on the team and searching for such specific traits that you’re almost guaranteed to find someone exactly like yourself. Unfortunately, Saatchi crossed that line.

This update was contributed by Gabe Stein.


According to a survey conducted by Elance, the greatest deterrent to getting more women in technology fields is a lack of female role models. Elance is a popular platform for freelancers, so survey respondents come primarily from that share of the tech marketplace. And some questions are specific to working from home. Still, it’s probably a safe assumption that a lot of the same trends apply for women working in technology fields whether remotely or in-office.

It’s definitely worth reading through the results of this (fairly short) survey. Here are three stats we’ve pulled out for you:

  • 66 percent say that for women to be successful in tech, there must be equal pay for women and men with the same skill sets
  • Only 22 percent of respondents believe technology needs to be made more “glamorous” or “cool” in order to appeal to women
  • 80 percent are “optimistic” or “extremely optimistic” about the future of women in technology

Female readers: Do these figures resonate with you? The most interesting stat here is the one about unequal pay, because it suggests that the women responding to this survey expect to be paid less right off the bat, even at more progressive companies. It’s also telling that nearly a quarter of respondents think technology isn’t “glamorous” or “cool” enough to appeal to women. Help us unpack what these stats mean by sharing your take on Twitter.


A recent NPR segment, “Blazing The Trail For Female Programmers,” profiled the lead developer of Flash video, Sarah Allen. It’s part of an ongoing NPR series called “The Changing Lives of Women.” NPR talked with Allen about what it means to work in a field where only 20 percent of her peers are women.

Today Allen is CEO of mobile design and development outfit Blazing Cloud. Beyond the firm’s volume of work, which speaks for itself, Allen is also getting business from startups that value Blazing Cloud’s genuine emphasis on diversity, as opposed to hiring women as mere “window dressing.”

Allen reflects on the decades she spent being the only woman on a development team and how little things have changed since. She tells a story about being one of six women at a 200-person Ruby on Rails conference a few years back. Coming out of that experience, Allen started RailsBridge, an organization aiming to increase diversity in tech through free workshops for “women and their friends.”

She also emphatically makes the point that the issue is a lack of support for women who already want to get into tech:

We've really proven that demand is not a problem. Every single workshop we've ever held has had a waiting list.

There are lots of other interesting moments in this quick 8-minute segment: According to NPR, the proportion of women studying computer science has actually decreased since the mid-20th century (that’s ponderous stat #4, for those counting). While you’re listening, also check out the April 29th broadcast of NPR’s All Things Considered for a complementary segment about Harvey Mudd’s efforts to get more women in computer science degree programs.


Should all-male software companies be on some kind of wall of shame? Here at Co.Labs we’ve celebrated the success of specific companies that have actively sought to increase diversity within the programming community. But what about those companies with particularly egregious records? Is it really so bad to have an organization that’s all one sex?

The creators of a blog called 100% Men think so, which is why they’ve put the spotlight on IFTTT, Posterous, Autonomy (an HP subsidiary), and the dating site Couple.me--all of which boast about as much gender diversity as a Freemasons meeting. (In fairness, Posterous was only 100 percent men as of 2011, and the company is being shuttered anyway; to their credit, they now have two women on staff: one engineer and the office manager.)

It seems like a total no-brainer for a dating product to have a gender mix on the design team, doesn’t it? Perhaps that’s why no one’s heard of Couple.me. Read previous updates to this story below.


Rails Girls Summer of Code (RGSoC) was started by Berlin Rails Girls organizers to help Rails Girls get into open source, a focus that distinguishes it from Google’s original Summer of Code. Ruby on Rails is a full-stack web framework built on the Ruby language; you can learn more about it here.

Just as in Google Summer of Code and Ruby Summer of Code, students will be paid so they're free to work on Open Source projects for a few months. Unlike those programs, the Rails Girls Summer of Code is about helping students to further expand their knowledge and skills by contributing to a great Open Source project (rather than producing highly sophisticated code).

Targeting women in tech is great, and helping them become active, productive members of the vibrant Rails and open source communities makes this program particularly exciting. To get involved as a student or mentor, write to summer-of-code@railsgirls.com or catch RGSoC on Twitter.


Stacey Mulcahy wrote a letter to her 8-year-old niece and posted it online. Why does that matter? Well, Mulcahy—aka @bitchwhocodes—is a developer who has personally come up against the shortcomings of the tech community when it comes to gender equality. Inspired by her niece’s decision to become a game developer when she grows up, Mulcahy wrote this letter “to a future woman in tech.” It’s full of hopes for her niece and for the developer world in general:

I hope that when you attend a meeting that is mostly male, that you never get asked why you are not taking meeting notes. I hope you say "fuck this" more than "it's okay".

...I hope that skill will always be held in higher esteem than your gender - if you had no skill, you would not be part of the discussion, and your gender is simply a modifier.

...I hope that no one ever tells you to "deal with it", "relax", or "ease up" because you refuse to laugh at something that simply is not funny.

...I hope that you attend conferences and find yourself complaining about long lines for the bathroom.

A lot of the lines in this letter will be familiar to anyone who follows, even casually, the grievances of women in tech. But it’s a powerfully original way of framing the issue: By focusing on the positive vision of the kind of developer community Mulcahy would like to be a part of, rather than just railing against the shortcomings of the one that currently exists, it’s empowering. (Hat tip to @NGA_Anita.)


We can talk about the gender divide in tech all day, but it’s also important to celebrate the achievements of women in software. In fact, if it weren’t for the work of one woman, Ada Lovelace, computers as we think of them today might not exist. Lovelace worked closely with Charles Babbage on his early mechanical computer designs. Although today Babbage is considered the "father of computing," it was actually Lovelace who is believed to have written the first computer program. She also imagined computers as more than just calculating machines, influencing the thought of several pioneers in modern computing.

Stevens Institute of Technology is holding a conference celebrating the achievements and legacies of Ada Lovelace on October 18, 2013. Proposals for papers are due May 14. From the institute:

An interdisciplinary conference celebrating the achievements and legacies of the poet Lord Byron’s only known legitimate child, Ada Byron King, Countess of Lovelace (1815-1852), will take place at Stevens Institute of Technology (Hoboken, New Jersey) on 18 October 2013. This conference will coincide with the week celebrating Ada Lovelace Day, a global event for women in Science, Technology, Engineering, and Mathematics (STEM). All aspects of the achievements and legacies of Ada Lovelace will be considered, including but not limited to:

  • Lovelace as Translator and/or Collaborator
  • Technology in the Long Nineteenth Century
  • Women in Computing: Past/Present/Future
  • Women in STEM
  • Ada Lovelace and her Circle

So if you care about women in software--then, now, or in the future--go ahead and submit a paper. It’s a good way to honor the legacy of the world’s first coder--a woman.


Previous Updates To This Story

For those who missed it, "donglegate," as Wired dubbed it, is the latest blowup after a display of sexism in the coding community. Although some people, including women in tech, took issue with the way Adria Richards handled the situation, Wired's Alice Marwick puts it in context very bleakly:

Regardless of the nuances of the incident, the fact remains that Richards faced a gargantuan backlash that included death threats, rape threats, a flood of racist and sexually violent speech, a DDOS attack on her employer--and a photoshopped picture of a naked, bound, decapitated woman. The use of mob justice to punish women who advocate feminist ideals is nothing new, but why does this happen so regularly when women criticize the tech industry? Just stating that the tech industry has a sexism problem--something that's supported by reams of scholarly evidence--riles up the trolls.

Jezebel also chimed in, pointing out how these kinds of jokes become possible and seem normal because of what a dudefest tech is. It seems particularly egregious that these guys made the jokes right as the speaker was talking about bridging the gender gap in tech.

Richards was distracted, mid-seminar, by a couple of tech bros sitting behind her making some shitty sexual puns about "dongles" and "forking." (She blogged about the full chronology of events here.) Richards did not enjoy the jokes. She especially did not enjoy the disrespect shown to the speaker, who happened to be specifically, at that moment, addressing programs designed to make the tech community more welcoming to women. Meanwhile, in the audience--Richards's photos reveal a sea of men--a couple of dudes felt 100% comfortable cracking the kind of crude jokes that people generally reserve for their home turf. And that's because, to a lot of dudes, tech is a space owned by men.


Bruce Byfield, who has written extensively on all things free & open source, gives an overview of sexism in the FOSS community. As a subset of the broader development community, FOSS has a lot of great things going for it because of its transparency and emphasis on collaboration. Unfortunately it still shares many of the same problems when it comes to gender. Byfield takes an informative look at initiatives that are trying to fix the gender imbalance, like the Geek Feminism Wiki, Ada Initiative, and Ubuntu Code of Conduct.

Carla Schroder credits Ubuntu for its all-purpose code of conduct, which she calls "a radical departure from the dominant 'freedom to be a jerk' ethos that prevailed before." As a result, Schroder adds, "Ubuntu has also attracted large numbers of contributors and users from more diverse walks of life than other distros."

However, in the last two years, FOSS feminism has paid special attention to anti-harassment policies for conferences. Most of this work has been developed by the Ada Initiative, an offshoot of the Geek Feminism Wiki, which has developed templates for policies that can be used either unmodified or as starting points for discussion.

The rationale offered for this emphasis is that anti-harassment policies can be a starting point for changing other aspects of the community.

All in all, a thorough and well-reasoned piece worth a read (even if the pagination on Datamation is ridiculously annoying).


Ashe Dryden, a Drupal and Rails developer, did the software community a huge favor by starting to answer the question "How can I help tech be less sexist?" She gives concrete, applicable steps that people can take to make conferences more diverse, like:

Anonymize and remove gendered pronouns from abstracts/bios before handing the data over to your proposal review committee. Someone who is outside of your proposal reviewing committee should be assigned this task.

Pretty simple, but makes a huge difference. Dryden's post is full of tidbits like that. It also includes a pretty thorough list of different marginalized populations, going far beyond gender diversity to include, for example, physical disability and economic status. But women in tech is still the focus of what Dryden is writing about.
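
If you’re wondering what that anonymization step might look like in code, here’s a minimal sketch in Python--purely illustrative, not a tool Dryden recommends--that blinds gendered pronouns in a proposal before it reaches reviewers. A real pipeline would also need to scrub names, affiliations, and other identifying details.

```python
import re

# Gendered pronouns mapped to rough neutral equivalents. Object "her"
# vs. possessive "her" is ambiguous, so both map to "their" -- good
# enough for blinding a review committee, not for publishable prose.
PRONOUNS = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their", "her": "their", "hers": "theirs",
    "himself": "themselves", "herself": "themselves",
}

def anonymize(text: str) -> str:
    """Swap gendered pronouns for neutral ones, keeping capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        neutral = PRONOUNS[word.lower()]
        return neutral.capitalize() if word[0].isupper() else neutral

    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(anonymize("She said her compiler passed; he reviewed it himself."))
# -> "They said their compiler passed; they reviewed it themselves."
```

It’s crude--pronouns aren’t the only gender signal in a bio--but for blinding a proposal review committee, crude is usually enough.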



The headline of this article in Forbes elides individual (and organizational) responsibility by saying that women are "accidentally" excluded from tech. That said, it still makes a great point: Tools like Codecademy are democratizing technology and thereby removing many of the traditional barriers to women, such as the difficulty of finding mentorship in a boys' club.


Dani Landers, a transgender woman game developer, gives an account of how her identity informs her game design decisions in Bloom, a game currently vying for funding on Kickstarter.

It's no secret that the games industry, by and large, lacks diversity. In this case, that is gender diversity. This is actually a huge shame as it limits the stories and points of views different types of people bring to the collective table of gaming.

Landers contrasts the way she handles representation of female characters and motherhood with the way major video game studios do, which is pretty obvious in her artwork.

The differences in the way I create concept art and models is pretty self-explanatory. Basically, notice how the female characters aren't half naked with giant breasts? Yea, this is a pretty easy one to be aware of...I'm kind of surprised this is even "different" to treat them with that level of respect.

The influential gaming site Penny Arcade picked up the story, with a really interesting take on how gaming can be a safe haven for certain marginalized populations.

Games themselves may offer a safe place for transgender, genderqueer, questioning, or other LGBT community individuals, but the gaming community has been less receptive. When Landers was promoting her game in one gaming community forum, users hijacked the thread and began posting "tranny porn," telling Landers she should find new work in the adult film industry.

Articles and features on gaming sites that bring up gender representation of any kind, be it transgender or otherwise, is typically met with the 'Why is this important?' 'How is this relevant to video games?' style responses. It should be apparent by now that games can be far more than just entertainment to some individuals. To some, it's a necessary escape, or a safe haven where the question of "Who am i?" can be safely explored.


So, a pretty prime example of women being reduced to sexual objects in the technology world is this article on Complex, "The 40 Hottest Women in Tech". At first glance the article is a weird mix of acknowledging sexism in tech followed by outright sexism from a publication covering tech. It begins:

Technology has been a boy's club for most of its existence. Just another unfortunate repercussion of the patriarchy. But that's been slowly changing, and over the last decade we've seen a number of wonderful, intelligent, and cunning women make inspiring strides in the field of technology. Through web development, social media, space exploration, and video game design, we see the world of tech becoming a more equal playing field. Here are 40 women we admire doing work in the field of innovation.

That opening is followed by a slideshow of scantily clad or conventionally "hot" women--one of whom is pointedly noted to have been a Playboy playmate. Commenters on the piece were justifiably outraged, writing:

How can you open with "sure, tech hasn't been friendly to women for ages, but it's better now!" and then proceed to objectify the women who have fought through this bullshit? Do you not see that you're only perpetuating the toxic culture?

And:

Funny you would mention patriarchy in your opening paragraph, then proceed to perpetuate it by subjecting all of the hardworking and talented women in this field to, effectively, a 'hot-or-not' list. Shameful.

It turns out, though, that the author of the piece didn't want it to turn out that way:

I was assigned to write the 50 Hottest Women in Tech by Complex and it really bummed me out, because the idea of perpetrating the same old gender divisions in an area like tech - which has predominantly been a boy's club throughout history - seemed like kind of a messed up thing to do. It represents the most banal form of internet content that exists. But it's hard to say no to a paycheck.

So what I tried to do was see if it was possible to make something called "The 50 Hottest Women in Tech" earnest and empowering and an actual good thing. I pretty much only included normal looking women, who were involved in something really crucial or exciting in the tech space. I made no allusions to their looks in the blurbs, and ended up with simply a long list of very exciting women.

Of course when the piece actually ran, I discovered that over half of the women I had included were replaced with people like Morgan Webb, complete with the usual lascivious dialogue. Sigh. It's hard to win when you're writing for Complex, but please know that I tried.

That explains why tech-entertainment celebrities are mixed in with actual female technology innovators like Gina Trapani and Marissa Mayer. It's the mark of a bad publication that it would not only assign a piece like this in the first place, but then so drastically alter it after the fact. Fortunately, people in technology fields weren't buying what Complex was selling, as evidenced by this tweet and this tweet.


Stay tuned as coverage continues!


[Image: Flickr user Eliah Hecht]
