
Finally, A Hackathon That Doesn't Destroy Your Brain And Body


This might be the world's healthiest hackathon. Unlike typical all-day code events, in which teams end up sleep-deprived, over-caffeinated, and gorged on junk food, this event was meant to give developers insight into their own health and behavior as they rushed (not too fast, now) to build apps that help us sleep better.

Co-sponsored by Ace Hotel, the Clinton Foundation, Tumblr, and Jawbone, the codeathon for health drew 25 students from New York City to build apps that help people sleep better. Each of the six participating teams was given a Jawbone UP wristband health tracker. The UP bands were used for development purposes but also gave the developers insight into their own behavior patterns and how their lifestyles might be undermining their health goals. Throughout the codeathon, there was an equal emphasis on building apps for health and on developers living healthier themselves.

The idea grew out of a joint interest between Tumblr and Ace. Given both companies' strong base of creatives, the original idea was to hold a gaming-oriented hackathon. "We originally just talked about wanting to do a game jam," says Max Sebela, creative strategist for Tumblr. "We knew that we wanted to develop digital products with one another in this space. It was very broad and we didn't know what it would look like."

Explaining how the sponsors came together to refine the codeathon's focus, he adds, "We eventually got brought into conversation with the Clinton Foundation and became kind of enraptured with this idea of health. When we got in touch with them it all came together. That was the missing piece: the narrowing of the idea coming from them."

Why do we care about health, anyway?

The codeathon for health is part of the Clinton Foundation's Health Matters Initiative. The foundation's work has typically focused on international issues, such as global health and economic development. But late last year it launched a new program aimed at reducing preventable chronic disease through behavior change.

"Seventy percent of Americans are dealing with managing a chronic disease," Lexie Komisar, strategic advisor to the Clinton Foundation, tells Fast Co.Labs. "And 75% of U.S. health care spending goes to that management."

By examining their collective strengths and interests, the group of unlikely partners settled on sleep as the particular aspect of health to focus on. In addition to movement tracking, Jawbone's UP wristband unobtrusively records the length and quality of the wearer's sleep. Sleep also seemed a natural place where health and hotels intersect.

"Sleep being one of the core tenants of health is why we eventually chose it," Komisar explains. "The fact that 10% of the American population suffers from chronic insomnia and around 60 million Americans suffer from some type of sleeping disorder makes this crucial to address. If we can really provide these tools to help Americans make these behavior changes it can have a huge impact in the way in which Americans live as well as where we spend our money."

It was important to the sponsors that, in addition to coding for health, the codeathon itself be a healthy environment to work in. Through yoga and stretch breaks, walks during the lunch hour, and healthy food served throughout the conference, the codeathon succeeded in modeling what a healthy developer work life should look like.

"Once we were talking with the Clinton Foundation, we wanted to make sure that everything that would take place would be something that promoted good health in some capacity," says Sebela. "And from there it sort of designed itself. We realized we needed a certain amount of movement every day and we needed to ensure that we're at least giving participants a mandated rest period where you're working a normal work day with normal work hours that are reasonable and healthy for you without having to think about cranking through code till 3 a.m. existing only on caffeine."

Within the context of creating a healthy environment for developers, the organizers recognized that it would be most effective to focus on small changes readily exportable to developers' everyday lives. The goal was for the event to function more like a tutorial on how to code while staying healthy, rather than a weekend spa-like retreat with no lasting impact.

"I think one of the things that's been interesting is understanding that not everyone in America is going to wake up, eat kale tomorrow, and run five miles," Ben Sisto of Ace Hotel jests, then elaborating: "We've been interested in showing ways that you can create very small, incremental changes. When we brought in the yoga instructor, they originally wanted to do work with yoga mats and large–scale stretching and we thought that it'd be better to do things that are just at your desk that you can do for five minutes in a normal work day."

"It's a big ask to get people to do organized activity in the context of one these things. But ultimately I think it's been a huge success," Sebela adds.

Coding for health toolkits

Throughout the codeathon it was clear how well these unlikely partners complemented one another, each bringing something unique to the table.

Harvard Medical School's Division of Sleep Medicine provided the codeathon with a dataset from a large cross-sectional survey of corporate workers, primarily businesswomen, that assessed their sleep habits and the social factors that might influence those habits. The dataset is normally closed, but through the partnership with the Clinton Foundation, Harvard Medical School agreed to release it exclusively to the codeathon.

"We really want to be the bridge between academia and the technical application of how this data can impact research and policy," Komisar says. "We're really thankful that Harvard provided that for the first time to our participants."

For its part, Jawbone provided every participant with an UP, which logged participants' sleep and movement throughout the weekend. One of the criteria teams were judged on was, in fact, the collective health of the team; in the case of one project, that criterion kept it just shy of winning the competition.

The final projects from all six teams also made use of the API for Jawbone's UP wristband. The API is currently closed with plans to open it up later this year, making last weekend's coders the first to experiment with it outside of Jawbone's direct business partners.

"We really focused on creating a two–way API, which is somewhat unique for the health space," says Jeremiah Robison, VP Software at Jawbone. "Most of the APIs are about pulling data out. We wanted to make it a compelling platform for communication because so much of the foundation for our system is that heath is a social activity."

Robison lays out the philosophy behind Jawbone's soon-to-be-open, two-way API: "We started with a strong philosophy on two vectors. The first is that the user owns their data; users should be able to take their own data and move it from one system to another. That's kind of unique in the API world and it certainly was when we first got into health. Everybody and their cousin wants to own the health data, but in reality a user owns their data. The second principle is that it's going to take more than one company to solve the health problems in the world and so the only possible path in that scenario is to open it up for other people to use."

"The API was very, very simple to use," according to Jamie Roberts of the winning team. "It's a really new API so for it to be laid out with clear directions for OAuth and all the intricacies of the values and even the types of values within the documentation was a pleasant surprise."

The winning apps

Roberts and his teammates Adam Leibsohn and Daniel Finkler developed Fitcoin, a social betting pool for health challenges. Users are able to join teams with friends or strangers and put down money on who will win a specific challenge like meeting certain sleep goals or walking a certain number of steps every day, all logged by the UP.

One of the aspects that contributed to Fitcoin winning was the fact that it had a baked–in distribution method to account for the fact that not everyone in the world is going to own an UP.

"We really wanted to think about how this would go viral," Finkler explains. "And one of the problems with the distribution would be that not everyone currently has a Jawbone UP band. We were thinking about how do we get our friends on Facebook to care about this and leverage the social network to pull people in and get people involved."

The solution was to allow Facebook friends to bet for their friend with an UP who is participating in the competition. (You can also bet against, if you're that kind of friend.) Leibsohn adds, "One idea we were discussing was that based on their side-bet on you they get a portion of what you win, so they get a return on their investment on you."

Another of the most compelling ideas, and the runner-up in the competition, was Git-Sleep. In addition to being an awesome developer-centric pun, it has a lot of potential for impact. Unlike Fitcoin, which is aimed at a general audience, Git-Sleep and its command-line interface are clearly targeted at developers.

The team behind Git-Sleep comprised Sarah Duve, Max Jacobson, and Ruthie Nachmany, all students at The Flatiron School in Manhattan. Git-Sleep is a simple tool that connects with your UP to log your sleep. When a developer tries to commit code with Git, it checks how much sleep they got the night before. If it wasn't enough, it displays a prompt along the lines of "You only slept 3 hours last night. Are you sure you want to commit this code?"
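Git-Sleep itself ships as a Ruby gem; purely as an illustration of the mechanism, here is a minimal pre-commit hook sketch in Python, with the UP lookup stubbed out and an assumed sleep threshold:

```python
#!/usr/bin/env python3
# Sketch of a Git-Sleep-style pre-commit hook (the real tool is a Ruby
# gem; this Python stand-in stubs out the UP band lookup).
import sys

MIN_HOURS = 6.0  # assumed threshold; a real tool would make this configurable

def hours_slept_last_night():
    # Placeholder: the real tool would read this from the UP band's API.
    return 3.0

hours = hours_slept_last_night()
if hours < MIN_HOURS:
    # Git hooks don't get interactive stdin, so prompt via the terminal.
    with open("/dev/tty") as tty:
        print(f"You only slept {hours:.0f} hours last night. "
              "Are you sure you want to commit this code? [y/N] ",
              end="", flush=True)
        answer = tty.readline().strip().lower()
    if answer != "y":
        sys.exit(1)  # a non-zero exit status aborts the commit
sys.exit(0)
```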

"It was partially inspired by those apps that prevent you from calling your ex at 2 a.m. or prevent you from sending a Gmail when you're drunk," Duve reveals. With its focus on developers and its integration with GitHub, Git–Sleep already has a base of users that would likely get behind it very quickly.

"We heard some really encouraging words about finishing the project and that people might actually find it useful," says Jacobson, adding that "the UP band is really popular with developers who are obsessed with quantifying themselves." Nachmany picks up on this, remarking that "developers are really into the quantified–self movement," which may give rise to wider adoption of Git–Sleep. The team has plans to open–source the tool, which is already available as a Ruby gem.

As each team gave their presentation in the Ace Hotel lobby, other guests could engage with the codeathon if they wanted to, but they could just as easily ignore it. The point is that it was readily accessible to them and integrated into their time in the hotel.

Sisto explained why the codeathon was held in the lobby with the presentations mic'd for everyone to hear: "Ultimately we want guests to feel activated when they're staying here. We do a lot of events in the lobby––one night might be a drag show, one night might be a codeathon."

Another noteworthy aspect of the codeathon was the gender breakdown, which came in at 10 women and 15 men. Granted, it's a small sample size, but seeing a 40-60 breakdown without any artificial quotas or targeted outreach is a welcome sight for the tech scene. Perhaps the fact that the group was primarily composed of students is a sign that a demographic shift is underway.

Sebela commented on this, noting that it's "one of the joys of reaching out to students in New York because you have institutions like Hacker School and Flatiron School that preach diversity so front and center." He added, "So when you do want to mobilize students you already have this wonderful diverse pool that hopefully comes together on its own."

It's easy to be skeptical of codeathons these days, particularly given the critiques that have been coming out recently. But with healthy food, yoga, a full night's sleep, and a diverse environment, even the most hardened hackathon cynic has to walk away from this codeathon feeling energized about the potential for this space to both do good for the world and be good for developers themselves.


Apple's iPhone Fingerprint Sensor May Use This Unconventional Technology


Fingerprint sensors and the iPhone 5S are two pieces of technology that have been dancing in and out of the headlines for months. Some commenters think the pairing is likely; others say it's silly.

But fresh fuel was thrown on the rumor fire late last week by leaked photos of a purported iPhone 5S front glass cover that seemed to have space in it for new components next to the home button...which is where Apple's own code suggested a user may scan their print on a future iPhone. And then there's news of another fingerprint patent that Apple gained when it bought fingerprint firm Authentec last year, via PatentlyApple.com. Where other fingerprint scanners have proven unreliable in terms of accuracy or spoofability, leading to doubts Apple would adopt the tech, this new system is quite crazily clever. In fact, it's clever enough to image the tiny ridges and furrows of flesh that make up a finger even through a "cellphone case" or an "LCD display cover plate."

Many fingerprint scanners use an optical technique that either images a whole fingertip at once or in slices as you slide your finger over a small sensor gap. These do work, pretty reliably...though it has been noted that variability in the sensor production can seriously affect the final device accuracy, and that a clever enough criminal could spoof the scanner with an image or a reconstruction of a finger. This is basically because to fool an image–sensitive system you only need the right kind of image––remember the latex fingerprints in the film Gattaca? If you doubt this, MythBusters has already proved the point, and you also need to read this fascinatingly odd story of a Brazilian doctor and her "bag full of fake fingers."

But in the Authentec/Apple patent a fingertip is imaged via a different technique: Radiofrequency scanning. Skin and flesh, thanks to the cocktail of chemicals they contain, have their own electrical signature––meaning a human body can in fact block a radio signal of the right frequency, while other frequencies sail right through us more or less unaffected. The sensor in the new patent makes use of this fact by sending out very precise radio signals over a very short range and detecting the signals that have been affected by the bumps and gaps in a human fingertip. Basically the tiny ridges of flesh in a fingerprint affect the electrical signals coming from the sensor array in a measurable way, allowing the device to calculate the position and alignment of all the whorls and loops.

The advantage of this system is that you couldn't fool it with an image of a fingerprint or a latex cast of a fingerprint because the RF signals from the sensor have to interact with a material that has a flesh–like radio response in order to register the print. It's suggested that the sensor can also detect live tissue beyond the simple skin of a fingerprint, which removes the one scary scenario of fingerprint tech: The idea that a really determined thief would also have to "steal" the finger in question. That's unlikely to be a problem with a simple smartphone, but it does have implications for some of the more secure uses of fingerprint tech.

This sort of tech seems very Apple, don't you think? Novel, unexpected, and something of a lateral approach to the problem. It also seems like it would be both more secure and easier to use than other types of fingerprint scanners. Does this mean Apple is going to use this tech in the iPhone 5S (iPhone 6, or whatever it's going to be called)? Not necessarily. But it is definitely one in the eye for the naysayers who dismiss the idea of a fingerprint-sensing iPhone.

[Image: Flickr user Martin Cathrae]

How This Brain Scientist Used Video Game Tech To Break New Ground


You might call University of Pennsylvania researcher Michael Kahana a navigator of the mind. By pushing the limits of technology and research, he finds maps of the world in the brain that no one has seen before. His latest discovery, the human "grid cells" that help us navigate in space, required overcoming a particularly difficult challenge: Finding a way for study participants to activate the areas of the brain responsible for spatial navigation without leaving their hospital beds.

The Grid Of Your Brain

Ten years ago, Kahana was the first to discover human place cells, neurons that fire only when you're in a particular location. The "cognitive map" these cells generate becomes the stage where your brain replays spatial memories, imagines the way to get somewhere new, and, probably, animates dreamscapes when you sleep.

Now, Kahana's lab at the University of Pennsylvania, in conjunction with teams at Jefferson University Medical College in Philadelphia and UCLA, has discovered the first evidence of another navigational cell in the human brain previously found in monkeys, rats, and bats: Grid cells. Grid cells communicate with place cells by telling them about your location. Like your phone's gyroscope, these grid cells determine location by analyzing movement information from your limbs, ears, and eyes. Unlike place cells, which only fire for one spot, each grid cell responds to a pattern of places, forming a triangular–lattice "grid" map of space.
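One standard idealization of such a cell (not drawn from the study itself) sums three cosine gratings rotated 60 degrees apart, which yields exactly the triangular lattice described above:

```python
import numpy as np

# Illustrative grid-cell model (an idealization, not the study's data):
# summing three cosine gratings 60 degrees apart produces firing fields
# arranged on a triangular (hexagonal) lattice.
def grid_firing_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    rate = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):  # 0, 60, 120 degrees
        # Wavevector for a hexagonal lattice with the given peak spacing.
        k = (4 * np.pi / (np.sqrt(3) * spacing)) * np.array(
            [np.cos(theta), np.sin(theta)]
        )
        rate += np.cos(k[0] * (x - phase[0]) + k[1] * (y - phase[1]))
    return max(rate, 0.0)  # rectify: firing rates are non-negative

# The response peaks at every lattice point, e.g. at the phase offset:
print(grid_firing_rate(0.0, 0.0))
```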

Kahana says that these place–cell and grid–cell navigational systems work together to help us move through space.

"The cognitive mapping system probably relies on numerous representational variables," he says. "Place and grid cells are both important, and there are probably a bunch of other ones, too."

So grid cells aren't the only navigation cells in the brain, but they're a critical component of the system. Finding them in humans shows that the mammalian brain's version of GPS seems to span the evolutionary ladder, from rats to bats to people.

Where No One Has Gone Before

Kahana's grid-cell study charted new territory, in this case a scientific frontier. Research ethics allow scientists to implant electrodes into the brains of animals like mice to record activity while they perform tasks like running through mazes. But implanting electrodes in humans purely for research would require invasive brain surgery, which is ethically off-limits. So how did Kahana dive into the human brain?

He found study participants who already have electrodes in their heads: epileptic patients. People with epilepsy regularly have electrodes installed via brain surgery to monitor their seizures. Because seizures commonly originate in the temporal lobe, the part of the cortex just under your temples that includes the hippocampus and the amygdala, many epileptic recordings come directly from the parts of the brain involved in spatial navigation and memory. While having their brains clinically monitored in this way, many epileptic patients agree to let scientists gather data as well.

Even with willing participants with the necessary brain implants, Kahana ran into another problem. The 14 patients involved in the study had to be hooked up to wires and hospital equipment, so they couldn't very well be asked to walk a maze.

Instead, the scientists asked their subjects to navigate virtual space using a video game, developed by a comp-sci undergrad in Kahana's lab with PandaEPL, a library for programming spatial experiments created by Kahana's former student Alec Solway. The goal of the game is to bicycle around a cityscape, searching for four hidden objects.

Each patient searched for each object 12 times. As they learned the environment, they gradually got the hang of the routes, reducing their search time from 14 to 8 seconds on average. The researchers divided the virtual space into a 28-square array. As the patients navigated from place to place, the scientists watched 893 cells in their entorhinal, hippocampal, parahippocampal, and cingulate cortices, as well as the amygdala, the mood-related structure buried deep in the temporal lobe.

Like all great explorers, Kahana understands that making discoveries like grid cells requires a combination of grit, determination, and the willingness to go where no one has before.

"You have to have the type of personality where when somebody says something can't be done, you say it can, just to be contrarian. That's the personality of a CEO, or somebody who starts their own business," says Kahana. "You have to be able to go deeper, or go to a new place where no one could go before, either because the technology didn't exist, or no one was willing to go to the effort. If everything already exists that you need to make the discovery, probably you were wrong. The way you get a real discovery is by literally breaking new ground."

By doing what no one had ever done before, watching human neurons fire during real–time navigation, Kahana found something new: Cells that fire at points on a spatial grid, just like the ones previously seen in animals. Maps in our heads.

One More Thing

Kahana wouldn't let me run this article without mentioning one crucial fact about his research: There is no scientific evidence that women are worse at navigating than men, despite how many times reporters ask him the question.

"I always tell them the same thing: There's no scientific evidence to support the claim that women have worse spatial memory than men," says Kahana. "I've told [journalists] this every time; they've never quoted it. It's interesting to me that somehow society has decided women can't find their way around, and no scientific evidence in the world will convince them otherwise. Now let's talk about what you want to talk about. I'm done with my diatribe."

[Image: Flickr user Jason Tamez]

Genomic Vending Machines Coming To University Campuses


As college students make their way back to campus this week, they might find a peculiar vending machine posted up around the biology department. A Palo Alto company is running university pilots of a series of machines that dole out complex data computations to passersby, democratizing access to computing power that was, for many schools and individual students, financially out of reach.

The machines analyze both full genome sequences and exomes, the protein-coding portions of genomes (typically 1 to 2 percent of the whole) where roughly 85% of the mutations behind common diseases, stemming from malformed proteins and erratic genetic code, tend to surface. The jury's still out on whether sequencing exomes alone is as scientifically rigorous as sequencing the whole genome, but exome analysis is faster and cheaper: Bina's machine claims to do it in 30 minutes.

For all you genome crunchers, Bina Technologies has a novel solution: while it'll cost a little more than a dollar bill, its vending machine-style, pay-per-computation machines offer cost-effective processing without the need to purchase equipment.

Bina's machine, the Bina Genomic Analysis Platform, was unleashed on the public in April 2012 as the latest in a line of ever-smaller, more affordable gene sequencing devices; it uses "analysis pipelines" built from finely tuned algorithms and a bank of CPUs, GPUs, and FPGAs in place of a supercomputer. Indeed, when the machine was released, Bina claimed the hot-rod setup outpaced the affordable alternative, the Amazon Web Services cloud, by 10 to 100 times.

Cutting down full genome sequencing from nearly a week to under four hours is an obvious advantage for departments sharing any expensive machine, but Bina Technologies' shift to per–computation charging offers options for researchers operating on different projects with different schedules: Instead of scheduling around monthlong device rentals, as Bina Technologies previously offered, the Bina On–Demand service bills at the end of the month.

The arrangement is still experimental, however, and there's little point for Bina Technologies to keep machines around if researchers don't commission enough computations to make them profitable. Installations will be given a six-month grace period to let campus researchers get used to the machines' on-demand, per-computation setup.

[Image: Flickr user Peter Thoeny]

Starbucks Could Use Your Social Data To Find New Locations


Deciding where to build a new coffee shop or fast food outlet is an expensive and risky business. Traditionally, planners use data on demographics, revenue, nearby businesses, and aggregated human flow, much of which is expensive to gather. But the expense is worth it, because when it comes to foot traffic, even a few feet can make a huge difference.

"Open a new coffee shop in one street corner and it may thrive with hundreds of customers. Open it a few hundred meters down the road and it may close in a matter of months," explains University of Cambridge researcher Anastasios Noulas and colleagues in a new paper that puts a social spin on choosing the best retail location.

In addition to the usual geographical data, Noulas's team wanted to see whether adding freely available Foursquare check-in data, combined with machine learning algorithms, could help planners choose better locations. The team focused on three chains ubiquitous in New York, Foursquare's home turf: Starbucks, McDonald's, and Dunkin' Donuts.

The researchers started by looking at features that might affect foot traffic, like what other businesses operated near a given location, including the number of competitors, and how diverse those businesses were. They also took into account nearby landmarks that help attract customers. People coming out of a train station, for example, often head to a Starbucks, so features like these were included in location descriptions.

Then the team turned to Foursquare, which they used to understand how people flow between locations. By analyzing 620,932 check–ins shared on Twitter over a period of 5 months (about 25% of all Foursquare check–ins during that period), they were able to determine if an entire area is popular, instead of just one retail location, and analyze how users move from one retail location to another within an area and from outside it. For each feature they identified, the team computed a score that they used to rank each candidate location.

These features and values were used to describe each location and to train several supervised machine learning algorithms: Support Vector Regression, M5 decision trees, and linear regression with regularization. Each algorithm was trained 1,000 times using a random sample of two-thirds of the locations and their known levels of popularity. The trained algorithms then tried to predict how popular the remaining third of locations would be, producing a ranking with the predicted optimal location at the top. That predicted ranking was then compared with the true popularity of those locations as measured by Foursquare check-ins.
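A rough sketch of that train-and-rank loop follows. The feature matrix and popularity values are random placeholders, scikit-learn's Ridge stands in for regularized linear regression, and M5 trees have no direct scikit-learn analog:

```python
# Illustrative sketch of the paper's evaluation loop: train on a random
# two-thirds of locations, predict popularity for the held-out third,
# then rank the held-out locations by predicted popularity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import Ridge

X = np.random.rand(200, 10)  # placeholder geographic + mobility features
y = np.random.rand(200)      # placeholder popularity (check-in counts)

def ranked_predictions(model, n_rounds=1000):
    rankings = []
    for seed in range(n_rounds):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=2 / 3, random_state=seed
        )
        model.fit(X_tr, y_tr)
        scores = model.predict(X_te)
        # Highest predicted popularity first; in the study this ranking
        # was compared against the true check-in-based ranking.
        rankings.append(np.argsort(-scores))
    return rankings

for model in (SVR(), Ridge(alpha=1.0)):
    ranked_predictions(model, n_rounds=10)
```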

The team found that the check–in patterns for each of the three chains were unique. Starbucks locations had five times as many check–ins as McDonald's and Dunkin' Donuts, a difference not entirely accounted for by the fact that Starbucks has twice as many stores as McDonald's and Dunkin' Donuts in the area around Manhattan. A Starbucks was also much more likely to be located by a train station than the other two chains.

The most predictive individual features also varied between chains. Competitiveness was the most predictive feature for Starbucks, indicating that its stores do better when they face less competition nearby. Incoming Flow, or customers coming from outside the retail area, was the top feature for McDonald's, whose customers will travel further for a burger. Dunkin' Donuts, on the other hand, sees most of its business from customers stopping in to refresh themselves during a shopping spree: the number of customers who also checked in at other nearby businesses was the most important feature for that chain.

In spite of these differences, the study found that a combination of traditional geographical features and Foursquare-based mobility features produced the best predictions for all three chains. Using this methodology, the researchers ranked locations for Starbucks with 67% accuracy overall, and 76% when predicting the top 10% and top 15% most popular locations.

But don't think that you can attract more stores to your neighborhood by checking in just yet. Although the results suggest a correlation, one possible flaw in the research is that Foursquare check-ins are used as a proxy for the popularity of a location. The researchers don't provide evidence in the paper that check-in numbers translate into overall popularity.


Keep reading: Who's Afraid of Data Science? What Your Company Should Know About Data


Previous Updates


Is Data Science Just "Sexed Up" Statistics?

August 14, 2013

Superstar statistician Nate Silver recently ruffled some feathers in the data science world by proclaiming that "Data scientist is just a sexed up word for statistician." Now IT industry analyst Robin Bloor has claimed that there is no such thing as data science, because a science must apply the scientific method to a particular domain:

Science studies a particular domain, whether it be chemical, biological, physical or whatever. This gives us the sciences of chemistry, biology, physics, etc. Those who study such domains will gather data in one way or another, often by formulating experiments and taking readings. In other words, they gather data. If there were a particular activity devoted to studying data, then there might be some virtue in the term "data science." And indeed there is such an activity, and it already has a name: it is a branch of mathematics called statistics.

Statistics Versus Data Science

So is data science just statistics by another name? Data scientists seem to view statistics more as a tool they use to a greater or lesser degree in their work rather than the domain of their science, as Bloor suggests. The relationship is kind of like the one between the content of the theoretical courses you'll find in a computer science degree and what a working coder actually does day to day.

Data scientist Hilary Mason (formerly of Bitly, now Accel Partners) made this comment about Silver's claim: "I'm a computer scientist by training who explores data and builds algorithms, systems, and products around data. I use statistics in my practice, but would never claim to be an expert statistician."

O'Reilly's Analyzing the Analyzers report seems to confirm the idea that statistics is just one tool of data science rather than the focus of the field. The study showed that data science already involves a range of roles from data businessperson to data researcher, with statistics featuring much more prominently in some roles than others.

Statistics Versus Machine Learning

Commenters on Bloor's post also pointed to the extensive use of machine learning, and not just statistics, in the data science world. The overlaps and differences between machine learning and statistics are themselves a contentious issue, as both fields are interested in learning from data. They just have different objectives and go about it in different ways. Data scientist and Machine Learning for Hackers author Drew Conway explains the difference this way:

Statisticians approach their work by first observing some phenomenon in the world, and then attempting to specify a model –– often formally –– to describe that phenomenon. Machine learners often begin their work by possessing a large number of observations of some real world process, i.e. data, and then impose structure on that data to make inferences about the data generating process.

The online debate implies that statisticians are interested in the causality and the interpretability of the formal models of the world they create. The more engineering–oriented machine learning community uses statistical methods in some of its algorithms, but is more interested in solving a practical problem in an accurate way even if the model built by the machine learning algorithm is not easily understandable. Data scientist John Mount described the distinction as follows:

The goal of machine learning is to create a predictive model that is indistinguishable from a correct model. This is an operational attitude that tends to offend statisticians who want a model that not only appears to be accurate but is in fact correct (i.e. also has some explanatory value).

But data scientists don't see statistics or machine learning as encompassing the entirety of their discipline. Mount goes on to say:

Data science is a term I use to represent the ownership and management of the entire modeling process: discovering the true business need, collecting data, managing data, building models and deploying models into production. Machine learning and statistics may be the stars, but data science the whole show.

Data Science Versus Natural Science

Finally, the comments on Bloor's post dove into the prickly question of whether the word "science" should be used at all in this context, given that it implies repeatability and peer review, neither of which may apply to data science done in commercial companies. Here, it's useful to again point to the difference between theoretical computer science and the everyday work of the average hacker, which is more engineering than lab work. Drew Conway captures this distinction nicely:

As data science matures as a discipline I think it will be closer to a trade discipline than a scientific one. Much in the same way there are practicing physicians, and research physicians. Practicing doctors have to constantly review medical research, and maintain their understanding of new technologies in order to best serve their patients. Conversely, research physicians run experiments and build knowledge that other doctors can implement. Data scientists will implement the work of statisticians, machine learners, mathematicians, computer scientists etc., but very likely spend little to no time building new models or methods.

Much like computer science, the data science landscape may eventually diverge into two distinct, but cooperative research and practical branches. When that happens, we may need another name, like data engineering, to describe the practical side of the field. For now, the debate over defining data science says more about the nascent, evolving nature of the field than it does the actual differences between branches.


Riot Looters Actually Behave Like Shoppers, Says Data

July 22, 2013

London was engulfed by five days of riots in August 2011, the worst civil unrest the U.K. had seen in 20 years. The looting, arson, and violence during the riots resulted in five deaths, many injuries, and a property damage bill of up to £250 million ($380 million). A new video from mathematician Hannah Fry explains the patterns in rioters' behavior and how police can use them to quell future unrest.

"We found three very simple patterns. These patterns are incredibly important since we can use them to predict how a riot will spread, help the police to design better policing strategies and ultimately stop them from spreading."

The model of the riots created by Fry and her team, described further in a Nature paper, used crime data provided by London's Metropolitan Police covering the period August 6-11, 2011 for offenses associated with the riots, a dataset of 3,914 records. This was combined with geographical and retail data and a set of mathematical equations capturing the patterns, which the researchers used to model the behavior of rioters.

Some newspapers at the time dubbed the riots "shopping with violence." It seems they weren't far from wrong.

The first pattern compares rioters to everyday shoppers. Over 30 percent of rioters travelled less than a kilometer from where they lived to where they offended, but they were prepared to go a bit further if a riot site was really big, offered very little chance of getting caught, or had a lot of lootable goods. This is exactly the picture you see when you look at retail spending flows: most people shop locally to where they live, but they are prepared to travel a bit further to a really attractive retail site. We know an awful lot about how people shop, since being able to predict where people will spend their money is invaluable to retailers.
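The retail analogy can be made concrete with a Huff-style gravity model, shown below as an illustrative stand-in (the Nature paper's equations are more elaborate): the probability that someone travels to a site rises with the site's attractiveness and decays with distance.

```python
import numpy as np

# Illustrative Huff-style gravity model (a stand-in, not the paper's
# exact equations): the probability that someone at a given origin heads
# to site j rises with the site's attractiveness and falls with distance,
# mirroring how retail spending flows are modeled.
def site_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    weights = attractiveness**alpha / distances**beta
    return weights / weights.sum()

attractiveness = np.array([10.0, 50.0, 30.0])  # e.g. lootable goods per site
distances = np.array([0.5, 2.0, 1.0])          # km from a rioter's home
print(site_probabilities(attractiveness, distances))
# The nearby, moderately attractive site usually dominates, just as most
# people shop locally but will travel for a really attractive site.
```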

Riots broke out in many parts of the city, but while some areas were heavily hit, others remained completely unscathed. Fry's team hypothesized that this was partly due to the shopping behavior described above, but also to the interaction of police and rioters and to how the idea of rioting spread throughout the city. The researchers guessed that rioters would be attracted to sites that not only offered good looting opportunities but also had fewer police or more rioters, making them less likely to get caught.

The Metropolitan Police has stated that when officers were heavily outnumbered at a particular site, "decisions were made not to arrest due to the prioritization of competing demands…specifically, the need to protect emergency services, prevent the spread of further disorder and hold ground until the arrival of more police resources." So once a riot site spiraled out of control, rioters present in large numbers were unlikely to be caught even if police were on the ground. The Nature paper concluded that the speed and numbers in which police arrived at a particular riot site were crucial in quelling violence. The team's simulations also showed that around 10,000 police would have been needed to suppress the disorder; only 5,000 were deployed during the first days of the riots.

Combining the shopping analogy with the "predator-prey" dynamic of police and rioters, the team's model predicted fairly accurately which areas would be hardest hit by the riots. In the paper, a map of actual riot behavior closely matches a map generated by a simulation using the model: 26 of the 33 London boroughs in the simulation showed rioter percentages in the same or adjacent bands as the crime data.

Fry describes the final pattern in the team's model, how the idea to riot spread through the city:

Imagine you have one young guy who walks past a Foot Locker getting raided and he runs in and gets himself some new trainers. He then texts a couple of his friends to come down to join him. They then text a couple more of their friends, who text more of their friends. Suddenly, one spur of the moment decision by one person has grown into a huge outburst of criminal behavior. Before we talked about places which are more susceptible to rioting. Now we are talking about people who are more susceptible to the idea of rioting. The clearest link here is deprivation. The people who were involved in the riots came from some of the most deprived areas of the city, the places with the worst schools, the highest unemployment rates and the lowest incomes. The boroughs who were the worst hit by the riots were also the boroughs which had the biggest cuts in the recent government funding, and in particular disproportionate cuts in youth services. The data points to the fact that a spark was lit in a vulnerable community, and this spark ignited to engulf the entire city and eventually the country.


How Forensic Linguistics Uncovered J.K. Rowling's Secret Book

July 16, 2013

On Sunday, U.K. newspaper The Sunday Times revealed that J.K. Rowling was the true author of crime thriller The Cuckoo's Calling, which she published several months ago under the name Robert Galbraith. The paper was first alerted by an anonymous Twitter tip-off, and Time reports that the paper called in Pittsburgh-based professor of computer science Patrick Juola to help determine whether the text had indeed been written by Rowling. Juola specializes in forensic linguistics, also known as "stylometry," which can help attribute a text to an author.

Juola has been researching the subject––now called forensic linguistics, with a focus on authorship attribution––for about a decade. He uses a computer program to analyze and compare word usage in different texts, with the goal of determining whether they were written by the same person. The science is more frequently applied in legal cases, such as with wills of questionable origin, but it works with literature too.

Juola is one of the developers of the catchily titled Java Graphical Authorship Attribution Program (JGAAP), which he used to extract the 100 most commonly used words in Rowling's text, not including character names.

What an author won't think to change are the short words, the articles and prepositions. "Prepositions and articles and similar little function words are actually very individual," Juola says. "It's actually very, very hard to change them because they're so subconscious."

Author attribution is not a precise science. In a 2006 paper, Juola and co-author John Sofko described statistical and computational methods for authorship attribution as "neither reliable, well-regarded, widely-used, or well-understood." JGAAP was the authors' response to the "unorganized mess" of author attribution and is intended for use by non-specialists.

JGAAP implements several steps: canonicalization, identification of events, culling, and then analysis using a Machine Learning algorithm. Canonicalization converts data that has more than one possible representation into a standard form. In the case of text, this will mean doing things like converting all capital letters into lower case and removing punctuation. An event in a text may be the occurrence of a word, character, or part of speech. Culling reduces the number of events to, say, the 100 most common words, and uses this as a representation of the source text.

Finally, a machine learning classification algorithm like a Support Vector Machine, or K-Nearest Neighbors, uses this representation to compare the unknown text with texts by known authors in a training set and predicts which one was most likely to have written it. Juola reveals in the Time interview that Rowling's The Casual Vacancy, Ruth Rendell's The St. Zita Society, P.D. James's The Private Patient, and Val McDermid's The Wire in the Blood were the other texts used in training, a pretty small sample. If an author not on this list had written The Cuckoo's Calling, JGAAP could not have identified them, only the known author closest in style. Juola determined that Rowling was the most likely of these four authors to have written the book, and Rowling later admitted that this was the case.
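Condensed into code, a toy version of that pipeline might look like the sketch below. JGAAP itself is a far more configurable Java program; the simple frequency distance and 1-nearest-neighbor step here stand in for its SVM and K-Nearest-Neighbors options:

```python
# Toy sketch of a JGAAP-style attribution pipeline: canonicalize,
# extract word events, cull to the most common words, classify.
import re
from collections import Counter

def canonicalize(text):
    # Lowercase and strip punctuation into a standard form.
    return re.sub(r"[^a-z\s]", " ", text.lower())

def top_word_profile(text, n=100):
    # Events are word occurrences; culling keeps the n most common words
    # and records their relative frequencies.
    words = canonicalize(text).split()
    counts = Counter(words)
    total = len(words)
    return {w: c / total for w, c in counts.most_common(n)}

def distance(a, b):
    # Compare frequency profiles over the union of both vocabularies.
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in set(a) | set(b))

def nearest_author(unknown_text, training_texts):
    # 1-nearest-neighbor: predict the known author whose profile is
    # closest to the unknown text's profile.
    u = top_word_profile(unknown_text)
    return min(training_texts,
               key=lambda a: distance(u, top_word_profile(training_texts[a])))

# nearest_author(cuckoos_calling, {"Rowling": ..., "Rendell": ..., ...})
```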


GitHub Reveals A Formula For Your "Hacker Persona"

July 11, 2013

Last year, Google developer Ilya Grigorik and GitHub marketeer Brian Doll gave a talk at O'Reilly Strata on what makes developers happy and angry, programming language associations, and GitHub activity by country and language. All were results from the first GitHub Data Challenge and the activities of researchers using GitHub's public timeline data. GitHub has now announced the results of the second challenge.

The data was made available via an API and as a Google BigQuery dataset. BigQuery is a web service that lets you do interactive analysis of massive datasets in an SQL-like query language. Grigorik's favorite winning entry is the Open Source Report Card, which uses clustering and a simple expert system to generate a natural language description of your hacker personality and weekly work tempo, and to identify other GitHub users who are similar to you. You can generate your own report card.

The Open Source Report Card was developed by astrophysicist Dan Foreman-Mackey. He calculated statistics summarizing the weekly activity of a GitHub user and then clustered them to find groups like the "Tuesday tinkerer" and "Fulltime hacker."

I extracted the set of weekly schedule vectors for 10,000 "moderately active" GitHub users and ran K–means (with K=12) on this sample set. K–means is an algorithm for the unsupervised clustering of samples in a vector space.

Foreman-Mackey then summarized the behavior of each user in a 61-dimensional vector, with features like the number of contributions, active repositories, and languages used, and ran an approximate nearest neighbor algorithm to identify users whose behavior is similar to yours.
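A condensed sketch of those two stages, with random placeholder data, might look like this (scikit-learn's exact nearest-neighbor search stands in for the approximate algorithm used at scale):

```python
# Illustrative sketch of the report card's two steps: K-means clustering
# of weekly-activity vectors, then a nearest-neighbor lookup over
# 61-dimensional behavior summaries to find similar users.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
weekly = rng.random((10_000, 7))     # events per weekday, one row per user
behavior = rng.random((10_000, 61))  # 61-dim behavior summary per user

# Cluster weekly schedules into K=12 groups ("Tuesday tinkerer", etc.).
schedules = KMeans(n_clusters=12, n_init=10, random_state=0).fit(weekly)

# Find the 5 users most similar to user 0 (index 0 in the result is
# the user themselves, so it is skipped).
nn = NearestNeighbors(n_neighbors=6).fit(behavior)
_, similar = nn.kneighbors(behavior[:1])
print(schedules.labels_[0], similar[0][1:])
```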

The final step was generating a natural language description of a particular hacker. "I made up a bunch of rules (implemented as a spaghetti of nested if statements) that concatenate strings until something like English prose falls out the other end," he says.

The data challenge is just one aspect of the work being done with GitHub's timeline data. Brian Doll explains:

A dozen or so academic research papers have been written in the last year that use the GitHub timeline data as their primary data source. Some of the research papers tried to better understand what makes software projects popular. They analyzed activity, time frames, and language across several projects to see if they could determine factors that make projects more likely to be widely adopted.

GitHub is also looking at packaging the data in ways other than a time-ordered stream of activity.

What many researchers want instead is a package of specific projects, with all of their public history, along with the actual software repository data itself, bundled up together. I'm planning on releasing large data bundles like this to the public later this summer.


What Kind Of Data Scientist Are You?

July 3, 2013

For a profession whose entire raison d'être is quantitative analysis, the role of the data scientist has been surprisingly hard to pin down. Now a new e-book from O'Reilly, Analyzing the Analyzers, has surveyed 250 data scientists on how they see themselves and their work. The authors then applied the tools of their trade, in this case a non-negative matrix factorization algorithm to cluster the data, revealing four archetypes of the data scientist. The survey also found that most data scientists, no matter which group they fell into, "rarely work with terabyte or larger data."
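To make the method concrete, here is a hedged sketch assuming the survey reduces to a respondents-by-skills matrix of non-negative ratings (the real feature set and preprocessing differ):

```python
# Illustrative sketch: factor a respondents-by-skills matrix into 4
# non-negative "archetype" components with NMF (survey data is a
# random placeholder, not the O'Reilly dataset).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
skills = rng.random((250, 22))  # 250 respondents rating 22 skill areas

nmf = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
weights = nmf.fit_transform(skills)  # each respondent's mix of archetypes
archetypes = nmf.components_         # each archetype's skill profile

# Assign each respondent to their dominant archetype and count the groups.
labels = weights.argmax(axis=1)
print(np.bincount(labels))
```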

We think that terms like "data scientist," "analytics," and "big data" are the result of what one might call a "buzzword meat grinder." We'll use the survey results to identify a new, more precise vocabulary for talking about their work, based on how data scientists describe themselves and their skills.

So who are these new data scientists? A Data Businessperson is focused on how data insights can affect a company's bottom line or "translates P–values into profits". This group seems very similar to the old–school Data Analyst, whose skills have sometimes been unjustly discounted in the pursuit of the more fashionable Data Scientist. In fact, only about a quarter of this group described themselves as Data Scientists. Nearly a quarter are female, a much higher proportion than the other types, and they are most likely to have an MBA, have managed other employees or started a business.

The Data Creatives have the broadest range of skills in the survey. They can code and have contributed to open source projects; three quarters have academic experience, and Creatives are even more likely than Data Businesspeople to have done contract work (80%) or started a business (40%). Creatives closely identify with the self-description "artist." Psychologists, economists, and political scientists popped up surprisingly often among Data Researchers and Data Creatives.

Data Developers build data infrastructure or code up Machine Learning algorithms. This group is the most likely to code on a daily basis and have Hadoop, SVM or Scala on their CV. About half have Computer Science or Computer Engineering degrees.

Data Researchers seem closest to "scientists" in the sense that their work is more open ended and most are lapsed academics. Nearly 75% of Data Researchers had published in peer–reviewed journals, and over half had a PhD. Statistics is their top skill but they were least likely to have started a business, and only half had managed an employee.

Although we hate to disappoint the majority of the tech press, who seem to conflate the terms "Big Data" and "Data Science," most of the data scientists surveyed don't actually work with big data at all. The survey asked how often respondents worked with data at kilobyte, megabyte, gigabyte, terabyte, and petabyte scale. Data Developers were the most likely to work with petabytes of data, but even among developers this was rare.


How Machine Learning Helps People Make Babies

June 25, 2013

How far will you go to have a baby? That's the question facing the one in six couples suffering from infertility in the United States, fewer than 3% of whom undergo IVF. A single round of IVF can cost up to $15,000, and the success rate for women over 40 is often less than 12% per round, making the process both financially and physically taxing. According to Mylene Yao, CEO of Univfy, the couples with the highest likelihood of success are often not the ones who receive treatment. "A lot of women are not aware of what IVF can do for them and are getting to IVF rather late," says Yao. "On the other hand, a lot of women may be doing treatments which have lower chances of success."

Yao is an Ob/Gyn and researcher in reproductive medicine who teamed up with a colleague at Stanford, professor of statistics Wing H. Wong, to create a model that could predict the probability that a live birth will result from a single round of IVF. That model is now used in an online personalized predictive test that uses clinical data readily available to patients.

The main factor currently used to predict IVF success is the age of the woman undergoing treatment. "Every country has a national registry that lists the IVF success rate by age group," Yao explains. "What we have shown over and over in our research papers is that method vastly underestimates the possibility of success. It's a population average. It's not specific to any individual woman and is useful only at a high–level country policy level." Many European countries, whose health services fund IVF for certain couples, use such age charts to determine the maximum age of the women who can receive treatment. Yao argues that, instead, European governments should fund couples with the highest likelihood of success.

"People are mixing up two ideas. Everyone knows that aging will compromise your ability to conceive, but the ovaries of each woman age at a different pace. Unless they take into consideration factors like BMI, partner's sperm count, blood tests, reproductive history, that age is not useful. In our prediction model, for patients who have never had IVF, age accounts for 60% of the prediction; 40% of the prediction comes from other sources of information. A 33–year–old woman can be erroneously led to believe that she has great chances, whereas her IVF prospects may be very limited and if she waits further, it could compromise her chance to have a family. A 40–year–old might not even see a doctor because she thinks there is no chance." In a 2013 research paper Univfy showed that 86% of cases analyzed did not have the results predicted by age alone. Nearly 60% had a higher probability of live birth based on an analysis of the patients' complete reproductive profiles.

Univfy's predictive model was built from data on 13,076 first IVF treatment cycles from three different clinics and used input parameters such as the woman's body mass index, smoking status, previous miscarriages or live births, clinical diagnoses including polycystic ovarian syndrome or disease, and her male partner's age and sperm count. "If a patient says 'I have one baby,' that's worth as much as what the blood tests show," says Yao.

Prediction of the probability of a live birth based on these parameters is a regression problem, where a continuous value is predicted from the values of the input parameters. A machine–learning algorithm called stochastic gradient boosting was used to build a boosted tree model predicting the probability of a live birth. A boosting algorithm builds a series of models, in this case up to 70,000 decision trees, which are essentially a series of if–then statements based on the values of the input parameters. Each new tree is created from a random sample of the full data set and uses the prediction errors from the last tree to improve its own predictions. The resulting model determines the relative importance of each input parameter. It turned out that while the age of the patient was still the most significant factor at more than 60% weighting, other single parameters like sperm count (9.6%) and body mass index (9.5%) were also significant.
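A minimal sketch of that modeling step follows, with random placeholder data standing in for Univfy's clinical records and far fewer trees than the production model:

```python
# Illustrative sketch of stochastic gradient boosting over clinical
# features, predicting the probability of a live birth and reading off
# the relative importance of each input (data is a placeholder).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
# Columns stand in for age, BMI, sperm count, prior live births, etc.
X = rng.random((13_076, 8))
y = rng.integers(0, 2, 13_076)  # 1 = live birth

model = GradientBoostingClassifier(
    n_estimators=500,   # the Univfy model used up to 70,000 trees
    subsample=0.5,      # "stochastic": each tree sees a random sample
    learning_rate=0.05,
    max_depth=3,        # each tree is a shallow series of if-then splits
)
model.fit(X, y)

print(model.predict_proba(X[:1])[0, 1])  # P(live birth) for one patient
print(model.feature_importances_)        # relative weight of each input
```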

Another Univfy model used by IVF clinics predicts the probability of multiple births. Some 30% of women receiving IVF in 2008 in the U.S. gave birth to twins, since clinics often use multiple embryos to increase the chances of success. "Multiple births are associated with complications for the newborn and the mother," says Yao. "So for health reasons, clinics and governments want to have as few multiple births as possible. It's a difficult decision whether to put in one or two embryos." Univfy's results showed that even when only two embryos were transferred, patients' risks of twins ranged from 12% to 55%. "The clinic can make a protocol that when the probability of multiple births is above a certain rate, then they will have only one embryo, and also identify patients who should really have two embryos. Currently there's a lot of guesswork."


When the G8 meet in Northern Ireland next week, transparency will be on the agenda. But how do these governments themselves rate?

Open data campaigners the Open Knowledge Foundation just published a preview of an Open Data Census that assessed how open the critical datasets in the G8 countries really are. Open data doesn't just mean making datasets available to the public; it also means distributing them in a format that is easy to process, available in bulk, and regularly updated. When the regional government of Piemonte, Italy, was hit by an expenses scandal last year, it published the previous year's expense claim data as a set of spreadsheets embedded in a PDF, a typical example of less-than-accessible "open data."

More than 30 volunteer contributors from around the world (the foundation says they include lawyers, researchers, policy experts, journalists, data wranglers, and civic hackers) assessed the openness of datasets in 10 core categories: Election Results, Company Register, National Map, Government Budgets, Government Spending, Legislation, National Statistical Office Data, Postcode/ZIP database, Public Transport Timetables, and Environmental Data on major sources of pollutants.

The U.S. topped the list for openness according to the overall score summed across all 10 categories of data, indicating that the executive order "making open and machine readable the new default for government information," announced by President Barack Obama in May this year, has had some effect. The U.K. was next, followed by France, Japan, Canada, Germany, and Italy. The Russian Federation limped in last, failing to publish any of the information considered by the census as open data.

Postcode data, which is required for almost all location-based applications and services, is not easily accessible in any G8 country except Germany. "In the U.K., there's quite a big fight over postcodes since Royal Mail sells the postcodes and makes millions of pounds a year," said Open Knowledge Foundation founder Rufus Pollock. Data on government spending was also patchy in France, Germany, and Italy. Many G8 countries scored low on company registry data, a notable point when the G8's transparency discussions will address tax evasion and offshore companies. "Government processes aren't always built for the digital age," said Pollock. "I heard an incredible story about 501(c)3 registration information in the U.S. where they get all this machine-readable data in and the first thing they do is convert it to PDF, which humans then transcribe."

The data was assessed in line with open data standards such as OpenDefinition.org and the 8 Principles of Open Government Data. Each category of data was given marks out of six depending on how many of the following criteria were met: openly licensed, publicly available, in digital format, machine readable (in a structured file format), up to date, and available in bulk. The assessment wasn't entirely quantitative. "We strive for reasonably 'yes or no' questions but there are subtleties," says Pollock. "With transport and timetables there's rail, bus, and other public transport. What happens if your bus and tram timetables are available but not train? Or is a certain format acceptable as machine readable?"
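The scoring rule itself is simple enough to state in a few lines (criteria names taken from the article; the real census records more nuance, as Pollock notes):

```python
# Tiny sketch of the census scoring rule: one mark for each of the six
# openness criteria a dataset meets, for a score out of 6.
CRITERIA = (
    "openly licensed", "publicly available", "in digital format",
    "machine readable", "up to date", "available in bulk",
)

def openness_score(dataset):
    """dataset maps each criterion to True/False; returns marks out of 6."""
    return sum(bool(dataset.get(c, False)) for c in CRITERIA)

# Hypothetical example: a postcode dataset that is public and digital
# but fails the other four criteria scores 2 out of 6.
uk_postcodes = {"publicly available": True, "in digital format": True}
print(openness_score(uk_postcodes), "/ 6")
```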

The preview does not show how many data sets were assessed in each category, but more information will be included in the full results, covering 50 countries, which will be released later this year. For further information on the methodology of the census, see the Open Knowledge Foundation's blog post.


Olly Downs runs the Data Science team at Globys, a company which takes large–scale dynamic data from mobile operators and uses it to contextualize their marketing in order to improve customer experience, increase revenue, and maximize retention. Downs is no data novice: He was the Principal Scientist at traffic prediction startup INRIX, which is planning an IPO this year, and Globys is his seventh early–stage company. Co.Labs talked to him about how to maximize the ROI of a data science team.

How does Globys use its Data Science team?

The Telco space has always been Big Data. Any single one of our mobile operator customers produces data at a rate greater than Facebook. Globys is unique in terms of the scale with which we have embraced the data science role and its impact on the structure of the company and the core proposition of the business. Often data scientists in an organization are pseudo–consultants answering one–off questions. Our data science team is devoted to developing the science behind the technology of our core product.

You trained in hard sciences (Physics at Cambridge and Applied Mathematics at Princeton). Is Data Science really a science?

How we work at Globys is that we develop technologies via a series of experiments. Those experiments are designed to be extremely robust, as they would be in the hard sciences world, but based on data which is only available commercially, and they are designed to answer a business question rather than a fundamental research question. The methodology we use to determine if a technology is useful has the same core elements of a scientific process.

What has come along with the data science field is this cloudburst of new technologies. The science has become mixed in with mastering the technology which allows you to do the science. It's not that surprising. The web was invented by Tim Berners–Lee at CERN to exchange scientific data in particle physics. Out in the applied world, the work tends to be a mixture of answering questions and finding the right questions to ask. It's very easy for a data science team to slip into being a pseudo–engineering team on an engineering schedule. It's very important to have a proportion of your time allocated to exploratory work so you have the ability to go down some rabbit holes.

How can a company integrate a data scientist into their business?

With Big Data, the awareness in the enterprise is high and the motivation to do Big Data initiatives is high, but the cultural ability to absorb the business value, and the strategic shift that might bring, is hard to achieve. My experience is if the data scientist is not viewed as a peer–level stakeholder who can have an impact on the leadership and the culture of the business, then most likely they won't be successful.

I remember working on a project on a "Save" program where anyone who called to cancel their service got two months free. The director who initiated that program had gotten a significant promotion based on its success. It wasn't a data–driven initiative. The anecdotes were much more positive than the data. What I found, after some data analysis, was that the program was retaining customers for an average of only 1.2 months after they had been given the two months free. Once you included the cost of the agent talking to the customer, every "save" call the center took was actually losing the company $13. We came up with a model–based solution which allowed the business to test who they should allow to cancel and who they should talk to. That changed the ROI on the program to plus–$22 per call taken. That stakeholder then made it from general manager to VP, and ultimately was very happy, but it took a while to make the shift to seeing that data was ultimately going to improve the outcome.

Can't you fool yourself with data as well as with anecdotes?

As a data scientist, it's hard to come to a piece of analysis without a hypothesis, a Bayesian prior (probability), a model in your mind of how you think things probably are. That makes it difficult to interrogate the data in the purest way, and even if you do, you are manipulating a subset of attributes or individuals who represent the problem that you have. Being aware of the limitations of the analysis is important. A real problem with communicating the work you have done is that while scientists are very good at explaining the caveats, the people listening are not interested in caveats but in conclusions. I remember doing an all–day experiment when I was at Cambridge to measure the gravitational constant to four decimal places of precision. I measured to four decimal places, but my value for the constant was off by a factor of more than a thousand. You can fool yourself into thinking you have measured something with a very high level of accuracy and yet the thing you were measuring turns out to be the wrong thing.

How do you measure the ROI (Return on Investment) of a data science team?

The measure of success is getting initiatives to completion, addressing a finding about the business in a measurable way. At Globys our business is around getting paid for the extra retention or revenue that we achieve for our customers. Recently, we have been leveraging the idea of every customer as a sequence of events––every purchase, every call, every SMS message, every data session, every top–up purchase––which allows us to take Machine Learning approaches (Dynamic State–Space Modeling) which otherwise do not apply to this problem domain of retaining customers. This approach outperforms the current state of the art in churn prediction modeling by about 200%. When you run an experiment to retain customers, the proportion of customers you are messaging is biased in favor of those with a problem. We already have an optimized price elasticity and offer targeting capability, so you improve your campaign by a similar factor. The normal improvement you would achieve in churn retention is in the 5% range. We are achieving improvements in the 15–20% range.

What's the biggest gap you see in data science teams?

Since our customers have been Big Data businesses for a long time, they will typically have analysts, and many of those teams are unsuccessful because the communications skill set is missing. They may have the development capability, the statistical and modeling capability, but have been very weak at communicating with the other elements of the business. What we are seeing now is some roles being hired which bridge between the data science capability and the business functions like marketing and finance. It's a product manager for the analytics team.


This story tracks the cult of Big Data: The hype and the reality. It's everything you ever wanted to know about data science but were afraid to ask. Read on to learn why we're covering this story, or skip ahead to read previous updates.

Take lots of data and analyze it: That's what data scientists do and it's yielding all sorts of conclusions that weren't previously attainable. We can discover how our cities are run, disasters are tackled, workers are hired, crimes are committed, or even how Cupid's arrows find their mark. Conclusions derived from data are affecting our lives and are likely to shape much of the future.


Previous Updates

Meteorologist Steven Bennett used to predict the weather for hedge funds. Now his startup EarthRisk forecasts extreme cold and heat events up to four weeks ahead, much further into the future than traditional forecasts, for energy companies and utilities. The company has compiled 30 years of temperature data for nine regions and discovered patterns which predict extreme heat or cold. If the temperature falls in the hottest or coldest 25% of the historical temperature distribution, EarthRisk defines this as an extreme event, and the company's energy customers can trade power or set their pricing based on its predictions. The company's next challenge is researching how to extend hurricane forecasts from the current 48 hours to up to 10 days.
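That quartile definition of an extreme event translates directly into code. A minimal sketch with NumPy, using a simulated temperature series rather than EarthRisk's data:

import numpy as np

# Simulated 30 years of daily temperatures for one region (not real data).
rng = np.random.default_rng(0)
history = rng.normal(loc=10, scale=8, size=30 * 365)

# Extreme thresholds: the coldest and hottest 25% of the distribution.
cold_threshold = np.percentile(history, 25)
hot_threshold = np.percentile(history, 75)

def is_extreme(temperature):
    """Flag a temperature as extreme by the quartile definition above."""
    return temperature <= cold_threshold or temperature >= hot_threshold

print(is_extreme(-5.0))  # True for this simulated distribution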

How is your approach to weather forecasting different from traditional techniques?

Meteorology has traditionally been pursued along two lines. One line has a modeling focus and has been pursued by large government or quasi–government agencies. It puts the Earth into a computer–based simulation and that simulation predicts the weather. That pursuit has been ongoing since the 1950s. It requires supercomputers, it requires a lot of resources (the National Oceanic and Atmospheric Administration in the U.S. has spent billions of dollars on its simulation), and it requires a tremendous amount of data to input to the model. The second line of forecasting is the observational approach. Farmers were predicting the weather long before there were professional meteorologists, and the way they did it was through observation. They would observe that if the wind blew from a particular direction, it meant fair weather for several days. We take the observational approach, the database which was in the farmer's head, but we quantify all the observations strictly in a statistical computer model rather than a dynamic model of the type the government uses. We quantify, we catalog, and we build statistical models around these observations. We have created a catalog of thousands of weather patterns which have been observed since the 1940s and how those patterns tend to link to extreme weather events one to four weeks after the pattern is observed.

Which approach is more accurate?

The model–based approach will result in a more accurate forecast, but because of chaos in the system it breaks down one to two weeks into the forecast. For a computer simulation to be perfect we would need to observe every air parcel on the Earth to use as input to the model. In fact, there are huge swathes of the planet, e.g., over the Pacific Ocean, where we don't have any weather observations at all except from satellites. So in the long range our forecasts are more accurate, but not in the short range.

What data analysis techniques do you use?

We are using Machine Learning to link weather patterns together, to say that when these kinds of weather patterns occur historically, they lead to these sorts of events. Our operational system uses a Genetic Algorithm for combining the patterns in a simple way and determining which patterns are the most important. We use Naive Bayes to make the forecast. We forecast, for example, that there is a 60% chance that there will be an extreme cold event in the northwestern United States three weeks from today. If the temperature is a quarter of a degree less than that cold event threshold, then it's not a hit. We are in the process of researching a neural network, which we believe will give us a richer set of outputs. With the neural network we believe that instead of giving the percentage chance of crossing some threshold, we will be able to develop a full distribution of temperature output, e.g., that it will be 1 degree colder than normal.
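To make the Naive Bayes step concrete, here is a minimal sketch linking binary "pattern observed" indicators to a later extreme cold event with scikit-learn's BernoulliNB. The three patterns and the training labels are invented for illustration; EarthRisk's actual features and pipeline are proprietary:

import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row: whether each of three hypothetical weather patterns was
# observed in a given week. Each label: whether an extreme cold event
# occurred in the target region three weeks later.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

model = BernoulliNB().fit(X, y)

# Probability of an extreme cold event given this week's patterns --
# the kind of figure quoted as "a 60% chance" above.
this_week = np.array([[1, 0, 0]])
print(model.predict_proba(this_week)[0, 1])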

How do you run these simulations?

We update the data every day. We have a MATLAB–based modeling infrastructure. When we do our heavy processing, we will use hundreds of cores in the Amazon cloud. We do those big runs a couple of dozen times a year.

How do you measure forecast accuracy?

Since we forecast for extreme events, we use a few different metrics. If we forecast an extreme event and it occurs, then that's a hit. If we forecast an extreme event and it does not occur, that's a false alarm. Those can be misleading. If I have made one forecast and that forecast was correct, then I have a hit rate of 100% and a false alarm rate of 0%. But if there were 100 events and I only forecasted one of them and missed the other 99, that's not useful. The detection rate is the proportion of the events that occur which we forecast. We try to get a high hit rate and detection rate, but in a long–range forecast a high detection rate is very, very difficult. Our detection rate tends to be around 30% in a three–week forecast. Our hit rate stays roughly the same at one week, two weeks, and three weeks. In traditional weather forecasting the accuracy gets dramatically lower the further out you get.
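Those definitions translate into a few lines of code. A minimal sketch with invented forecast and event series, reproducing the pathological case Bennett describes:

def verify(forecasts, events):
    """Hit rate, false alarm rate, and detection rate for binary forecasts."""
    hits = sum(f and e for f, e in zip(forecasts, events))
    false_alarms = sum(f and not e for f, e in zip(forecasts, events))
    forecast_count = sum(forecasts)
    event_count = sum(events)
    hit_rate = hits / forecast_count if forecast_count else 0.0
    false_alarm_rate = false_alarms / forecast_count if forecast_count else 0.0
    detection_rate = hits / event_count if event_count else 0.0
    return hit_rate, false_alarm_rate, detection_rate

# One correct forecast out of 100 actual events: a 100% hit rate and a
# 0% false alarm rate, but a detection rate of only 1%.
forecasts = [True] + [False] * 99
events = [True] * 100
print(verify(forecasts, events))  # (1.0, 0.0, 0.01)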

Why do you want to forecast hurricanes further ahead?

The primary application for longer lead forecasts for hurricane landfall would be in the business community rather than for public safety. For public safety you need to make sure that you give people enough time to evacuate but also have the most accurate forecast. That lead time is typically two to three days right now. If people evacuate and the storm does not do damage in that area, or never hits that area, people won't listen the next time the forecast is issued. Businesses understand probability so you can present a risk assessment to a corporation which has a large footprint in a particular geography. They may have to change their operations significantly in advance of a hurricane so if it's even 30% or 40% probability then they need extra lead time.

What data can you look at to provide an advance forecast?

We are investigating whether building a catalog of synoptic (large scale) weather patterns like the North Atlantic oscillation will work for predicting hurricanes, especially hurricane tracks––so where a hurricane will move. We have quantified hundreds of weather patterns which are of the same amplitude, hundreds of miles across. For heat and cold risks we develop an index of extreme temperature. For hurricanes the primary input is an index of historic hurricane activity rather than temperature. Then you would use Machine Learning to link the weather patterns to the hurricane activity. All of this is a hypothesis right now. It's not tested yet.

What's the next problem you want to tackle?

We worked with a consortium of energy companies to develop this product. It was specifically developed for their use. Right now the problems we are trying to solve are weather related, but that's not where we see ourselves in two or five years. The weather data we have is only an input to a much bigger business problem, and that problem will vary from industry to industry. What we are really interested in is helping our customers solve their business problems. In New York City there's a juice bar called Jamba Juice. Jamba Juice knows that if the temperature gets higher than 95 degrees on a summer afternoon, they need extra staff since more people will buy smoothies. They have quantified the staff increase required (but they schedule their staff one week in advance and they only get a forecast one day in advance). They use a software package with weather as an input. We believe that many businesses are right on the cusp of implementing that kind of intelligence. That's where we expect our business to grow.


A roomful of confused–looking journalists is trying to visualize a Twitter network. Their teacher is School of Data "data wrangler" Michael Bauer, whose organization teaches journalists and non–profits basic data skills. At the recent International Journalism Festival, Bauer showed journalists how to analyze Twitter networks using OpenRefine, Gephi, and the Twitter API.

Bauer's route into teaching hacks how to hack data was a circuitous one. He studied medicine and did postdoctoral research on the cardiovascular system, where he discovered his flair for data. Disillusioned with health care, Bauer dropped out to become an activist and hacker and eventually found his way to the School of Data. I asked him about the potential and pitfalls of data analysis for everyone.

Why do you teach data analysis skills to "amateurs"?

We often talk about how the digitization of society allows us to increase participation, but actually it creates new kinds of elites who are able to participate. It opens up the existing elites so you don't have to be an expensive lobbyist or be born in the right family to be involved, but you have to be part of this digital elite which has access to these tools and knows how to use them effectively. It's the same thing with data. If you want to use data effectively to communicate stories or issues, you need to understand the tools. How can we help amateurs to use these tools? Because these are powerful tools.

If you teach basic data skills, is there a danger that people will use them naively?

There is a sort of professional elitism which raises the fear that people might misuse the information. You see this very often if you talk to national bureaus of statistics, for example, who say "We don't give out our data since it might be misused." When the Open Data movement started in the U.K. there was a clause in the agreement to use government data which said that you were not allowed to do anything with it which might criticize the government. When we train people to work with data, we also have to train them how to draw the right conclusions, how to integrate the results. To turn data into information you have to put it into context. So we break it down to the simplest level. What does it mean when you talk about the mean? What does it mean if you talk about average income? Or does it even make sense to talk about the average in this context?
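Bauer's question about averages is easy to demonstrate. In the hypothetical incomes below, one very high earner drags the mean far above what a typical person makes, which is why the median is often the more honest summary:

import statistics

# Nine modest incomes and one very high one (made-up numbers).
incomes = [22_000, 25_000, 27_000, 29_000, 31_000,
           33_000, 35_000, 38_000, 41_000, 2_000_000]

print(statistics.mean(incomes))    # 228100 -- the "average income"
print(statistics.median(incomes))  # 32000  -- what a typical person earns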

Are there common pitfalls you teach people to avoid?

We frequently talk about correlation versus causation. We have this problem in scientific disciplines as well. In Freakonomics, economist Steven D. Levitt talks about how crime rates go down when more police are employed, but what people didn't look at was that this all happened in times of economic growth. We see this in medical science too. There was this idea that because women have estrogen they are protected from heart attacks, so you should give estrogen to women after menopause. This was all based on retrospective correlation studies. In the 1990s someone finally did a placebo–controlled randomized trial and they discovered that hormone replacement therapy doesn't help at all. In fact it harms the people receiving it by increasing the risk of heart attacks.

How do you avoid this pitfall?

If you don't know and understand the assumptions that your experiment is making, you may end up with something completely wrong. If you leave certain factors out of your model and look at one specific thing, that's the only specific thing you can say something about. There was this wonderful example that came out about how wives of rich men have more orgasms. A university in China got hold of the data for its statistics class and found that the original analysis hadn't used the women's education as a parameter. It turns out that women who are more educated have more orgasms. It had nothing to do with the men.
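The omitted–variable trap Bauer describes is easy to reproduce in simulation. In this sketch all the data is synthetic: education drives both wealth and the outcome, so a naive correlation makes wealth look like the cause until education is controlled for:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

education = rng.normal(size=n)                   # the confounder
wealth = 0.8 * education + rng.normal(size=n)    # correlated with education
outcome = 1.0 * education + rng.normal(size=n)   # driven by education only

# Naive analysis: wealth appears strongly related to the outcome.
print(np.corrcoef(wealth, outcome)[0, 1])        # clearly positive

def residualize(x, z):
    """Remove the least-squares effect of z from x."""
    beta = np.dot(x, z) / np.dot(z, z)
    return x - beta * z

# Controlling for education, the apparent wealth effect vanishes.
print(np.corrcoef(residualize(wealth, education),
                  residualize(outcome, education))[0, 1])  # near zero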

What are the limitations of using a single form of data?

That's one of the dangers of looking at Twitter data. This is the danger of saying that Twitter is democratizing because everyone has a voice, but not everyone has a voice. Only a small percentage of the population use this service and a way smaller proportion are talking a lot. A lot of them are just reading or retweeting. So we only see a tiny snapshot of what is going on. You don't get a representative population. You get a skew in your population. There was an interesting study on Twitter and politics in Austria which showed that a lot of people on there are professionals and they are there to engage. So it's not a political forum. It's a medium for politicians and people who are around politics to talk about what they are doing.

Any final advice?

Integrate multiple data sources, check your facts, and understand your assumptions.


Charts can help us understand the aggregate, but they can also be deeply misleading. Here's how to stop lying with charts without even knowing it. While it's counterintuitive, charts can actually obscure our understanding of data––a trick Steve Jobs exploited on stage at least once. Of course, you don't have to be a cunning CEO to misuse charts; in fact, if you have ever used one at all, you probably did so incorrectly, according to visualization architect and interactive news developer Gregor Aisch. Aisch gave a series of workshops at the International Journalism Festival in Italy, which I attended last weekend, including one on basic data visualization guidelines.

"I would distinguish between misuse by accident and on purpose," Aisch says."Misuse on purpose is rare. In the famous 2008 Apple keynote from Steve Jobs , he showed the market share of different smartphone vendors in a 3D pie chart. The Apple slice of the smartphone market, which was one of the smallest, was in front so it appeared bigger."

Aisch explained in his presentation that 3–D pie charts should be avoided at all costs since the perspective distorts the data. What is displayed in front is perceived as more important than what is shown in the background. That 19.5% of market share belonging to Apple takes up 31.5% of the entire area of the pie chart, and the angles are also distorted. The same data looks completely different when the slices are presented in a different order.

In fact, the humble pie chart turns out to be an unexpected minefield:

"Use pie charts with care, and only to show part of whole relationships. Two is the ideal number of slices, but never show more than five. Don't use pie charts if you want to compare values. Use bar charts instead."

For example, Aisch advises that you don't use pie charts to compare sales from different years, but do use them to show sales per product line in the current year. You should also ensure that you don't leave out data on part of the whole:

"Use line charts to show time series data. That's simply the best way to show how a variable changes over time. Avoid stacked area charts, they are easily mis–interpreted."

The "I hate stack area charts" post cited in Aisch's talk explains why:

"Orange started out dominating the market, but Blue expanded rapidly and took over. To the unwary, it looks like Green lost a bit of market share. Not nearly as much as Orange, of course, but the green swath certainly gets thinner as we move to the right end of the chart."

In fact the underlying data shows that Green's market share has been increasing, not decreasing. The chart plots the market share vertically, but human beings perceive the thickness of a stream at right angles to its general direction.
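The remedy is mechanical: plot the same series as lines. Here is a minimal matplotlib sketch with made–up market–share numbers chosen to echo the Orange/Blue/Green example; they are not the figures from the original post:

import matplotlib.pyplot as plt

years = [2008, 2009, 2010, 2011, 2012]
orange = [60, 50, 38, 30, 22]  # shrinking
blue = [25, 34, 44, 50, 56]    # growing fast
green = [15, 16, 18, 20, 22]   # growing slowly, but its stacked band
                               # sits on fast-growing Blue and looks thinner

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.stackplot(years, orange, blue, green, labels=["Orange", "Blue", "Green"])
ax1.set_title("Stacked area: Green appears to shrink")
ax1.legend()
for series, label in [(orange, "Orange"), (blue, "Blue"), (green, "Green")]:
    ax2.plot(years, series, label=label)
ax2.set_title("Line chart: Green is clearly growing")
ax2.legend()
plt.show()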

Technology companies aren't the only offenders in chart misuse. "Half of the examples in the presentation are from news organizations. Fox News is famous for this," Aisch explains. "The emergence of interactive online maps has made map misuse very popular, e.g., election results in the United States where big states like Texas which have small populations are marked red. If you build a map with Google Maps there isn't really a way to get around this problem. But other tools aren't there yet in terms of user interface and you need special skills to use them."

Google Maps also uses the Mercator projection, a method of projecting the sphere of the Earth onto a flat surface, which distorts the size of areas closer to the polar regions so, for example, Greenland looks as large as Africa.

The solution to these problems, according to Aisch, is to build visualization best practices directly into the tool, as he does in his own open source visualization tool Datawrapper. "In Datawrapper we set meaningful defaults but also allow you to switch between different rule systems. There's an example for labeling a line chart. There is some advice that Edward Tufte gave in one of his books and different advice from Dona Wong, so you can switch between them. We also look at the data, so if you visualize a data set which has many rows, then the line chart will display in a different way than if there were just three rows."


The rush to "simplify" big data is the source of a lot of reductive thinking about its utility. Data science practitioners have recently been lamenting how the data gold rush is leading to naive practitioners deriving misleading or even downright dangerous conclusions from data.

The Register recently mentioned two trends that may reduce the role of the professional data scientist before the hype has even reached its peak. The first is the embedding of Big Data tech in applications. The other is increased training for existing employees who can benefit from data tools.

"Organizations already have people who know their own data better than mystical data scientists. Learning Hadoop is easier than learning the company's business."

This trend has already taken hold in data visualization, where tools like infogr.am are making it easy for anyone to make a decent–looking infographic from a small data set. But this is exactly the type of thing that has some data scientists worried. Cathy O'Neil (aka MathBabe) has the following to say in a recent post:

"It's tempting to bypass professional data scientists altogether and try to replace them with software. I'm here to say, it's not clear that's possible. Even the simplest algorithm, like k–Nearest Neighbor (k–NN), can be naively misused by someone who doesn't understand it well."

K–nearest neighbors is a method for classifying objects––let's say visitors to your website––by measuring how similar they are to other visitors based on their attributes. A new visitor is assigned a class, e.g., "high spenders," based on the class of its k nearest neighbors, the previous visitors most similar to them. But while the algorithm is simple, selecting the correct settings and knowing that you need to scale feature values (or verifying that you don't have many redundant features) may be less obvious.
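Here is what that misuse looks like in practice: a hedged sketch with synthetic data and scikit-learn, in which a feature measured in dollars swamps one measured in monthly visits until both are scaled. The class labels in this toy example follow visit frequency:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic visitors: [annual spend in dollars, visits per month].
X = np.array([[50_000, 2], [52_000, 30], [48_000, 28],
              [1_000, 3], [1_200, 29], [900, 2]], dtype=float)
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = frequent visitor

# A big spender who rarely visits; by visit frequency the class is 0.
new_visitor = np.array([[49_000, 3]], dtype=float)

# Unscaled: distance is dominated by the dollar axis, so the nearest
# neighbors are simply the other big spenders and the prediction is wrong.
raw = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(raw.predict(new_visitor))  # [1]

# Scaled: both features contribute comparably to the distance.
scaler = StandardScaler().fit(X)
scaled = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X), y)
print(scaled.predict(scaler.transform(new_visitor)))  # [0]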

You would not necessarily think about this problem if you were just pressing a big button on a dashboard called "k–NN me!"


Here are four problems that typically arise from a lack of scientific rigor in data projects. Anthony Chong, head of optimization at Adaptly, warns us to look out for "science" with no scientific integrity.

Through phony measurement and poor understandings of statistics, we risk creating an industry defined by dubious conclusions and myriad false alarms.... What distinguishes science from conjecture is the scientific method that accompanies it.

Given the extent to which conclusions derived from data will shape our future lives, this is an important issue. Chong gives us four problems that typically arise from a lack of scientific rigor in data projects, but are rarely acknowledged.

  1. Results not transferable
  2. Experiments not repeatable
  3. Not inferring causation: Chong insists that the only way to infer causation is randomized testing. It can't be done from observational data or by using machine learning tools, which predict correlations with no causal structure.
  4. Poor and statistically insignificant recommendations.

Even when properly rigorous, analysis often leads to nothing at all. From Jim Manzi's 2012 book, Uncontrolled: The Surprising Payoff of Trial–and–Error for Business:

"Google ran approximately 12,000 randomized experiments in 2009, with [only] about 10 percent of these leading to business changes."


Understanding data isn't about your academic abilities––it's about experience. Beau Cronin has some words of encouragement for engineers who specialize in storage and machine learning. Despite all the backend–as–a–service companies sprouting up, it seems there will always be a place for someone who truly understands the underlying architecture. Via his post at O'Reilly Radar:

I find the database analogy useful here: Developers with only a foggy notion of database implementation routinely benefit from the expertise of the programmers who do understand these systems––i.e., the "professionals." How? Well, decades of experience––and lots of trial and error––have yielded good abstractions in this area.... For ML (machine learning) to have a similarly broad impact, I think the tools need to follow a similar path.


Want to climb the mountain? Start learning about data science here. If you know next to nothing about Big Data tools, HP's Dr. Satwant Kaur's 10 Big Data technologies is a good place to start. It contains short descriptions of Big Data infrastructure basics, from databases to machine learning tools.

This slide show explains one of the most common technologies in the Big Data world, MapReduce, using fruit, while Emcien CEO Radhika Subramanian tells you why not every problem is suitable for its most popular implementation, Hadoop.

"Rather than break the data into pieces and store–n–query, organizations need the ability to detect patterns and gain insights from their data. Hadoop destroys the naturally occurring patterns and connections because its functionality is based on breaking up data. The problem is that most organizations don't know that their data can be represented as a graph nor the possibilities that come with leveraging connections within the data."

Efraim Moscovich's Big Data for conventional programmers goes into much more detail on many of the top 10, including code snippets and pros and cons. He also gives a nice summary of the Big Data problem from a developer's point of view.

We have lots of resources (thousands of cheap PCs), but they are very hard to utilize.
We have clusters with more than 10k cores, but it is hard to program 10k concurrent threads.
We have thousands of storage devices, but some may break daily.
We have petabytes of storage, but deployment and management is a big headache.
We have petabytes of data, but analyzing it is difficult.
We have a lot of programming skills, but using them for Big Data processing is not simple.

Infochimps has also created a nice overview of data tools (which features in TechCrunch's top five open–source projects article) and what they are used for.

Finally, GigaOm's programmer's guide to Big Data tools covers an almost entirely disjoint set of tools, weighted towards application analytics and abstraction APIs for data infrastructure like Hadoop.


We're updating this story as news rolls in. Check back soon for more updates.


[Image: Flickr user Shinji WATANABE]

Google Wants Your Location Data To Go Public––Here's Why


Where are you standing or sitting right now––do you know exactly? Your smartphone does, down to a meter or two, even if it's snuggled in the depths of a bag or stuck way down in a jeans pocket. And soon, it may be broadcasting that location in all sorts of publicly viewable places.

And soon enough you're going to be okay with that. Gotten over your NSA–snooping fears yet?

Google's Waze integration today is proof positive that user location data is going to become more public more quickly than you may think. Essentially, Waze relies on crowdsourced data to refine its maps and to report in real time on traffic events like accidents or jams. If Google's building Waze data into its maps service like this, you can bet your bottom dollar it's going to be persuading as many Google users as it can to either volunteer this sort of data personally or supply it passively (via their smartphone location feeds). With enough users taking part in this sort of crowdsourcing effort, real–time traffic alerts and other tricks in Google Maps are going to be self–evidently more useful. So there's a real benefit here for people who opt in, and that'll prompt many more folks to take part––which means voluntarily giving Google data on your real–time location, albeit perhaps anonymously.

Google's own John Hanke has also hit the news today with a Google–related location service. In a piece at the Washington Post, Hanke is busy talking up Field Trip, Google's highly location–aware service that will prompt users with an alert "when there's a point of interest nearby, such as a historic statue or a restaurant that got a good review from a local blog." For this to work, if you think about it, a user has to be permanently sharing their location with Google's various services: It's just going to be useless without this system. As Hanke sees it, Google helped teach smartphone users to work out where they were and, as the post puts it, "Google Maps trained us to follow directions. Now its former developer wants us to explore." He says Field Trip doesn't store that data, but instead tallies up when users trigger one of the Field Trip alerts––so it's merely keeping track of where people are setting its alerts off in real time.

Separately, Apple's next–gen iPhone operating system is coming up, and iOS 7 has a surprising little secret built right into it: Deep inside the privacy settings is a section labeled "Frequent locations." The settings page alerts iPhone users that you can flip a switch to "allow your iPhone to learn places you frequently visit in order to provide useful location–related information." You can also flip a switch to allow Apple to "use your frequent location information to improve Maps." Beneath that is a short list of places you've been recently and frequently, and you can even see these on a map.

The revelation about iOS 7 has already stirred up some fuss online, but this overlooks the fact users can simply throw the digital switches on the page and prevent Apple from doing this location–sensing. And once you get over this, it's easy to see that Apple really does have a lot of location–based services planned for its iDevices, perhaps as a smarter shopping service than its own Apple–specific and location–aware Store app. Apple is busy building its own Maps app to rival Google after a prolonged spat, and it's evidently hoping that it can use user data instead of sending out an expensive fleet of tracking cars to drive the world's roads.

And lest you think all this is nonsense, and the average smartphone user wouldn't feel comfortable sharing their real–time location data with a company like Google or Apple for fear of the NSA snooping over their shoulder, it turns out that consumers are pretty happy to do so...as long as there's a reward. A fresh study by Placecast asked 2,000 consumers if––assuming they gave permission––they'd be interested in getting smartphone alerts about new products or sales or restaurants on their device. 45% said they were "somewhat" interested, a dramatic upswing from 2009's 26%. Asked if such location alerts would be useful to them, over 75% said they would be––and more relevant than other coupon–based promotions. 87% said location–aware ads would make them aware of locations they'd previously not visited.

That means that for shopping at least consumers would be happy to tell a name like Google or Apple their location...which may also imply they're happy to share more personal preference data in order to receive precisely tailored coupons. And this practice is already underway via the app Foursquare, which was recently revealed to have sold location data to third parties in order to enable location–aware advertisements outside the confines of the Foursquare app.

What does this mean for developers? You need to get savvy quickly on the privacy requirements and permissions needed for you to collect users' location data. And you could probably profit from quickly devising ways to reward users for sharing their location, rather than simply grabbing it from their phones.

[Image via Flickr user: Rob Brewer]

What You Missed: August 21, 2013 Edition


The Perils of Shiny New Objects

It's easy to get caught up in the cool new thing, but don't lose focus. Sudden shifts in direction can derail your startup.


When Apps Modify Behavior

Wish you could have seen the look on your face? This app's got your Frontback.


Reason To Jailbreak?

iOS 7 provides a variety of features previously reserved for hacked hardware. Is your phone best left under lock and key?



The Great App.net Mistake

Looking for the most valuable, versatile social networking tool on the planet? It's here, but you're gonna pay for it.



Steve Jobs And AT&T

In 2007, Apple and Cingular entered into an unprecedented revenue sharing agreement. How the deal got done.


The US Patent and Trademark Office confirmed that the tech giant has patented a three–finger gesture for a proximity–based UI. Not really a shocker . . .



No Matter What

Over one third of readers finish books they don't even like. Give it up, turn the page.



A Device That Tracks Your Mortality

The Endotheliometer can tell you how long you're going to live. Where we're going, we won't need nodes.


Artist Implants RFID Chip In Hand

Anthony Antonellis put art inside himself so he could share it with you. To hell with tattoos.


Jobs

Ashton Kutcher's Steve Jobs just not on target. Again, not a shocker.


Keep Reading To See Curated Reads From Previous Days' News.


August 15, 2013

For Sale

App discovery is a tall obstacle, but there's a solution: Pay to play.



My Mom Called Me Out…

Spend too much time talking and your product is likely to lay an egg. Mama bird just kicked this developer out the nest.


Is Entrepreneurship Too Fashionable?

So you wanna work for yourself, huh? Your career path is so last fiscal quarter.


Expectation Of Privacy

A recent filing by Google claims that "a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties." Oh, so it's that kind of party . . .


How Not To Launch A Site For Women

Contemporary research shows that women have a variety of interests. Wait, really?


August 13, 2013

Tech Hacks Won't Fix Our Surveillance Problems

Some suggest the best way to combat government spying is with even more technology. Why buying a dozen dogs doesn't solve your cat problem.


Google Fiber Banning "Servers"

Google's ISP user agreement dances around the tech giant's own basic philosophy. You're served.


Bleak Future For Apple?

Steve Jobs left Apple twice, this time for good. Larry Ellison explains how the tech giant couldn't live with the cofounder, but can't live without him.


To The Creators

For what seems like forever, you've poured your own blood, sweat and tears into your product. So, what took you so long?


Use Encryption A Lot More

Intelligence agencies have access to all unencrypted communications on the Internet. Why are journalists taking this information so lightly?


August 7, 2013

The Anti–Apple

Conventional wisdom suggests Amazon.com serves as yin to Apple's yang. Horace Dediu's looking for an orange.


CE uh–O

Executives often live two lives. Can they keep their balance?


Reflections From A Manager

Looking for tips on how to be a better boss? Try the mirror.


Ego Depletion

When it comes to successful development, your user comes first. So get over yourself.


Bezos+Washington Post=Optimism

Jeff Bezos is successful and the Capital's paper isn't. There's room for improvement. Do the math.


Finding Your Purpose

Establishing why you want to do something and saying so up front takes [g]uts. Grow a pair.


New York City By Drone

Recently spurned by opera, a Phantom scaled some of the city's most beautiful buildings shooting stunning footage. Who is that masked man?


August 6, 2013

Objectively Stylish

Here are the New York Times' style guidelines––for Objective–C. Know the rules. It's the only way to bend them.


Open Thank–You To Open Source

Developers owe the builders of open source software big time. Take a letter!


You Literally Represent Everything Wrong With The World

Hey, you know what would make this movie better? The movie.


What Customers Hate About Social Brands

What drives customers the craziest? Your spelling. Also, your sense of humor.


Practice Makes Efficiency, Not Perfection

Researchers believe that repetition of tasks makes for simpler neural pathways and a less energy–indulgent brain. Monkey see, monkey do, monkey do better next time.


Tweets Really Can Boost Ratings

Nielsen determined that Twitter holds statistically significant sway in viewership. Networks, you're on notice.


Surveillance: The Enemy of Innovation

As technology grows more voyeuristic, public and private surveillance permeate our lives. Long live the status quo!


Jeff Bezos's Most Recent "Post"

Why did Jeff Bezos buy the Capital's newspaper? A source close to us has no idea.


NSA Surveillance And Mission Creep

Agencies outside the NSA are requesting surveillance information to use in their own, unrelated investigations. Oh, and the DEA's pants are on fire.


August 5, 2013

The Idea Maze

Founders should be students of their game, so take notes, there's a test. Do well, get cheese.


Dubstep

Music keeps societies' rhythms. Bassnectar gets us up to speed.


Working In The Shed

Attention spans are short, but Matt Gemmell knows a trick. Hocus focus.


Why Mobile Web Apps Are Slow

Mobile app speed is enough to drive you up the wall. Drew Crawford takes your brain for a spin.


Hard Work Isn't Always Enough

Think all it takes is dedication and follow–through to get better? Take a lap.


Why Quartz Values Email Newsletters

MailChimp works. Now dance, monkey!


Senators Can't Agree On Who's A Journalist

Senators don't know. But they're fleshing it out.


Words Are Hooks, Words Are Levers

When it comes to kicking words around, consider the impact: "Turf" toe.


July 31, 2013

What I Learned Writing 30,000 Words

Branding ain't easy. Unless, of course, you're motivated. Then it's a piece of cake.


Collaboration Doesn't Work

How do you increase productivity without ostracizing your employees? Stop calling meetings. And don't say the C–Word.


You Are Building A Brand – Whether You Realize It Or Not

Marc Barros doesn't think you should outsource your branding. After all, if they build it, who will come?


The Much Pricier Minnowboard

Intel's new minimalist PC may cost a fortune compared to its British counterpart, but I/O performance and expansion are as easy as Raspberry Pi.


Ghost[buster] Of Computer Science Past

Programmers often turn a cold shoulder to the greats who came before them, dooming them to the same frigid, digital landscape developed years ago. Don't forget your booties . . .


Why the Internet Needs Cognitive Protocols

As Internet traffic multiplies exponentially, network infrastructures will no longer be sufficient by the end of the decade. Antonio Liotta's getting nervous.


Slow Ideas

Some of the best ideas in human history are the last to catch on, but why? This renaissance surgeon reminds us the road to nowhere is paved with good inventions.


July 29, 2013

Down With Lifehacking!

When opportunity knocks, tell it the door's open. Take it easy.


Time Is Right For Video Initiatives

The Washington Post's "PostTV" brings online video content to readers everywhere. Tune into the noob–tube.


How To Hire The Best Designer For Your Team

Finding the right designer is about as easy as hunting a unicorn. Braden Kowitz details a most dangerous game.


Silence Is Golden

Sometimes less is more. So shut up.


Understanding Google

You put the right one in, you get the right one out. Google gets horizontal.


3 Reasons To Write

Everyone wants to be a writer. So why doesn't anyone write?


3 Ways Running A Business Makes You A Better PM

Kenton Kivetsu knows what it takes to excel in your business: Know it like the back of your hand.


Why We're Doing Things That Don't Scale

Automation limits your company's most valuable, human resources. Jason Fried tips the scales.


After Award, Engineer Says NSA Shouldn't Exist

The NSA handed out its first "Best Scientific Cybersecurity Paper" award last week to a most ungracious recipient. Joseph Bonneau bites the hand that feeds him.


Getting Back Your Series A Mojo

Mark Suster likes entrepreneurs with something to prove. After all, if something's broke, effing fix it.


July 24, 2013

You Can't Fire Your Investors

You can pick an investor, and you can pick what your investor knows. But you can't pick your investor's nose.


Great Products Have Stories

Marketing 101: Grabbing people's attention from the front of the class can be tricky. Unless, of course, it's show and tell.


Who Are You?

As an entrepreneur, it's imperative you get to know yourself. Have a seat on the couch.


Twitter Is Gaining In Popularity

Usage rates are up in virtually every media network over the last decade. Get Social.


Why Stylus Fit Better My Needs

New languages ain't easy. Stylus helps out with CSS syntax.


Roll Your Own Summer Coding Camp

Learning to code can be intimidating. Here's a way to teach yourself a new language, within your own specific time frame. Don't forget the marshmallows.


Data Compression Proxy

Google is rolling out a mobile browser that cuts data usage in half. You down with DCP?


NSA Implements Two–Man Control for Sysadmins

The NSA has implemented a brave new security policy to tighten things up: The Buddy System.


Religion And Our Evolution

Don't send a priest to do an anthropologist's job. Cadell Last examines religion in a contemporary world.


July 23, 2013

The Missing Step In Lean Startup Methodology

So your product solves all life's problems. Why isn't anyone using it?


Switching From iOS To Android

As the Android platform matures, people are finding iOS more and more restrictive. Why this professor pimped his phone.


Love What You Build

Not sold on your own product? Then why would anyone else buy it?


There Is No Application For Entrepreneurship

Kevin Rustagi has some advice: Stop asking permission to be successful. This is America, for crying out loud.


Victory Lap For Ask Patents

The boundaries of patent law are as blurry as any. This entrepreneur brings intellectual property rights into focus.


NFC–Enabled Jewelry

NFC technology has Europe and China under its spell. Here's one ring to fool them all.


Apple Flat, Google 3–D?

Google recently announced a drastically different design approach. It's different. But why?


July 22, 2013

Apple Acknowledges Hack

Apple says they're not entirely sure if any confidential information fell into the wrong hands during Thursday's security breach. Wait, which are the wrong hands again?


The Rebirth Of Windows Mobile

Windows missed the boat on tablets. Jean–Louis Gassée plots Steve Ballmer's new course.


Motorola X Leak

The phone maker's got a rat. Seth Weintraub's got the cheese.


How Yield Will Transform Node.js

Asynchronous code reads like a traffic jam. Alex McCaw breaks down how Yield can get things in sync.


Your Startup's Office Is Missing A Room

The most important aspect of your product is how it's put to work. Tomasz Tunguz fights for the user.


Parsing The $900M Surface RT Writedown

Microsoft announced a massive revaluation of their inventory Thursday. Alex Wilhelm is at a loss.


Negative Space In Design Terms

Is negative space an important design tool? Christie Johann thinks so. In fact, she's positive.


Why Most Apps Are Free

People will put up with anything, and Android users are cheap. Mary Ellen Gordon applies Flurry Analytics to app pricing.


Downward, Ho!

Can't see the forecast for the trends: Nathan Kontny explains how losing faith in the face of obstacles is no way to grow a business.


News Orgs Developing "Digestible Digital Weeklies"

Dailies and monthlies can be hard to swallow. How some magazines are cooking up something just right.


Explore Local Politics With Network Graphs

This journalism professor hates politics. Listen to him.


July 18, 2013

Apple, Google Join Forces, Request NSA Data Be Made Public

Sixty–three recently embarrassed tech companies are calling for more transparency in surveillance requests. What are the chances the NSA sees right through them?


DHS Puts Its Head In The Sand

Bruce Schneier came across a DHS memo detailing a strange new security policy: The Honor System.


iWatch's Novelty Emerges

Apple is putting together a team of experts in development of a new, fitness–centered piece of wearable tech. It's all in the wrist.


A Mathematical Look At The Arab Spring

Youth bulges beget political unrest. Or do they? Get a job, hippie!


Runaway Heron

Germany's Bild posted video footage of a 2010 runway accident involving the popular drone. Is there a pilot on board?


What Journalists Need To Know About Responsive Design

There are seemingly endless formats for site design across platforms. Casey Frechette reminds us of a "core Web principle": It's all in the way you look at it.


July 17, 2013

The Creepy Practice Of Undersea Cable Tapping

The government has been monitoring underwater communications since the Cold War, but how much can they really dig up? Olga Khazan mines the abyss.


The Three Phases Of Startup Sales

Sales strategies must evolve with a business. Tomasz Tunguz lays down the steps to get you to the top.


Ring The Freaking Cash Register

Mark Suster has seen the cash dry up within many well–funded new startups. His advice? Put money in the till.


How Google Picked "OK, Glass"

How did Google settle on their activation phrase for their new wearable tech? It's the blind leading the blind, only now they can see.


Poor Quality Will Kill You

Startups fail for all kinds of reasons, but one thing is for sure: Shoddy product is not an option.


5 Things Journalists And Musicians Have In Common

Tunes and news have changed drastically in the last 20 years. Angela Washeck reports on how the two industries evolved in harmony.


NBCNews Still Finding Its Footing

One year post–split with Microsoft, NBCNews still looking for its legs. Jeff John Roberts maps out the network's quest for solid ground.


July 16, 2013

Disgruntled Google Users Live Low–Google Lifestyle

Sam Whited and Adam Wilcox have grown tired of Google's ever–changing landscape, so they're cutting it out. Here's a peek at their new preferences.


Hackers Turn Verizon Box Into Spy Tool

The cell giant's network extender can be modified into a small transmission tower capable of picking up all cell traffic in its range. Someone alert the NSA . . .


Flexible Batteries That Could Power iWatch

ProLogium has developed new ceramic lithium batteries that bend the rules of smartwatch–making. Will Apple and the Taiwanese company band together?


Opbeat Nets $2.7M For "Web Ops" Control Center

The Danish startup is committed to providing development support to other startups. If it's broke, they'll fix it. Get to work.


How to Solve the Biggest Frustration Marketers Have

Social Media lacks reliable ROI measures, and it drives marketers up the wall. Mark Suster thinks it's time they took awe.sm for a ride.


Apple Pitches Ad–Skipping For New TV Service

Apple wants users to be able to skip ads during television programs while still compensating the advertisers. But the service will come at a premium.


Choice Of A Rightly Paranoid Generation

Though not without faults of its own, Bitmessage offers users concerned with their privacy some peace of mind. How this hacker favorite might go mainstream.


How To Be A Better Writer: Fail Like A Comedian

Nathan Kontny knows what it takes to get better: Practice. Wait, that's not funny.


July 15, 2013

Microsoft Pays First "Bug Bounty"

Having long resisted bounty programs, Microsoft is finally putting their money on the line. Make check payable to "Google."


Apple Should Protect Us From Porn

A Tennessee lawyer filed suit against Apple claiming damages from devices that can display porn, and his own subsequent addiction. The first step is admitting this is someone else's problem.


Microsoft Reorg: The Missing Answer

Microsoft announced last week that they will reorganize their company's structure. Apple may not have fallen far, but this tree wants it back.


Not A Geek

Do decades of developing a geek make? Matt Gemmell waxes existential.


Data Storage That Could Outlast The Human Race

A million years from now and at 1,000°C, 180 TB of information will still be readable on a single disk. And still, the glass is only half full.


The Complete Guide To Hashtag Etiquette

Hashtags gather people together for conversation. Shea Bennett reminds you to mind your manners. Pound it out.


How Intellectual Property Reinforces Inequality

Myriad Genetics' recent claim to DNA ownership looks like an unethical cash grab aimed at exploiting inequalities in the American health care system. Joseph E. Stiglitz reminds us everything that shines ain't patent leather.


Tiny Robotic Cubes Could Rule The Solar System

Researchers at the University of Michigan launched a Kickstarter aimed at funding revolutionary new space probes they believe can be sent millions of miles into space. And they're no bigger than a breadbox.


Do Things That Don't Scale

Startups succeed for all kinds of reasons, most of them hard work. Don't be a quitter.


July 11, 2013

"What Running Has Taught Me About Entrepreneurship"

Adii Pienaar found parallels between exercise and enterprise. Here's a game plan to help you achieve your personal best.


"IFTTT: A Different Kind Of iOS Automation"

Federico Viticci and IFTTT separated ages ago. Can they rekindle the magic?


"Chromebooks Exploding!"

As laptop sales plummet, Google's hardware has the $300–and–under PC market on the defensive. Chance Miller has the intel.


"The New York Times Is Building A New TimesMachine"

The next generation online archive features increased functionality changing the way we view the past. But it's the technology behind it that's really in flux.


"Wired's Profile Leads With Wardrobe"

Cade Metz led with three paragraphs on fashion in his recent piece on Google engineer Melody Meckfessel. How progressive!


July 10, 2013

Apple's Plans For IGZO Display Integration

Apple has plans for IGZO displays in iPads and iPhones, we know. But are there plans for MacBooks?! Lighten up.


What Samsung's New U.S. Headquarters Says

The new LEED Gold–rated building in San Jose speaks volumes about the tech giant. Alexis Madrigal translates.


Build Brand Awareness First – Distribution Second

Many startups establish presence before demand. Marc Barros thinks that's back asswards.


Gaining Mobile Traction Is Harder Than Ever

The mobile marketing landscape has changed. Andrew Chen tracks the industry's evolution.


Post–Reader RSS Subscriber Counts

AOL Reader, Digg Reader, and The Old Reader don't publish subscription stats. Marco Arment wants to change that. Nothing personal . . .


A Refresher Course In Empathy

Customer support systems often lose sight of what's important. Emily Wilder wants things back on track. Where there's a skill, there's a way.


Dropbox Blows It Up

Dropbox already connects you to your stuff. What if they connected your stuff to your stuff? You're gonna need a storage unit.


July 9, 2013

Turn Anything Into a Drone

Sure, your bike has wheels . . . but can it fly? 3–D drone home.


Effecting Change From The Outside

Marco Arment believes Apple hears users' complaints and uses them to effect change. The creator of Instapaper encourages everyone to use their words.


The Dangers Of Beating Your Kickstarter Goal

Tim Schafer and Ron Gilbert are almost a year late with their much–anticipated new adventure game. They ran into a notoriously BIG problem . . .


This Is Not a Test

America's Emergency Broadcast Systems are vulnerable to attack. Steve Wilkos is Nostradamus.


Former Windows Chief Explains Why It's So Hard To Go Cross–Platform

As platforms develop, bridging the gaps between them becomes increasingly difficult. Steven Sinofsky articulates the communication breakdown.


The Computers That Run The Stock Market

If you play the market, Citadel has likely handled your money. Meet the machines behind the Machine.


July 8, 2013

Modeling How Programmers Read Code

Michael Hansen shot video demonstrating how varying levels of programming skill affect a coder's pattern recognition while reading code. This just in: Practice makes perfect.


Technology Workers Are Really Young

PayScale took it upon itself to determine the median age of workers in technology. The results, next time on, "The Young and the Breastless."


Everything Gmail Knows About You And Your Friends

Researchers at MIT are mapping people's social lives by way of their email accounts. Stick your nodes in other people's business.


NSA Collaborated With Israel To Write Stuxnet Virus

Edward Snowden says that intelligence agencies dig deeper than we know . . . and they're working together. What else can he see with "Five Eyes?"


Facebook Begins Graph Search Rollout

Facebook announced the much–anticipated search function will launch this week with improvements upon beta. What all can we expect from the new tool?


iOS 7's Design Bold, Flawed

Christa Mrgan illustrates Apple's new 2.5–D design approach. Might wanna grab your glasses.


What Kind Of Crazy Scheme Is Motorola Hatching?

Google and Motorola are working on "the first smartphone that you can design yourself," but what does that mean? Smartphone buffet. Get stuff[ed].


UI Principles For Great Interaction Design

Interaction Design is a relatively new field and not everyone knows it well. Christian Vasile touches on the basics and lays down a working foundation for the rest of us.


Designing App Store "Screenshots"

Travis Jeffery has some advice for iOS developers: Stop taking screenshots, start making them.


Apple, Google And The Failure Of Android's Open

Think Open Source is winning? Daniel Eran Dilger will be the judge of that. Case closed.


July 2, 2013

Build It, But They Won't Come

Too often developers value product over marketing, decreasing their chances of success. Andrew Dumont looks to level the playing field.


"Pick The Brains" Of Busy People

People with packed schedules aren't easy to pin down, especially for advice. Wade Foster plays to their egos.


iOS 7 GUI PSD

Ready to familiarize yourself with iOS 7's graphical user interface? So is Mark Petherbridge and he's got the Photoshop document to prove it.


Google Glass Updated

Google announced major software updates coming for its wearable device. OK Glass, whaddya got?


Forthcoming "Cheap" iPhone Potentially Hideous

Leaks suggest the newer, less expensive iPhone is manufactured in Candyland. Christopher Mims takes a lick.


How Facebook Threatens HP, Cisco With "Vanity Free" Servers

Facebook's DIY lab poses real questions regarding the viability of open source hardware. Efficiency is the name of the game . . . and what savings!


July 1, 2013

How Apple's iLife, iWork, iBooks Could Look

iOS 7 will change just about everything. Michael Steeber takes a crack at apps' new aesthetic.


HP Smartphone In The Works

Two years after shutting down its mobile division, Hewlett–Packard is back in the game. Just don't ask for a timetable. Better late than never . . .


Most Willing To Exchange Private Social Data For Better Online Experience

More than half the social media users in the UK say they are willing to share private information for a more personalized web experience. England as an open book? Hey, a deal's a deal.


Anatomy Of A Tweet

140 characters are worth a thousand words. Shea Bennett explores the makeup of the world's favorite micro–blogging platform.


Startup Investing Trends

The small business landscape has changed drastically in the last 25 years. So will investors make more money moving forward or less? Paul Graham says more. Lots more.


Data Journalism Is Improving – Fast

The Data Journalism awards showed that the genre is gaining traction. Frederic Filloux shares three personal insights into the ever–changing DJ landscape.


Google 'Working On Videogame Console'

Wearable tech may not be the only advent in the search giant's future. Google's got game.


Wibbitz Could Wipe Out Publishers' Video Businesses

Paul Armstrong details the newest player in news. Small markets just got a whole lot bigger.


June 27, 2013

An Open Letter To Apple Re: Motion Sickness

Craig Grannell is sick to his stomach at the thought of more full–screen transitions. But he can't be the only one. Anyone have a barf bag?


Women in Tech

Women and minorities are underrepresented in tech. But there are two crowdfunding projects trying to change all that. Cast your vote.


Pre–9/11 NSA Thinking

Fifteen years ago, the NSA assured the American people that our security and privacy were their top priority. Bruce Schneier takes a look at what changed.


WikiLeaks Volunteer Paid FBI Informant

Sigurdur "Siggi" Thordarson hid inside WikiLeaks as an FBI informant for three months and $5000. Secrets, secrets are no fun . . . and they don't pay for shit either.


PayPal/SETI To Create Interplanetary Payment System

Astronauts have long felt the need for intergalactic autopayment options, but soon they might pay bills from space. Quick, phone home.


June 26, 2013


Make Better Business Phone Calls

Mark Suster knows how to build business relationships, and not with emails. The entrepreneur–turned–venture capitalist lists seven ways to improve your asking strategies. It's face time.



Premium Pricing, Exclusivity & A Higher Demand

Adii Pienaar employs cognitive dissonance in defense of PublicBeta's premium pricing structure. Either it makes you money or saves you money, but no matter what, it costs you money. You decide.


Can Apple Read Your iMessages?

Apple claims it doesn't share your information with the government. Cryptographer Matthew Green reveals two truths about iMessage's user security that might surprise you. Say metadata decryption 10 times fast.


eBay Builds New Engine, New Identity

In 2008, eBay found itself lost within the next generation of search engines. Marcus Wohlsen explains how chief technology officer Mark Carges took action, forsook auction.


Inside YouTube's Master Plan To Kill Lag Dead

YouTube recognizes the importance of progress bars, so they're reinventing the wheel. Instant gratification, here we come! It's the best thing since . . . how does that one go again?


Genes And Memes

Cadell Last draws parallels between genetic and memetic evolution. Is Richard Dawkins the missing link?


Why You Can't Find A Technical Cofounder

Guest writer Elizabeth Yin lists the things developers look for most in a technical cofounder, and a number of ways to gain traction. Remember the three things that matter least in tech startups: Location, location, location.


June 20, 2013

Something Old, Something New

Digg is slated to replace Google Reader by July 1. And while that may not be nearly enough time for some, Andrew McLaughlin keeps his promises . . . with gusto.


Check Out Tim Bucher's Secret Startup

The ex–lead engineer at Apple is pillaging tech giants for employees at Black Pearl Systems. Meet the internet's newest band of pirates. Argh!


"Steve Jobs Once Wanted To Hire Me"

Richard Sapper remembers his career in design, condemns commercialism, and reveals he once forsook geek Jesus. #OMGY?!


Does NDA Still Make Sense?

The first rule of nondisclosure is: Shut Your F#@%ing Mouth. But seriously, speak up.


Traveling, Writing, And Programming (2011)

Alex Maccaw spent almost an entire year abroad, killing it. Get ready . . . Jetset . . . Go!

Wrong Need Not Apply

R. E. Warner dislikes critiques . . . reading them, anyway. The coder–poet turns two wrongs righteous.


Schneier On Security

Scott Adams thinks we'll someday identify sociopaths by way of their Facebook usage patterns; Bruce Schneier thinks he's nuts.


Want To Work At Twitter?

Buster Benson's been with Twitter almost a year now. This is what it sounds like when ducks tweet.


June 19, 2013

Want To Try iOS 7 Without Bricking Your Phone?

There's a hassle–free introduction to iOS 7 available online. And while it may not be the smoothest transition, it gets the job done. Recumbo shows us what's what.


Moving The Web Forward Together

The open web is expanding evermore toward new frontiers. Chris Webb explains the necessity of new features, innovation, and trail blazing.


Asynchronous UIs––The Future Of Web User Interfaces

Alex Maccaw debunks request–response and outlines his vision for the future of user interface. Death to the spinning lollipop of death!


"Human Supremacists"

"The Superior Human?" questions whether or not human beings are superior to all other life forms. Humans: A) Rule; B) Are a disease; C) Abhor a Vacuum; D) Ain't so great after all. Cadell Last examines all of the above.


Wrong

Jony Ive's iOS 7 icon grid has supplied new inroads for design–related hater traffic. Neven Mrgan breaks down the gridlock.


On Discipline

Michael Heilemann declares iOS 7 the Alpha and Omega of modern operating systems. He's also pretty happy it's in beta . . .


Startup Beats Rivals, Builds "DVR For Everything"

Nate Weiner pasted Pocket together from scraps, but he attracted some vocal detractors. Stop copying!


Cat–Like Robot Runs Like The Wind

École Polytechnique Fédérale de Lausanne (EPFL) developed the world's fastest quadruped robot and hopes the Cheetah–cub stimulates search–and–rescue–related progress in robotics. Now if they'd only get to work on a bionic St. Bernard and some digital brandy . . .


June 18, 2013

Developer Finds Video Evidence In Instagram Code

Tom Waddington did some digging and found a mute button programmed into the popular photo–sharing app. But don't get your hopes up: Facebook is likely to stay mum at Thursday's event.


Popular Ad Blocker Helps Ad Industry

Ghostery shares data with the same industry its users avoid at all cost. Scott Meyer explains how he keeps his consumers close, and his customers closer.


If You Could Eat Only One Thing …

Elizabeth Preston breaks down the latest food fad. Hint: It ain't people.


Humans Immortal In 20 Years, Says Google Engineer

Ray Kurzweil believes medical advances in the last 1000 years suggest that humans may outpace organic decay. Someone alert the Social Security Administration . . . whenever.


Get Rid of the App Store's "Top" Lists

Marco Arment thinks "Top" lists suppress app store progress, and he's got a solution: Grease creative palms, not squeaky wheels.


The NSA story isn't "journalistic malfeasance"

Mathew Ingram breaks down both sides of the most recent ethics debate in journalism. Conclusion: We're all dirty.


June 17, 2013

Why Is Exercise Such a Chore?

Daniel Lieberman tells Anil Ananthaswamy how the human body evolved for long–distance running. This guy's got his head on straight.


Sexism Still A Problem At E3

The Penny Arcade Expo banned booth babes, but E3 is still behind the curve. Gamer Anonymous highlights the first step to recovery.


President Orders Spectrum Open For Wireless Broadband

Obama promises more Internet for the people. But how will the G–Men free up the bandwidth?


Anthony Goubard Built Joeffice In 30 Days

The Netherlands–based developer explains how Java is a part of a complete office suite . . . you know, when it's done.


Real Answers or Fake Questions From Xbox One Document?

Owen Good analyzes some frequently spread rumors about Microsoft's new Xbox One. Something doesn't add up . . .


Designer David Wright Has Just One Favor To Ask

NPR's latest hotshot developer is leaving news for Twitter. Wright tells Nieman Journalism Lab why design is the most prominent challenge to modern journalism. The solution is simple.


Reporting Or Illegal Hacking?

The Computer Fraud and Abuse Act makes life a living hell for whistle–blowers and highlights some glaring holes in the justice system. Just whom are we locking up?


June 14, 2013

The Most Effective Price Discovery Question for Your Startup

How much should your product cost? Ask your customers. Tomasz Tunguz outlines the importance of comparative pricing questions. He's always right.


Why The Hell Am I Building A Product With A Tiny Market?

Developing a product for a smaller market minimizes risk, but at what cost? Serge Toarca lists the pros and cons of niche programming.


8 Months In Microsoft, I Learned These

School and the real world just ain't the same. The recently graduated Ahmet Alp Balkan tells it like it is.


The Code You Don't Write

Measure yourself by the work you don't do. Tim Evans–Ariyeh works smart, not hard.


iOS Assembly Tutorial

Matt Galloway breaks down what holds machine code together and teaches us to speak this intuitive language.


iOS 7 Icon Grid

John Marstall outlines Apple's new icon design grid. But don't think for one second he likes it.


Apple Uploads Its "Mission Statement" Videos To YouTube

Apple released a slew of new videos revealing to the world what they're all about. 9to5Mac takes a look at the new direction.


How Three Guys Rebuilt The Foundation Of Facebook

Facebook rode hip–hop to the tip–top. Cade Metz explains how the world's most prominent social network continues growing and preserves "The Hacker Way."


Consumer Vs. Enterprise Startups

Bijan Sabet outlines the difference in funding two types of startups and reveals his love affair with the consumer world. Maybe we're not just dreamers after all . . .


How To Build A Solid Product Roadmap

Outlining a plan doesn't mean it will execute properly, but it sure helps. Kenton Kivestu nails down the framework necessary in any product development process.


Getting Swoll

Travis Herrick works out, and he knows why: Nothing worth building comes easy, not even bodies.


Google Accused Of Hypocrisy Over Google Glass

Google Glass might be the most invasive piece of consumer technology ever, and Google knows it. Time to look in the mirror . . .


Stop Worrying About The Death Of Showrooming

Physical stores may be going the way of the dinosaur, but showrooming is by no means extinct. Casey Johnston shines some light on a new online model. Might wanna try on some sunglasses...


First look at Apple's U.S.–manufactured Mac Pro

Apple unveiled the new Mac Pro at the 2013 WWDC yesterday. Here's a first look at the cylindrical powerhouse.


Will Apple Allow Third–Party Software Keyboards In iOS 7?

Rumor has it developers will be able to program their own keyboards in the new iOS. Can it be true?


Apple's New Promises To News Orgs

Apple announced a number of new products yesterday at the WWDC, not the least of which is iOS 7. Joshua Benton breaks down the tech giant's big day.


Soon You'll Be Able To Read iBooks On Your Mac

iBooks are now compatible with Apple's new Mavericks OS. Read up. Take notes.


Google Reader's dead and gone, but Google Glass is on the case. Applied Analog is interfacing your face.


Instafeed Lets You Curate Instagram Like RSS Feeds

The new app supplements Instagram, curating your feed by topic. But are they really in sync?


WikiLeaks Is More Important Than You Think

The NSA is gleaning information from some of the biggest players on the web. Matthew Ingram explains why having an independent leaks repository is invaluable.


Robots Will Leave Us All Jobless

Technological progress increases productivity across the board. But are those same advances costing people their jobs? Illah Nourbakhsh discusses the inconvenient truth surrounding the rise of machines.


Cops can't figure out the latest technology in car theft, and neither can automakers. Could signal repeaters, used in conjunction with keys in close proximity, be the answer? Repeat . . . Police stumped.


Your Information Is Fair Game For Everyone

The U.S. government monitors our every digital move. The NSA compiles vast databases of emails, calls, and browsing history. So why does China get all the credit?


The iOS and Android Two–Horse Race: A Deeper Look into Market Share

Apple and Google have long vied for control of the mobile market share. Mary Ellen Gordon breaks down the race and explains the difference between device– and app–share. Win, place, and show us the analytics.


How Facebook's Entity Graph Evolved Into Graph Search

Harrison Weber explains how Facebook uses structured data to target users with ads so that they can target their exes. Stalkers . . .


You Won't Finish This Article

People just don't read like they used to. Farhad Manjoo breaks down the analytics of the ever–shortening Internet attention span. Wait . . . what?


Why Google Reader Really Got the Axe

Google sentenced its RSS reader to death. Christina Boddington outlines the deliberations, the verdict, and this particular trial's outcome.


The Secret Worlds Inside Our Computers

Ever wonder what's going on inside your computer? Photographer Mark Crummett employs his world–class diorama skills to open up a whole new world in his new show "Ghosts in the Machine."


Robotic Street Sign "Points" In The Right Direction

Brooklyn's Breakfast invented an interactive street sign. Drawing from a user interface, social media, and even RSS feeds, Points can show you the way to your heart's desire. Now, where the hell is Wall Drug?


Your Ego, Your Product, And The Process

Too often, our process gets mucked up on account of feelings. Cap Watkins explains how letting go and opening up during the earlier stages of design can alleviate creative pains.


The Dawn Of Voice–To–Text

Carpal tunnel got you down? As the sun sets on hand–coding, Tomasz Tunguz explains the not–so–subtle nuances of dictation, and gives his wrists a well–deserved break.


11 Years Of WWDC Banners

The world's most popular developers' conference sold out in two hours this year. Here's a look at the banners from years past. Nostalgia!


Express.js And Node.js As A Prototyping Medium

Express.js and Node.js can intimidate first–timers. Fret not. Chris Webb shares a list of helpful hints to get you started and guide you along.


In–Store iPhone Screen Replacement And The Machine Making It Possible

Apple has announced a new service replacing damaged iPhone screens in–house for $149. The price is right, but what does it mean for AppleCare?


The Future of Shopping

Google takes aim at Amazon's Prime subscription with Shopping Express. From cosmetics to toys, they deliver anything within a few hours of your order. No toilet paper? Keep your seat. They'll be right over.


The Next Big Thing In Gesture Control

Thalmic Labs raised $14.5 million for its MYO Armband. With over 30,000 pre–orders already, the Canadian startup is poised to usher in a new era of touchless computing.


Who Is The Tesla Motors Of The Media Industry?

Some suggest that media is going the way of the American automobile. Matthew Ingram explains who's on cruise control, and who's bucking the motor trend.


Finding Good Ideas Through The "McDonald's Theory"

Creative block? Try Jason Jones's own intellectual Drano: Terrible Suggestions.


Why Are Developers Such Cheap Bastards?

Developers notoriously reject paying for necessary technology. In fact, many of them waste weeks writing their own, bug–riddled programs. But they will pay for services, like the cloud. What's the deal?


The Banality Of "Don't Be Evil"

Beneath Google's do–gooder facade lies something more akin to a Heart of Darkness. The tech giant got into bed with Washington, and now they're working together to implement the West's next–generation, imperialist status quo. But don't look, they're watching.


The Straight Dope On United States vs. Apple

The publishing houses have all reached settlements, but Apple's still on the hook. Here's a look at the core issues driving the government's case.


Everything You Know About Kickstarter Is Wrong

The crowd–funding site has never really been about technology, but new requirements make it even harder to raise money for gadgets. Artists aside, it's time to look elsewhere for cash.


A Real Plan To Fix Windows 8

Microsoft's "integrated" operating system never worked well for tablets or PCs. How InfoWorld aims to dissolve this unholy union and salvage what should be a healthy, digital relationship.


Why The Hell Does Clear For iOS Use iCloud Sync?

Milen explains why Clear and iCloud make natural bedfellows, and how they fell in with each other in the first place.


Here's What's Missing From iOS Now

FanGirls compiled a miscellaneous iOS wish list for all the good girls (and boys) to see. From Wi–Fi and Bluetooth to file systems and bugs, here are eight reasonable expectations for the future of iOS.


Startups, Growth, and the Rule of 72

David Lee uses Paul Graham's essay "Startup=Growth" as a jumping off point to explain the metrics of growth. And don't worry if you've lost your mathematical touch, he has too.


"Starbucks Of Weed," Brought To You By An Ex–Microsoft Executive

Andy Cush explains how Diego Pellicer plans to become America's first real marijuana chain. They're looking for $10 million in investments to expand into three new states. They must be high . . .


Super–Cheap 3–D Printer Could Ship This Year

Pirate 3D is bringing the revolution to your doorstep, and for a heck of a lot cheaper than their competitors. Their goal? Get these things out to kids and see what prints.


A Story About The Early Days Of Medium

How do you create Medium and change publishing forever? By first gaining an audience with the man behind Twitter, duh. And a couple other Obvious ones . . .


Why Google Is Saying Adios To Some Of Its Most Ardent App Developers

Google is laying off its App developers in Argentina on account of a logistical banking nightmare. Really, it's just paperwork. In a related story, interest in Google's Internship remains underwhelming.


This Guy Screencaps Videos Of Malware At Work

Daniel White infects old hardware with contemporary viruses for educational purposes. But don't worry, he's not contagious.


The Rise Of Amateurs Capturing Events

You've met Big Brother, now meet "Little Brother." How the same technological developments advancing institutional surveillance are ushering in a new era of civilian watchdogs.


Three Mistakes Web Designers Make Over And Over Again

Doomed to repeat ourselves? Not so fast. Nathan Kontny shares a short list of mistakes he thinks designers should avoid.


Not So Anonymous: Bitcoin Exchange Mt. Gox Tightens Identity Requirement

Can we see some identification? Mt. Gox announces new verification procedures in response to a recent money laundering investigation into one of its competitors. And they've got their own legal problems, too . . .


The Wall Street Journal Plans A Social Network

The Wall Street Journal is working to connect everyone invested in the Dow Jones on a more private, financial network with chat. Suddenly, Bloomberg's got some competition.


Tumblr Adds Sponsored Posts, And The Grumbles Begin

Users are responding poorly to Yahoo adding advertising to Tumblr. Can sponsored stories save the day?


Sci–Fi Short Story, Written As A Twitter Bug Report

An anonymous man's @timebot tweets from the future, past, and present at once. But what can we learn, given Twitter's rate limits? The end is nigh.


Thoughts On Source Code Typography

Developers read code more than anyone. David Bryant Copeland argues for aesthetics in addition to content, stressing the importance of typography and readability in source code.


Marco Arment Sells "The Magazine" To Its Editor

Glenn Fleishman to helm the progressive publication as early as Saturday. It's business as usual, but with podcasts.


Mary Meeker's Internet Trends Report Is 117 Slides Long

The Kleiner Perkins Caufield & Byers partner will release her findings at the upcoming D11 conference. But you get a sneak peek . . .


Apple's Block And Tackle Marketing Strategy

Tim Cook explained yesterday why there are a million different iPods but only one iPhone, and stressed the importance of consumers' desires and needs. But will things be different after the WWDC?


Why Almost Everyone Gets It Wrong About BYOD

To Brian Katz, BYOD is "about ownership––nothing more and nothing less." Why allowing people use of their own devices increases the likelihood that they will use the device productively.


Remote Cameras Are Being Used To Enforce Hospital Hand–Washing

Ever wonder if your doctors' hands were clean? So did North Shore University Hospital. New technology sends live video of hospital employees' hand–washing habits . . . to India.


8 Ways To Target Readership For Your Blog

Blog functionality has increased considerably in the last 10 years, but has that overcomplicated things? Here's a list from Matt Gemmell (aka the Irate Scotsman) of ways to simplify. Your readers will thank you for it.


Pricing Your App In Three Tiers: The Challenges Of Channel Conflict

Cost– and value–based pricing may at first appear to conflict, but they exist for different kinds of consumers. Tomasz Tunguz explains some solutions to justify your pricing model and maximize your profits.


How Google Is Building Internet In The Sky

Google is already using blimp and satellite technology to bring the Internet to the farthest reaches of the planet. What they really want is television's white space, but they've got a fight on their hands.


You Wrote Something Great. Now Where To Post?

The writer's landscape has changed. But with so many new options comes confusion. How do authors with something to say decide where, and to whom, they say it?


Yahoo's Reinvention: Not Your Grandfather's Search Engine

CEO of Yahoo Marissa Mayer is bucking the minimalist trends she once championed at Google. Why the Internet portal may be making a comeback.


What Works On Twitter: How To Grow Your Following

Researchers at Georgia Tech are working to shed light on one of the Internet's unsolved mysteries. Here are 14 statistically significant methods with which you can increase your presence on Twitter.


Financial Times Invents A Twitter Clone For News

With the launch of fastFT, Financial Times hopes to keep its readers closer than ever by providing a 100–250–word service for news. Eight journalists are now tasked with breaking the most important financial stories from all over the world.

[Image: Flickr user Tanakawho]

Unblocked: A Guide To Making Things People Love (Part 1)


"There are myriad great ideas, but great execution is a rarity." ––Everyone

If you've ever tried to start a company, you've heard a thousand variations on this admonition––because it's true. While you might secretly harbor the notion that your idea matters a lot more than the herd supposes, you also know that great ideas inevitably die without great execution.

For company leaders, there are a plethora of resources that describe best practices in hiring, raising cash, and culture building. For product–focused leaders, it's easy to educate yourself on the characteristics of gorgeous design, smart UI, high–performing technology, and piercing data analysis. And yet, while we know plenty about what great products look like, there is a relative dearth of accessible resources about how great products actually get built.

This series of articles is about great execution in the product development process. I'm trying to shed light on the question: How do people make products that people love?

The Theory And Practice Of Product Development

I will outline both a philosophy of product development process––which I hope applies across a broad range of industries and organizational sizes––and details about how one can apply this philosophy in a small– to medium–size Internet company.

Theory and practice––they both matter. Without a strong theoretical framework, even the best processes will bend and break under the weight of scaling, changes in strategy, shifting team composition, and so on. Further, a strong theory of product development processes will infuse the work of making products with the energy and perspective needed to delight your customers again and again over time. And of course, theory without practice is, well, the stuff of epic execution failures.

At the center of my philosophy of product development is the idea that your task as a product leader is to ensure that every single person on your team is constantly unblocked, or completely empowered to do breathtakingly great work in alignment with your company's goals. This happens––and wonderful things get created––when your team is driven by radical curiosity and a relentless orientation towards perpetual learning and improvement.

Where These Ideas Originated

The processes described here have been crafted and refined over years, primarily through my experience building HowAboutWe.com. HowAboutWe is the only company in the world focused on helping people fall in love and stay in love. As co–CEO, I am responsible for making top–notch products; I lead our design, engineering, and data teams.

It's no mistake that I've become slightly maniacal about process. We created HowAboutWe utterly from scratch; we had zero directly applicable background or training. To succeed we had to learn quickly and never make the same error twice. So, we put processes in place designed to facilitate constant learning and iteration. Indeed, we constructed processes that themselves improved through their very use. So far so good: Every new thing we've built has been better than the last thing we built.

The processes I will describe here solve many of the thousands of micro–problems that arise as you're rapidly building something in a highly competitive environment, and reflect hundreds of conversations I've had with expert makers from many industries. But they're also somewhat homegrown, like most good things. In a sense, I'm just trying to put into writing how a bunch of smart people in a loft in Brooklyn are trying to help people everywhere "fall in love and stay in love."


10 Core Principles Of Product Development

Before diving into the nitty–gritty details of how great products are built––roadmapping, dev sprints, design flow, and data feedback loops––I'm going to outline 10 principles of product development process.

  1. Processes are the circulatory system of a company.

    Great processes bring rich blood to every aspect of product–making and company–making; they enrich everyone's creativity, strategic thinking, productivity, and motivation. Great processes help to ensure smart, wasteless action; they not only support every single person in your company to excel, but also create a radical collaborative effect that exponentially multiplies the power of your team.

    Conversely, bad processes yield bad blood: deoxygenated, nonimmune, sluggish. So you must––and this imperative is the central thrust of this series––become f%@$ing amazing at process. Your product and company's life depends on it.
  2. Curiosity is what gets things done.

    The staple of most product teams is: GET THINGS DONE! HIT DEADLINES! SHIP ON SCHEDULE! Don't get me wrong; I believe very strongly in getting things done on time and in working your ass off. But I also believe that the most effective source of effective productivity is actually curiosity––the insatiable drive to learn. If you build your product development process around this core human drive you'll discover that curiosity unleashed is a lion, not a sheep. Curiosity is the exponential rocket fuel that makes great product teams hum.
  3. You are a scientist running experiments.

    Stop thinking about yourself as a businessman or product guy/gal or designer or whatever you think of yourself as. You are a scientist. Your essential methodology is the scientific method: Hypothesize. Build. Learn. Iterate. Repeat. This is curiosity, manifest.

    Every single person in your company should feel that they are traversing an upward spiral of effectiveness in which everything they build––whether the immediate outcome is positive or negative––is a success because it will yield the smartest possible next move.

    You are all scientists and your creed is: If we don't learn, we die. If we learn, we thrive and others benefit.
  4. Processes are also products.

    Start thinking of your processes as the most important product you are building. Approach their creation accordingly: develop them collaboratively; assess them relentlessly; iterate tirelessly; repeat.

    Building great products is difficult. Likewise for great processes. Good process thinking is not a skill with which most people are naturally endowed. You should cultivate in yourself an allergic reaction to bad processes; try to experience them as the weird form of modern industrial abuse that they are. (I'm not joking.) And then fix them. Inertia says no one else will.
  5. Great processes constantly evolve.

    There is no such thing as a universally perfect process. Anyone who says so is a narcissist or a fool.

    You need to transparently evolve your processes along with (or one step ahead of) your context. This is particularly critical in a shifting context (e.g., when your company is doubling in size every six months).

    The most painless and effective way to evolve your processes over time is to build into them structures that ensure perpetual improvement. A process that doesn't evolve will become rigid and sallow. A process that grows of its own accord will inspire trust and generate ever–increasing velocity.
  6. Pick smart tools.

    A lance is good for some things and a dagger is good for others; you shouldn't have just one weapon. Find the best tools for your tasks and master those. Build the tools you need that don't exist and master those, too.
  7. Make your processes invisible.

    Given the choice between more or less process, I clearly sit on the "more is better" side, but this doesn't mean wasting time or being anal or getting trapped in bureaucratic tomfoolery. Smart processes should be nearly invisible. They should be built into people's natural workflows. They should be as asynchronous as possible; you should walk around with an implicit fear of bad meetings. Great processes shouldn't yield maniacal attention to detail when detail isn't what's needed. Great processes should result in rapid, effective decision–making. They should generate consensus but not waste time doing so.

    "More process" means more attention to what matters and less to what doesn't.
  8. Don't forget the human part.

Processes are most obviously composed of the concrete systems that give structure to our workflows. But they are equally about more subtle behaviors like how you approach listening, the attitude with which you hold the helm, the way you empower individuals to become their best, the trust you inspire in people that anything broken will get fixed, etc.

    In this series I'll be focused almost entirely on concrete systems. But I urge you not to forget this second, more human aspect of great product development processes. Great things get made when people are inspired, and inspiration needs tending, like a bonsai.
  9. It's worth the effort.

    The stupendous energy it takes to make your processes whirr is completely worth it. Great processes will improve every aspect of your work, from low–debt code and breakthrough design to palpably high morale and tactical agility. Your team will become happier, smarter, and more productive. And, most importantly, your product will more effectively serve the people using it.

    So take it upon yourself to make sure everyone knows what they need to do, has what they need to do this, and understands why doing it will help the company achieve deep, inspiring goals.
  10. Stop fearing failure.

    Fear of failure is the enemy of growth and ingenuity. Growth requires doing things whose outcome cannot be predicted, and learning requires failing. Show me a successful man or woman who isn't riding into the sunset on the horse of failure.

    But the prospect of failure is very scary for most people. There are a hundred psychological and socio–cultural reasons for this. So, my point here isn't about your emotional state, though I urge you to convince yourself that the future will always be better than the present! Instead, my point is that your actions as a product leader should never be guided by fear of failure.

Build processes that let failure be an acceptable––even celebrated––source of fuel for your next victory. Fail fast and fail effectively and the future will feel full of hope––hope which will be validated as time passes.

A Map Of This Series

The rest of this series consists of granular descriptions of product development processes. At times this may seem overwhelmingly detailed. I've chosen this approach because the efficacy is in the details. Anyone running a product team has to solve the myriad micro–problems that the details I describe are designed to solve. I know that this will alienate/bore some readers, but I'm writing this series with pragmatism in mind, and with the hope that the people who attempt to implement these systems will benefit.

This series will cover four core topics:

  • Roadmapping
  • Design Flow
  • Engineering Flow
  • Data Flow

Since Engineering Flow is a biggie, we'll tackle it in several parts. In total, the series consists of seven parts. You can navigate between them below.

Two Quick Caveats

The processes described here are just one way of doing things. In a year the product development processes at HowAboutWe will almost certainly have evolved significantly. And there are certainly other ways of working that are equally (and very possibly more) effective––check out the list of titles at the end of the first part of this series for further reading compiled by the editorial team at Fast Co. I look forward to reading the comments!

If you have specific, directed feedback on this series––particularly suggestions on ways to improve my process thinking or my writing––then I'd really like to hear from you. You can email me at aaron@howaboutwe.com.

I consider myself and HowAboutWe part of a community of people who are trying to make the Internet a beautiful place; and so my hope is that this series of articles inspires you to love and respect process––and to make your processes elegant, ever–evolving, and radically productive.


Click here to read the next article in this series: Value–Driven Product Development: Using Value Propositions To Build A Rigorous Product Roadmap


Other Things Worth Reading

The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, by Eric Ries

The Lean Series, O'Reilly, Eric Ries, Series Editor

Running Lean, 2nd Edition: Iterate from Plan A to a Plan That Works, by Ash Maurya

Lean UX: Applying Lean Principles to Improve User Experience, by Jeff Gothelf

UX for Lean Startups: Faster, Smarter User Experience Research and Design, by Laura Klein

The Four Steps to the Epiphany, by Steve Blank

The Entrepreneur's Guide to Customer Development, by Brant Cooper & Patrick Vlaskovits

The Innovator's Solution & The Innovator's Dilemma, by Clayton M. Christensen

Crossing the Chasm, by Geoffrey A. Moore


You are reading Unblocked: A Guide To Making Things People Love, a series in seven parts

Other Articles in This Series

Part 1: Unblocked: A Guide To Making Things People Love (You are here)

Part 2: Value–Driven Product Development: Using Value Propositions To Build A Rigorous Product Roadmap

Part 3: Engineering Flow: Planning For High–Velocity Sprints

Part 4: Facilitating High–Velocity Engineering Sprints

Part 5: Design Flow: Achieving Breakthrough Creativity And High Yield Production

Part 6: Data Flow: Using Data to Constantly Improve

Part 7: Concluding Thoughts: Your Job is to Make Wonderful Things


[Photo by Opensourceway on Flickr]


This Open Source Twitter Replacement Is Absolutely Brilliant


We're living in a post–Snowden era, which means you should assume that everybody, including the social networks you use every day, is always looking over your digital shoulder. So how do you take back your online communication from the prying eyes of the government? Enter Trsst, a new social network that promises to be a white knight of media, keeping all of your cat posts and private messages safe and secure forever.

Trsst was built as an alternative to today's social media and blogging platforms, which not only cede to every whim of government content inspection, but can also reverse course on content ownership policy and feature development to tighten control over the service experience (remember Twitter's API lockdown last July?) at a moment's notice. These social media platforms have also largely shut down syndication and kept content inside members–only walled gardens.

Trsst wants to be entirely open and as reliable as possible. That's why the service chose to syndicate content via RSS, which has been the Internet's simplest way to "follow" people and content since it was first publicly released in 1999. If you're creating public posts on Trsst, users need only subscribe to your RSS feed, no signup required. Posts and blogchains are digitally signed (proof that nobody censored your content or eliminated a blog response), and fully indexable by search engines. If you want to send private messages, you don't need to worry about third parties reading them. Trsst itself can't crack into your content, they claim, because you keep your own encryption keys. That also means they can't hand your stuff over to the government, even under court order.
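
To make the signing idea concrete, here's a minimal sketch of a verifiable post, assuming Python's cryptography package and an Ed25519 key pair. This is our illustration of the general technique, not Trsst's actual code, and all the names are invented:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The author generates a key pair once; the public half is their identity.
    author_key = ed25519.Ed25519PrivateKey.generate()
    author_id = author_key.public_key()

    post = b"Hello from my uncensorable feed"
    signature = author_key.sign(post)  # published alongside the post

    # Anyone following the feed can confirm the post wasn't altered.
    try:
        author_id.verify(signature, post)
        print("Signature checks out: nobody tampered with this post")
    except InvalidSignature:
        print("Post was modified or censored in transit")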

Privacy and identity are handled by these same encryption keys. Your account's key signs all of your posts, and if you want someone to be able to view your private posts, Trsst lets you add their public keys, which gives them access. Shared private keys are used for group messaging: When somebody joins or leaves the group, the system auto–generates a new private key and distributes it to all of the members.
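
That rotate–on–membership–change pattern is easy to picture in miniature. The sketch below is again ours, not Trsst's implementation; it assumes Python's cryptography package and uses a shared Fernet key to stand in for the group's private key, skipping the step where the new key would be encrypted to each member's public key for distribution:

    from cryptography.fernet import Fernet

    class Group:
        def __init__(self, members):
            self.members = set(members)
            self._rotate_key()

        def _rotate_key(self):
            # A fresh shared key on every membership change means departed
            # members can't read anything posted after they leave.
            self.group_key = Fernet.generate_key()

        def join(self, member):
            self.members.add(member)
            self._rotate_key()

        def leave(self, member):
            self.members.discard(member)
            self._rotate_key()

        def encrypt_post(self, text):
            # Only holders of the current group_key can decrypt this.
            return Fernet(self.group_key).encrypt(text.encode())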

Journalists, activists, and revolutionaries need not fear man–in–the–middle attacks. The peer–to–peer connections themselves are encrypted, so sneaky listeners can't intercept content. Because the service is built around peer–to–peer connections, users can easily move around firewalls by going through other users' connections.

If you want to monetize your blogging content, Trsst even provides a digital wallet built into your account. Envisioning long–term pay–per–view content consumption, Trsst helps bloggers collect microtransaction payments through digital crypto–currency like Bitcoin––for views, shares, likes, or even anonymous donations.

All of this digital cloak–and–dagger defense makes it sound complicated, but Trsst abstracts away most of the hard stuff. Advanced users can of course access and control their keys, but for most users, the frontend will look just like Twitter or Facebook, complete with posting, following, friending, creating groups, and sending messages. So watch out, digital spooks: Your job may soon become a lot more difficult.

[Image: Flickr user Lars Plougmann]

$20 Million Short, What's Next For Ubuntu?


Ubuntu fans argue that the Edge crowdfunding campaign is already a game changer, despite reaching less than half of its funding goal. After all, it has brought to the forefront the concept of convergent computing. It's garnered Ubuntu big backers like Bloomberg LP, and it's sparked plenty of praise in the tech press.

But since the campaign was an all–or–nothing venture, there will not be an Ubuntu Edge to speak of––at least for the foreseeable future. So the question remains: What's next for Ubuntu and its quest for a share of the mobile market? How does the company recover from the blow of the failed campaign while capitalizing on all this momentum?

We sat down with Mark Shuttleworth, Ubuntu's founder, to talk about the release of Ubuntu Touch, convergence, government surveillance, and competition from Android and Windows 8.

Given that the Indiegogo campaign looks like it's going to fail, what's the next step for Ubuntu Touch without the Ubuntu Edge?

We're on track from a software engineering point of view to develop a first release of just a mobile–focused Ubuntu in October. So that'll be a 1.0 of the phone version of Ubuntu. We're having pretty detailed conversations with various manufacturers and carriers. The public portion of that is in the Carrier Advisory Group, which was formed a month or two ago, and which we closed at the end of July to new members. I think we ended up with about a dozen carriers in there, including guys like T–Mobile, Verizon, Smartfren in Indonesia, and China Unicom. So a pretty good range of carriers.

We were focused on getting our 1.0 into the hands of manufacturers, with a view to shipping something based either on that or the midpoint of the next cycle in early 2014. And so as much as everyone on the team really wanted to see us kind of skipping a few generations on the hardware cycle and going straight to where we think things need to go––which is convergence––there's still plenty of work for us to do, just getting onto mainstream mid– and high–end phones. They're not PCs in your pocket [like the Ubuntu Edge], but they'll still run Ubuntu itself beautifully from a phone experience point of view.

Looking at what the tech press has been saying, there have been some people saying that if the Edge campaign doesn't succeed, it might be curtains for Canonical or at least for your involvement with Ubuntu. Is there any truth to that?

No, there's absolutely no truth to that. I would characterize the Edge as a bit of a labor of love on the side. Our mission is still Ubuntu. What was interesting about the Edge was that we were almost on behalf of our hardware partners digging into hardware innovation, crowdsourcing––not just crowdfunding, but crowdsourcing––tapping into that spirit of saying, "Well, what's interesting? What could work?" And in a way that maybe the more conservative, traditional hardware industry isn't able to do. But I saw that as something that we could do to advance the state–of–the–art across the whole industry, rather than something central to Canonical.

And for me personally I kind of fell in love. With the Edge, I thought, "Wow, we could really move things forward if this device comes off." And our institutional commitment to convergence isn't really limited to one device. It's the whole ethos of our design and the user experience. A great deal of the engineering has gone into the Ubuntu mobile OS and has been designed to line up perfectly with both our PC and cloud offerings. And our interest in that topic of convergence is, in a sense, much broader than the Edge because all of the pieces of Ubuntu essentially are lined up around that idea that they would all converge. And it's noteworthy that the OS that we would put on the Edge is the same OS that's going onto the HP Moonshot ARM devices. So you've got data centers being reinvented around ARM silicon and it's powered by exactly the same OS we were proposing to put on the Edge. So I think that this convergence story continues. It's sort of like grand unification: as you increase the energy level, the forces line up and become different faces of the same underlying thing. And as we increase the capability of CPUs from mobile to embedded to PC to server, suddenly it becomes, I think, economically interesting to look at using exactly the same platform for personal computing.

So there's no chance of any OEM being interested in producing a convergent device at this stage?

Well, you know even this week we were chasing up some outreach we got from a couple of interesting angles. We've had folks––pretty senior folks––from a couple of major companies reaching out saying, "Hey this is really interesting. I'd like us to figure out if we can do something together around it." And yet, we haven't had a breakthrough that essentially could leapfrog us much closer to where we could green–light the project and go into production. At this stage, I think it's a very outside chance that one of those threads turns into the sort of commitment that would get us to open up in terms of green–lighting a convergent device.

So to look at Moore's Law and the historical trend of those devices, we would see something like the Edge be real in two to three years, just in terms of the RAM, CPU power, connectivity, and so on. And then there are some things in there which are not commercial–driven, like the screen technology and the battery technology we're interested in. But leaving that aside, just in terms of the capabilities required for convergence, we think high–end phones will get there on their own in two or three years' time, and the Edge was just going to short–circuit that to nine months. That would have been great.

In some ways, it seems like convergence was the killer feature of the Edge. So how do you see Ubuntu Touch stacking up and gaining a market share in the already saturated mobile market?

Right. I think that's a fair question. I think the short–term answer is that we need to find an audience that really wants something clean and simple and beautiful. In our conversations with carriers, they identified about 20% to 25% of their users who insist on having a smartphone but don't actually use it as a smartphone in the sense that they don't use a lot of apps and they don't really download content. Their focus is just upgrading on the contract to a newer phone.

For carriers, that's a bit of a problem because those customers aren't driving a lot of data traffic, which is a big contributor to the profitability of mobile contract carriers. And so, for us, there is something of an opportunity, because those are users who are quite possibly interested in having something stylish and beautifully put together that is relatively new from an app portfolio point of view. So in the short term, we think that's the opportunity, and we're working with carriers to get Ubuntu adopted with that audience. From there it can lift into the high–end devices aimed at the mainstream audience that's usually app–centric.

In the longer term, the real question I think is whether we can build the app portfolio. I think we have a pretty diverse story now in that we'll have, I think, quite a few of the top 50 apps. The vendors are quite excited about Ubuntu and they say that they will support Ubuntu. But for the long tail, we have to provide a very good mechanism for developers to port their applications. For HTML5 apps that's very straightforward and we'll have a large portfolio of those on Day 0. The next thing we have to work on is people porting their apps from Android. And I think we're in a pretty good position there because a lot of Android apps are actually developed on Ubuntu desktop. And so it's very easy for those developers to support both Ubuntu and Android with the same Java code base. There's work to be done there, but I think in the medium– to long–term, it's all about apps. There's sufficient momentum in the Android market, which is close enough to Ubuntu that we can make it easy for those developers to target both Android and Ubuntu with the same code base.

That's interesting to hear because I know Canonical has also shied away from fully supporting Android apps on Ubuntu.

Yeah, there's a sort of nuance to that. What we don't want to try to do is to make Ubuntu appear to be Android to the app. In other words, we don't want to try and do what BlackBerry did, which is to say, "Well, you can just throw an Android app at it and it'll work." And the reason for that is really user experience; it would feel like a fish out of water. You can do that. You can make an app run. But it's sort of weird. Whereas the other way to do it is to say, "If you want to support Ubuntu, it'll be a little bit of work. You have to understand the Ubuntu user experience and the conventions. And then you have to think about how to express some aspect of your application in that language, as it were." Then the implementation of that will still be relatively easy, right? It's the same code base in Java, but there's an additional set of APIs that you use, and you conditionally either use these APIs for Ubuntu or those APIs for Android. That's work that still has to be done because we didn't scope that work for the 1.0 of the product––right now our core focus is getting in with carriers to an audience that isn't very app–centric.

What do you think is compelling about Unity as a user interface that might compel a new audience to switch over to Ubuntu?

Well, for example, benchmark testing of Unity versus Windows 8 is very successful. People mentally have a much clearer idea of what's where with Unity than Windows 8. So I mean, that's a very successful find. I can appreciate what Microsoft's trying to do, because we've done it ourselves. Conceiving a complete new user experience is pretty challenging. And I'm pretty proud of the fact that we stand well ahead of them, based on user testing, in terms of navigating and being productive.

In terms of the real sell, a lot of what we put into Unity was specifically to be able to run phones, tablets, and PCs together elegantly. So that becomes an easier argument to make once we have phones and tablets running it; right now we only have PCs. So I think the real value of the approach that we've taken has yet to fully show itself. Once it's on our phones, our tablets, and PCs, then the coherence of that set of devices will, I think, stand apart from the crowd.

The Edge crowdfunding campaign obviously gained a lot of attention for Ubuntu and helped demonstrate demand and credibility for an Ubuntu–powered device. Some have commented that it's win–win for Canonical even if the campaign fails. Was that part of your strategy from the outset, knowing you'd get a takeaway even if the Edge isn't produced?

No. I certainly didn't set out to fail. We set out to do it properly. We were very mindful of the fact that a lot of other hardware–oriented projects have found themselves in the awkward situation that they succeed in the campaign, and then they realize they've underestimated a lot of the costs. And of course, the absolute number ends up sounding like a very high number. But every device made has material costs, unlike making a movie or writing a book or producing a computer game, where once you've covered the development costs, the individual units themselves have very high margins.

And nevertheless, I think as a concept device, it really would help people crystallize in their minds the idea that a phone should be the center of their personal computing, in a very profound sort of way. For most people, smartphones now are kind of essential companion devices. You couldn't get by without it. You absolutely have to have it. And I think the real beauty of the Edge is it got people thinking about the phone as the center of the whole personal device ecosystem. There are these two sort of fundamental forces at work. One is everything is becoming smart, right? Every piece of electronics is kind of growing a smart stream, right? And on the other hand, there's this idea that, well, in fact, you only really want to have your credentials in one place. You don't want to re–create your credentials in tons of different places. And there are, I think, two operating approaches to that. One is to say, "Well, let's use this kind of convergence idea, this idea that the phone can really drive everything." The other is this ChromeOS type of idea, the idea that everything is just powered by the web, so when you log in to any device, you're suddenly logged into the cloud and have a similar experience across multiple devices. I think those are both very powerful ideas. I think there are both advantages and disadvantages, notably in a PRISM era. If all of your devices are dependent on the cloud, well then you have no confidentiality whatsoever.

Right, with the news of PRISM and NSA surveillance, there's been an upsurge of interest in open source as a way to maintain privacy. Do you share the opinion that open source is better for an individual's privacy and security and do you see Ubuntu benefiting from that increased interest in open source?

Well, it is a tantalizing promise, right? There's lots of critical research that shows a higher baseline quality of code when it is subject to public scrutiny. I think it is also being naïve to think that the mere open source–ness of something is a guarantee of security. Particular tools depend very much on practices associated with them.

We've actually long been asked to do a human rights version of Ubuntu, which guarantees the confidentiality of information stored on the disk and so on. The problem with that is it's very hard to imagine a way of doing that that couldn't potentially be defeated. And the thought of giving someone a false sense of security is very disconcerting. Personally, I use Ubuntu and I know a lot of security–conscious people who use open source as a matter of principle. So I think like all things, it would be wrong to hold up a panacea of perfect security just through open source. But I think if you are serious about this stuff, then yes, you probably are a fairly heavy user of open source.

Looking ahead, what's your goal for Ubuntu in the next two to three years?

On the client side, I think it's clear: We have to make a material impact in the mobile scene. Internally, there's a tremendous amount of satisfaction that Ubuntu is on more than 20% of Dell PCs globally. We've celebrated a critical launch of Ubuntu on HP PCs in South America, for example. There's a real sense that we've cracked the goal that was set in the open source community for having an open source platform become mainstream.

For myself, I think we have to be extremely mindful of the fact that the future is very, very dominated by mobility and by touch. So I feel a victory only in a nominal sense, to become a major player on the desktop just as the desktop itself is fading from its central position as the backbone of personal computing. So that's why so much of my attention and the team's attention is focused on mobile, because we know we have to be right at the center of that unification of personal computing.

Smartphones are extraordinary. They're a sort of anything machine; you can do anything on them. But at the same time, increasingly, people say that they don't expect they'll only carry a smartphone around. The reason for that is within the first five years of smartphones, people were astonished at how much they could do on a phone that they had not previously thought they could do on a phone. And there's a natural tendency to extrapolate that: "So therefore, in five years' time, I think I'll only carry a phone." But if you look at the statistics, as people have been asked that question over the years, the number who think they will only be carrying a phone in the future is dropping. And the reason, I think, is that folks are realizing that doing email on the phone is only good for certain kinds of email, and for real productivity you need the traditional form factors of a bigger screen and a keyboard.

And so the key question then becomes how you combine those ideas: how you give people the mobility they're immediately attracted to, but also the productivity of those bigger form factors. That's what's at the heart of convergence.

So to answer, it's a big game. And it would be trite to be blustery and say, "Well, let's charge into this territory and kick some ass." But I think we have a very credible offering. We're really looking forward to the 1.0 of the mobile story in October. And in two years' time, if we have made a consumer impact in the phone market, if we can take 15% or 20% of the Android share, then I think we'll be making a tremendous difference in the industry.

If You Want To Sell Digital Products, Bundle Them With Physical Ones

Digital products have many benefits––frictionless buying, immediate delivery, and no shipping. But marketing isn't one of them. Digital products have no glossy product shots, no features in gift guides, no celebrity photo–shoot endorsements. So increasingly, companies are pairing digital downloads with physical goods to try to piggyback on their go–to–market strategy. But is it working?

Digital products do almost nothing to sell physical ones. A customer looking at a shelf full of headphones, of course, might pick the ones with a 60–day Rhapsody trial attached, but certainly not because of Rhapsody. Presumably, someone looking for new headphones already has music they're anticipating pumping through them.

Physical products, however, can sell digital content, as Apple showed with the initial iPod/iTunes relationship. Beats is the latest company looking to take its market lead (70%) in the premium headphone space and turn it into online success in streaming music, something no one, arguably, has pulled off yet. Spotify, with the most awareness, still hasn't appeased both customers and artists to the tune of financial stability.

Rhapsody claims the new partnership with 50 Cent's headphone company SMS is not a reaction to Beats buying the streaming service MOG. "If we were going on an offensive, this is not the tack we'd be taking," says Jaimee Minney, head of PR for Rhapsody. As much as the timing of the Rhapsody–SMS deal closely aligns with the MOG re–branding and the launch of Beats' own streaming music service, this type of deal is par for the course. "In recent history, we have inked similar deals or bundles with Sonos and Jambox," says Minney. Clearly, though, we're not yet a society that values digital products as much as physical ones: HTC tried to lean on Facebook, Best Buy and Walmart are trying to lean on CinemaNow and Vudu, Roku leans on Netflix, and Sonos leans on most digital music services.

Pairing a digital product with a physical one seems like a natural fit, but most of these partnerships don't actually work out. HTC's Facebook phone was effectively discontinued within its first month on the market. Best Buy and CinemaNow have not seen the success they were expecting, and clearly Napster's attempt to partner with MP3 players didn't pan out.

How do you determine the value of a streaming music service partnering with a headphone manufacturer, though? Just as you might expect: by the number of headphone buyers who end up subscribing to the service. Breaking that down, Rhapsody's Minney says, "We calculate the economics of all these deals as a subscriber acquisition cost. Our economics (which we do not disclose) assume that it costs us a certain amount in marketing/trial to convert a subscriber, and we model our trials accordingly. This is not a loss leader, nor is it any different than the kind of co–marketing deals we do with anyone."

Rhapsody's theory that "Anywhere people are spending a premium to listen to music, we want to be there with our premium music service" makes sense, but unless you're Netflix, the bundled digital service is still treated as a second–class citizen.

[Image: Flickr user Kelly Teague]

What Hackers Should Know About Machine Learning

Drew Conway is the co–author of Machine Learning for Hackers and must be one of the few data scientists out there who started his career working on counter–terrorism at the Department of Defense. FastCo.Labs talked to him about algebra, GitHub, and the ugly side of Machine Learning.

Why should developers learn Machine Learning?

I don't necessarily think that every developer should learn Machine Learning. Machine Learning as a discipline is interested in the application of statistical methods to decision making. If your job as an engineer is to build large systems that have nothing to do with that, then I wouldn't say that you should learn it. The process of learning it can improve your overall statistical literacy and I would say that's a general benefit in life.

Why did you write the book?

We were familiar with the reference texts around Machine Learning and all of those reference texts require a pretty substantial foundation in linear algebra, calculus, and statistics. We wanted to create a reference book which was more geared towards practitioners who were used to thinking algorithmically. We wanted one that didn't require a lot of math, didn't require a lot of statistical training.

What are the biggest gaps in the average hacker's knowledge when learning Machine Learning?

A college intro–level probability class, so that you learn how different probability distributions are reflected in the real world. Why do we care so much about the normal distribution? What is it about the normal distribution that's so fundamental to the things we observe in nature, versus a binomial distribution? What kinds of processes and phenomena does each represent? Then, in terms of actually doing the work, linear algebra and matrix algebra. You get the probabilistic stuff so you can understand the framework of how things work, and then the linear algebra and matrix algebra is often how it gets done in software.
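
To make that intuition concrete, here's a minimal sketch in Python (our illustration, not from the book, whose own examples are in R) comparing samples drawn from a normal and a binomial distribution:

    # A quick look at how two distributions differ when sampled.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)

    # Normal: sums of many small, independent effects (e.g., measurement noise).
    normal_samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

    # Binomial: counts of successes in repeated yes/no trials (e.g., 100 coin flips).
    binomial_samples = rng.binomial(n=100, p=0.5, size=10_000)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.hist(normal_samples, bins=50)
    ax1.set_title("Normal(0, 1)")
    ax2.hist(binomial_samples, bins=50)
    ax2.set_title("Binomial(n=100, p=0.5)")
    plt.show()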

Someone who is a professional scientific researcher probably understands all the stuff about cleaning up data––that's their bread and butter––whereas a professional software engineer understands how to build from the ground up but hears less often "Here's some data. I need you to tell me what's going on." The "here's some data" part is the really ugly part of cleaning it up, creating a matrix out of it, etc.

There's a curiosity that's required to do this stuff: looking at a data set and thinking about what is an appropriate or interesting question to interrogate with the data––that exploratory data analysis step. I have a new dataset, I'm just going to sit at the command line and look at the density distributions and do some scatterplots and see what the structure of the data is. I think that requires some practice but also some intuition about the data generation process. Of course, if you don't have any training and have never done any of this stuff before, it may seem a bit opaque at first. For most of the developers I know who have no background in that, it can be a bit intimidating in the beginning.
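
Concretely, that first exploratory pass might look something like this (a minimal sketch of ours; "data.csv" and its column names are hypothetical stand–ins, and the book itself works in R):

    # Load the data, summarize it, then eyeball distributions and structure.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("data.csv")  # hypothetical dataset

    # Means, medians, ranges, and counts in one shot.
    print(df.describe())

    # Density distribution of a single variable.
    df["age"].plot(kind="density")
    plt.show()

    # Scatterplot to see structure between two variables.
    df.plot(kind="scatter", x="age", y="income")
    plt.show()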

What are the differences between doing a Machine Learning project and a development project?

Data analysis as an exploratory endeavor should be the first part of anything. You should never go into a project and say, "The thing that I want to do is classification, so I'm always going to run my favorite classification algorithm." For the first half of the book we talk about "Here's a dataset, here's how to clean it up." The chapters that John Myles White wrote on means, medians, modes, and distributions are always the things that you should do in the beginning. We want to hammer home that it's not just input–output. Input, look around, see what's going on, find structure in the data, then make the choice of methods. And then maybe iterate on a couple of them. It's very cyclic. It's not linear.

A data scientist has a very different relationship with the code than a developer does. I look at the code as a tool to go from the question I am interested in answering to having some insight, and that's sort of the end of it. That code is more or less disposable. Developers are thinking about writing code to build into a larger system. They are thinking about how to write something that can be reused. People who do large–scale Machine Learning, people at Google and Facebook, think in a similar way to a software engineer, in the sense that lots of interesting Machine Learning tools and methods simply don't scale to the web–scale datasets those companies are dealing with. So at the beginning their process is more like: What is the limited set of tools I have that can actually scale up and be useful for this question?

There are different levels. There's exploratory research data science which many people coming into jobs from academia do, and they are building tools which are more like minimum viable pieces of technology. In some places there are people who do that, but then have to figure out a way to optimize that at large scale, and then there are the people who work on production systems who are writing code which is going to be used all the time as part of the product itself.

Do we need a GitHub for data analysis?

The real limitation of GitHub is that it's not meant to be like S3, where you can store a ton of data. The data limitation is a significant one. In reality I think it's fine for the data to be separate from the actual analytical code. The thing that I think was missing for a while was an appropriate way of conveying results. Most people who do data analysis eventually get to the point where they have a graph or something they want to show you. But now, with GitHub Pages, people do that all the time. If you look at Mike Bostock's stuff for D3 (a JavaScript library for visualization), it's all on GitHub; he uses GitHub Pages and does a great job with it. GitHub really gets you 80% of the way there. The data portion is the real limitation, but that's okay, because everyone is going to want to use a different type of database, a different data structure, for their project.

What would you add in a new edition of the book?

There are lots of new methods that we would certainly add. One of the things that we don't do at all in the book is ensemble approaches to Machine Learning––combining multiple methods. We also don't talk about model fitting and evaluating the quality of models. Those are certainly things we would do in a second edition. Part of the reason we didn't do them in the first one is that they are more intermediate–level topics and we were going for a novice audience.

My thinking has evolved on presenting results. The way I think about presenting results now is always in the browser, as an interactive thing. There's a tremendous amount of value in giving the audience the ability to ask second–order questions about what they are observing, rather than just first–order ones. Imagine the thing you are looking at is a simple scatterplot and you see one outlier. A first–order question would be: who is that outlier? If you have an interactive graphic where you can hover over the dot and it tells you who that is, then the second–order question becomes: why is that an outlier?
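
Conway points to D3 for this kind of work; purely as an illustration (and staying in Python rather than JavaScript), here's a rough sketch of the same idea using Plotly, which renders an interactive chart in the browser. The data and column names are invented:

    # Hover over the outlying dot to answer the first-order question: who is that?
    import pandas as pd
    import plotly.express as px

    # Hypothetical data with one clear outlier.
    df = pd.DataFrame({
        "name": ["Alice", "Bob", "Carol", "Dave", "Eve"],
        "hours_online": [2.0, 3.0, 2.5, 3.5, 14.0],
        "pages_created": [5, 7, 6, 8, 40],
    })

    # Opens in the browser; hovering a point shows that person's name.
    fig = px.scatter(df, x="hours_online", y="pages_created", hover_name="name")
    fig.show()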

You can get pretty far with Machine Learning for Hackers, but our hope is that those who want to move from hacker to real Machine Learning engineer will build on the fundamentals––that they'll go out there and read Bishop and Hastie.

[Image: Flickr user Dustin McClure]

Oh, If The Walls Could Compute...

Small isn't always beautiful in electronics, it turns out...and neither is newer always better: An innovation from Princeton University takes ideas from an invention of 1920s vintage and combines them with really large–scale circuitry to turn huge areas of plastic into electronic devices, including––critically for our wireless world––radios.

When you think of a circuit you probably imagine either something printed in copper on a motherboard, a collection of wires on a breadboard or perhaps the microscopic tracks of semiconductor on a chip.

But recent innovations have seen various research teams placing circuits on much more exotic, perhaps even bendy materials...often as part of a move to improve sensor tech or display screens. Printing circuitry on plastic is part of this revolution, and it's been going on for a while...but with limited success. That's because when you try to fashion a conducting circuit onto plastic you often need high temperatures––conductors tend to be metals, after all––and this can deform the plastic substrate. That's where some super clever thinking from Naveen Verma's team at Princeton comes in.

Essentially the Princeton research improves on recent inventions in creating not just circuits but actual electronic components on plastic substrates. These came from another Princeton team, who realized they could create electronic structures, like thin–film transistors, using amorphous silicon instead of crystalline silicon. Crystalline silicon is the hard, hyper–precise gray material that you'll know as forming part of traditional silicon chips. Amorphous silicon is actually a lot more randomly arranged, and the innovation in creating a thin–film transistor out of it comes from the fact that it can be produced at temperatures closer to 300ºC, instead of around 1,000ºC for crystalline silicon. That temperature is just about tolerable for plastics. But to make this work, the scientists had to sacrifice some of the electrical properties of the material. This made plastic–backed transistors possible, but the devices were unsuitable for more complex electronics needs––for example, in creating a radio.

Verma's team took this research and combined it with some 1922–era research by the chap who invented FM radio, Edwin Armstrong. In his time the transistor was a fantasy, and he and fellow engineers and scientists had to perfect circuit designs using valves––powerful, but very tricky analog components. The Princeton research has co–opted one of these designs, called the super–regenerative circuit, into the new plastic–printing technology. And lo and behold if it doesn't make amorphous silicon transistors work in a much more reliable way.

Why is this tech, which sounds like so much arbitrary stuff, exciting? It's actually huge. As in actually "huge." By allowing the creation of large–scale flexible radio circuits and sensors on a plastic backing, the team's innovation could create very cheap, very large pieces of connected electronics. Think of room sensors embedded in the wallpaper that scan for occupants, then wirelessly share that data with a security system or a home automation device (possibly requiring no independent power, if a thin–film solar cell is part of the plastic–backed circuit too). Or how about a totally invisible comms system that could send data across large buildings at will, without being troubled by the normal sorts of radio interference that affect Wi–Fi.

One application could be structural monitoring of large components, such as load–bearing plates or joints in bridges or buildings. Modern ways of assessing the integrity of these objects do work, but they're often imperfect and stunted in effectiveness by the sheer size of the object in question. Think of bridges that automatically report on cracks in huge joists, or aircraft wings that radio in when they detect a structural flaw.

[Image via Flickr users: Adreson Vita Sá, JamesIrwin, dobroide]

Is The Web Our Path To Immortality?

Today's News Scrum Discussion: "Is It Possible to Achieve Immortality Online?" by Keith Collins on Slate.com

One of the things Martin Manley had to consider before committing suicide last week was how long a website could last. This, after all, was a big part of the retired newspaper reporter's plan to take his own life: to leave behind a website comprising more than 40 pages in what he called "one of the most organized good–byes in recorded history."

There are people in medical science today like Aubrey de Grey who believe the first person who will live to 150 has already been born. That is to say, medical technology is advancing faster than our life expectancy, which will mean at some point––theoretically speaking––it will be able to outrun death.

This spells nothing but disaster for the concept of "progress" as we know it. The one benefit of "natural" life expectancies is that they remove roadblocks (i.e., aging citizens) for younger generations to change and adapt the world to their needs and tastes. If I had my druthers, you wouldn't be able to vote past the age of 80, just like you can't vote under the age of 18. At certain points in the human life curve, your interests are not aligned with the bulk of humanity that is working, living, eating, and buying on their own.

The idea of physical immortality is just as threatening as––but far less immediate than––the idea of informational immortality. Today, we forget relatively quickly what prior generations thought and believed, which is somewhat freeing, in that subsequent generations are relatively free to invent their own path.

Sure, we may lose something of value when older generations die, but much of the worthwhile stuff is later resurrected. One example is the "green" movement, an outgrowth of hippy–leftist preservationism that started in the '60s and '70s but went somewhat fallow in the '80s and '90s.

If I ran a web hosting company, I would tinker with the idea that every web page have a parking meter on it; if people want the page to stay up, they can donate money to keep up hosting. If not, the page goes into some vault or archive. We can't keep adding data to our servers as if there's no cost, and the old stuff should be the first stuff to come down. ––Chris Dannen


To be remembered only as one of many who asked, in a spectacular way, if we could be remembered online forever seems to be the most ironic form of meta the Internet has seen.

Since the issue of keeping current and future data around forever has become more prominent in recent years, it's more likely to actually happen. But as the article asks, will it matter? "When is the last time you saw a web page from 1983? Much of the content of the early Internet has been purged or taken offline, and even the Internet Archive's Wayback Machine only goes back to 1996."

There's also the issue of data overload. By some accounts the NSA is capable of monitoring 75% of U.S. Internet traffic, but just because you have the data, are you able to parse it and find what you're actually looking for? Just because data will most likely live on forever doesn't mean the information will be accessible. In the case of Martin Manley, he'll likely be remembered by a few extra people, but probably mostly by close friends and relatives––which would have been the case even without the tragic end. ––Tyler Hayes


The human desire to be immortalized through time is fundamental, and it makes sense that digital archives would be an appealing option for establishing an eternal life. In response, social media sites, like Facebook, have already created ways of commemorating someone after they die and fixing their profile in place. And Martin Manley's website is another example of an effort to live on online.

But forever is a long time and like Walmarts built on burial grounds or tombstones that have fallen apart, it seems clear that the remnants of our online selves will eventually disappear in server migrations, data purges, or company evolution/demise. As Tyler points out, data that are recorded but inaccessible are effectively lost.

Collins writes, "We are, as a species, legacy–builders. But in our quest to leave our mark in the Information Age, we've begun to look beyond the finite, beyond the physical, and into the digital space." But really I think people have always vied for digital permanence, even before the concept of "digital" existed in its modern form, because the ultimate immortality has always been to be remembered in the collective human brain trust. I'm gonna quote The Iliad now, because Achilles was a flawed individual/pastiche who wanted to be remembered forever and actually has been. In Book IX he says:

O'er me double destinies impend:
That should I at the siege of Troy remain,
Immortal glory will my portion be,
But never shall I see my home again.
But, on the other hand, should I return,
Glory I lose, but length of days is mine.

If the Internet had existed, he definitely would have wanted his website archived. But whether or not it would have been significant and seemed worth saving in the long run would have been subject to the same random chance that made Homer's works so iconic. Also, the Embassy to Achilles would have probably happened on Google Hangouts. ––Lily Hay Newman


We may want a digital legacy, but would we even use it? The future of the Internet is not a broad meadow––it's a landscape so large that the area beyond its curvature is inestimable, and it will be a million treadmills of content that users hop between. Even the most conservative Twitter and Facebook users know the futility of "keeping up" with tweets and the News Feed. It's the "lifestream" that David Gelernter predicted in Wired 16 years ago and claims is now arriving to replace the public's notion of a flat Internet of static, eternally retrievable content.

So it's puzzling to assume that our lives will be forever present on an ever–refreshing content landscape. Are you still worried that people will find your Xanga/LiveJournal snapshot of high school? Seriously––only the most infinitesimal fraction of celebrated people's graves are visited, and after several generations, even familial descendants will likely have moved far from burial sites, and the graves are forgotten.

The graves remain, as may our websites, if preparations are made. But why assume that our digital world, with its ever–increasing glut of content, will even make our "digital legacies" findable as Google's indexing algorithms game relevance in more satisfying ways? It may require an entirely different search engine system to forcefully break away from relevance–and–immediacy–fixated indexing into a search for an ancestor's disparate pieces of Internet footprint that haven't been indexed in decades.

As Lily notes, immortality is also a product of chance. If Homer's contemporaries had left behind millions of Pinterest boards, status updates, and Instagram photos, would we care enough to sift through them? Or would we bless the artifact that rose to the top, extract meaning from its window to the past, and move on with our lives? ––David Lumb

[Image: Flickr user NAPARAZZI]

How Al Jazeera Botched Its American Network Launch

With its promise to counter–program the shallow infotainment–based American TV news landscape with in–depth, quality news, Al Jazeera America (AJAM), which launched yesterday, was supposed to answer the prayers of news–starved American audiences. Instead, when it went on air alongside sister networks Al Jazeera Arabic and Al Jazeera English, it left some viewers in the dark.

The idea behind Al Jazeera America is that it will apply the brand's hard–won reputation for high–quality international news, which has spread to the English–speaking world via sister network Al Jazeera English, to American stories. And before the network launched, that vision seemed to be on track. By acquiring Al Gore's Current TV network and bringing respected former CNN anchor Soledad O'Brien on board for its flagship program, America Tonight, AJAM was the subject of quite a lot of hype, especially among media critics.

But yesterday's launch ran into one major hitch: Al Jazeera's devoted viewers in the U.S., who have for years relied on online streaming of the company's Al Jazeera English channel for their international news, found themselves unable to watch Al Jazeera English on the day that AJAM became available in the U.S. As of yesterday, Al Jazeera English serves the entire English–speaking world except for the U.S.

What went wrong? Al Jazeera declined to comment on why they pulled Al Jazeera English off the digital airwaves, but speculation is that they were likely required to do so by the few cable companies willing to carry the network.

"It's certainly a loss for the viewers," says Matt Wood, a lawyer and the policy director for Free Press, a leading national media reform organization that blends policy work with grassroots activism. "We here at Free Press and other consumer advocacy groups certainly think that viewers, whether they're on TV or online should be able to get information from as many sources as possible."

The decision smacks of the anti–competitive American media environment that has plagued cable television in recent years. With the web becoming the primary vehicle for video content for more and more viewers, it's clear that traditional television providers will do anything to cling to the vestiges of their crumbling oligopoly.

"We want to have that kind of open marketplace where you have cable competing against online video rather than cable controlling online video as they want to do," says Wood, who has himself worked on cable licensing agreements.

Cable companies know that the web is the future of their business, which is why many now offer their own exclusive streaming services that users gain access to when they subscribe to the cable network. By piggybacking their streaming product onto their existing cable subscription, these companies hope to keep lucrative subscription fees without losing users to online services that could offer better deals on smaller packages of content. But Wood says the practice may not be fair.

"Anti–trust has this concept of 'tying,' which means that when you have market power in one area you can't use that market power to create power in another area," Wood explains. "That's the same theory that Microsoft got rung up on when anti–trust enforcers said they couldn't use their operating system power to grow a monopoly in the browser market. That's the same behavior we see here when cable operators say you can only get online programming from them and only if you buy a traditional TV package from them."

In other words, we need to allow free and open competition between the web and TV to remedy this situation––something that, given the behavior of cable companies, needs to be legally enforced. Given this situation, AJAM may be more a victim of current market conditions than anything else. Determined to get onto American television sets, the network struck a devil's bargain: pulling Al Jazeera English from the web.

But given the choice between web–only or TV–only, it's baffling and even counterintuitive that the company would choose TV. Al Jazeera executives have explicitly stated that they hope to attract younger millennials and college–age viewers. By choosing to launch on TV instead of on the web, that's exactly the audience they're likely to alienate. Cord–cutters (people who have no cable TV subscription) are overwhelmingly young and educated, precisely AJAM's target audience.

Perhaps they assume that this same young, educated, tech–savvy audience will have no trouble setting up VPNs to route their web traffic through Europe to get their Al Jazeera English fix. Twitter is already abuzz with disgruntled viewers discussing the fastest VPNs that they can use to watch Al Jazeera English. But betting on users figuring out a workaround feels like a big gamble that could cost Al Jazeera the loyal U.S. viewer base it built online.

It's a remarkable about–face for Al Jazeera, which for years has been hailed as an innovative, digital–native news organization. The network has won several Webby Awards, including one this year, which they ironically used as an opportunity to hint at the cable–only launch of Al Jazeera America.

The decision to take a circa–1999, cable–first distribution strategy with AJAM feels all the more anachronistic in the wake of recent attempts to push web/TV convergence forward by tech companies. Devices like the Chromecast and the new line of TiVo set–top boxes, which aim to disrupt the traditional barrier between the two, would have been natural places to launch Al Jazeera America for a young, digital–savvy audience.

Al Jazeera may very well succeed at building a new audience in the U.S. or finding ways to reengage its shunned online support base. In the meantime, it feels a bit like they're shooting themselves in the foot by going cable–first. One can only hope that AJAM sorts out this problem, because the U.S. news landscape could certainly use more hard–hitting, investigative reporting.

I was told, for instance, that last night's premiere episode of Faultlines on AJAM exposed Gap's and Old Navy's use of sweatshop labor in Bangladesh. I wouldn't know, though. I don't get Al Jazeera America, and I haven't set up my European VPN yet.


The Total Cost Of TiVo's New DVRs Is Insane

Remember when TiVo was the future of television, an example of disruptive technology at its finest? It made video watching different––it gave us all independence from broadcast schedules and advertising. Friends who had TiVo were cooler than we were.

But 1999, the year TiVo launched, might as well be a millennium ago. DVRs have become commodity products and streaming video over the Internet is taking over an ever–larger portion of the on–demand market. Nonetheless, the DVR, and TiVo in particular, remains a major force in video watching, which is why there are a lot of people and reviewers getting really excited about TiVo's newest Roamio DVRs.

The consensus: the ultimate in all–inclusive DVRs, really great if not quite perfect. But there's an asterisk: price.

TiVo remains the DVR leader in technology and user experience, but that's not the issue. It's hard to justify the purchase of a not–inexpensive Roamio and paying $15/month for TiVo service (which has always been TiVo's business model). Choosing to add all the other digital services to your TiVo, like Netflix, Spotify, Hulu, and Amazon, can add up to an impressive monthly bill.

Even for the low end Roamio model, which can get a user in the door for $199, the extra fees make the experience incredibly expensive.

(Entry level) TiVo Roamio

* TiVo DVR $199
* TiVo Service $14.99/month (x12)
* Netflix $7.99/month (x12)
* Spotify $9.99/month (x12)
* Amazon VOD $ undetermined
* Cable service ~$80–200/month

Total: ~$395.64/year in subscriptions + $199 DVR (before cable service and any Amazon VOD purchases; see the arithmetic sketched below)
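
For the record, here's that arithmetic as a quick Python sketch of our own (cable and Amazon VOD are excluded from the total because they vary):

    # Back-of-the-envelope annual cost of an entry-level TiVo Roamio setup.
    # Cable (~$80-200/month) and Amazon VOD purchases are excluded: they vary.
    monthly_fees = {
        "TiVo service": 14.99,
        "Netflix": 7.99,
        "Spotify": 9.99,
    }

    annual_subscriptions = 12 * sum(monthly_fees.values())
    hardware = 199.00  # entry-level Roamio, one-time

    print(f"Subscriptions: ${annual_subscriptions:.2f}/year")           # $395.64
    print(f"First-year total: ${annual_subscriptions + hardware:.2f}")  # $594.64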

Moving up to the Roamio Plus adds another $200, and for those never leaving the vicinity of the television and getting the Roamio Pro, you'll add an extra $400.

The price alone makes a TiVo investment seem absurd. Even worse, how obsolete will that Roamio seem when the Xbox One, the PlayStation 4, or some future, market–altering Apple living–room play comes out? Even though new DVR boxes are marketed as several set–top boxes in one, unless it's the only box connected to your TV, the point seems moot.

TiVo has always been a premium product, but the problem it solves only exists because cable companies care so little about what customers want. Almost 15 years later, there are lots of less expensive and more convenient alternatives to set–top boxes. You'd think TiVo would see this and find a way to "transition" its business model.

But honestly, I'm not exactly sure what options are available aside from charging more for the hardware, which is likely a non–starter. Continuing down the current path with the new Roamios, no matter how good the product, isn't going to end well with the old business model.

Could This New Set–Top Box Kill The Smart TV?

Gilles BianRosa is the cofounder and CEO of Fan, a company that originally launched two years ago (under the name Fanhattan). Its first product was an iOS app, which allowed users to search and discover all there is to watch across various content providers, like cable TV, Hulu, and Netflix. Now the company is getting ready to launch Fan TV, its first set–top box, and thinks it's got it right where others like Apple and Roku have dropped the ball.

First, how would you describe Fan?

Fan is the simplest way to discover, watch, and share the movies and shows you love. That's a pretty broad statement so let me bring it to life with a real example.

Say you love Game of Thrones and there's one episode from Season Two that you missed. You can go to Fan on your iPad, iPhone, or computer and by simply searching for the title you'll instantly see that you can watch it on HBO Go, Xfinity, Amazon Video, iTunes, or Vudu. Select whichever option is best for you and within seconds you have it streaming. Alternatively, if you want to watch it later, just add it to your WatchList and Fan will notify you as soon as it's available. It's that simple.

Now, say you've seen every episode of Game of Thrones because you're a die–hard fan and you're looking for something new to watch until it's back on the air. No problem. You can browse other shows of the same genre, get recommendations from your friends, or read critics' reviews to inspire you. All of the information you need is in one place so you can stop searching and start watching what you love.

Fan TV looks pretty cool. The hardware design is pretty slick. But what does it actually do?

I'm so glad you like it. If you think it looks cool, I think you'll find that what it does is even cooler. Fan TV is a next–generation set–top box that combines live TV, cloud DVR, video–on–demand, and streaming services in one powerful, little device with a revolutionary touch remote. We designed the hardware over a period of 24 months in collaboration with Yves Behar and his team at Fuseproject. The base, which is so compact it can fit in the palm of your hand, is the entire set–top box. You connect it to your TV screen with an HDMI cable, add Internet via Ethernet or Wi–Fi, and plug it into the wall for power. You're done with setup in two minutes. The remote is completely intuitive and all touch. It rests on top of the box but is battery powered so it never needs to be charged. Fan TV is the outcome of years of creative and technical design, testing, and iteration. We figured if everything else in your life is streamlined, your remote and set–top box should be no exception.

Tell me more about Fan TV's remote. A lot of people are saying it's like no remote they've ever seen.

No one loves the current 90+ button remote monstrosities. Most of us don't even know what all of those buttons do. The Fan TV remote is a touch remote with zero buttons––the first of its kind. You can navigate your TV with a swipe or a tap––much like you would by touching a tablet screen or a smartphone––but from your couch. The Fan TV remote lets you keep your eyes on the screen at all times.

What is the hardest part of creating a solution like Fan? The hardware? The software? The content deals?

With Fan TV there are a lot of moving parts that are intertwined. All of these things are a challenge: Creating the hardware to match our vision, engineering the software for a service that is so extensive, and putting into place content deals to bring this service to life. I would say the most important aspect––not necessarily the hardest––is to continue blazing a trail in this industry. We are shaking things up and doing things differently because consumers are demanding something better than what's currently in the market. Everything from creating the first buttonless remote to securing key partnerships is in service of that objective.

Let's talk content. Content discovery is becoming a hot topic in the smart TV/streaming video arena. Is content discovery going to become the differentiating factor between services? Do all services basically offer the same content?

Actually, only rental or purchase "stores" like iTunes and Vudu have fairly comparable libraries. Subscription services like Netflix, Hulu Plus, HBO Go, Amazon Prime, and Redbox Instant have VERY different offerings. Similarly, ad–supported models like linear television––or even Crackle––have unique content and schedules. We call it the "Balkans of content," and it's anything but commoditized.

The digital music industry played out more or less like this: illegal downloads, then the iTunes Music Store, then streaming music services like Pandora and Rdio. Do you see the digital video industry playing out in the same way? Do people want to own or rent, via subscriptions, their content?

That's an interesting question. There are many similarities, and digital subscriptions are clearly more popular than digital purchases, but I see a key difference: You can access almost all the music you want with only one service (whether subscription, or store). I don't think this will ever happen with video. First, the way movies and shows are made, you will always have "windows" that the content will travel through, such as movie theaters, then purchases and rentals, then subscriptions, then ad–supported channels. There's no other way to make the big–budget blockbusters that people want.

Second, the studios––benefiting from hindsight from their music counterparts––will likely keep the fierce competition for rights alive, so it's unlikely one player would be able to aggregate all the rights early on, the way iTunes did with music.

Let's talk about smart televisions. Fan TV is a media streaming box. Why not make a full–fledged "smart tv?"

Fan TV is more than a media streaming box––it's the one device that does it all. Plug it into input 1 and you'll have everything you need. You can get rid of that clunky set–top box, the cumbersome DVR, the stack of streaming devices, and the pile of remotes. Fan TV will do the job of all of those.

The reason why smart TVs fall short of people's needs is similar to why you don't find DVD players in TVs: "2 in 1" products only work when the technology cycles of each component are similar. People might upgrade a TV every 5–10 years, yet the technology behind platforms like Fan TV evolves at the pace of the phone industry: every 24 months.

Also, it seems like no one has gotten the "smart tv" right just yet. Powerhouse Apple is still holding out until they can figure it out. To you, what is your idea of a "perfect" smart TV? What should it do?

Using a smart TV should be effortless and intuitive. It should know what movies and shows are available to you across all the services and devices you've already paid for, and be able to help you find new entertainment and new services quickly and easily. A smart TV should fade into the background and put your entertainment first. Both the hardware and the software have to work perfectly together. They are far from this, and are getting farther in my opinion.

Also, consumers do not want their entertainment locked into one platform. What if I have a Samsung phone, an iPad, and an LG TV? Fan is platform agnostic, and works with the services and devices consumers have. We see the Fan TV device as an endpoint––we think by far the best––to the Fan service.

Designing a UI and UX for a device with a 42– to 70–inch screen is a lot different than designing a UI and UX for software on a device the size of an iPad or iPhone. What considerations did you have to take into account when designing the UI for Fan TV?

Here is a little secret: We designed Fan for the TV first, and then adjusted it for the iPad! We then learned a lot from the Fan iOS app, including how people discover, watch, and share different kinds of video content, and applied that back to designing the Fan TV experience in the 10–foot UI of your TV.

Before I let you go, tell me one surprising thing you've discovered during your research and development efforts.

What we've gathered from consumers is that they are tired of diving in and out of a wall of apps to find their favorite shows. It eats up time and doesn't make sense on a larger screen. It's one of the pain points we've solved with our service––there's no "wall of apps." What you want to watch is aggregated right in front of you so that you spend less time searching and more time watching––which is how it should be.

Why Your Startup's Culture Is Secretly Awful

Startups love to talk about their culture and use it as a selling point, but product manager Shanley Kane contends that "culture is made primarily of the things no one will say." Every team's culture consists of a submerged set of beliefs and values, priorities and power dynamics, myths, conflicts, punishments and rewards. Kane wrote a blistering critique of the official line on startup culture in a blog post called "What Your Culture Really Says" and went on to slaughter a few more sacred cows in "Five Tools for Analyzing Dysfunction in Engineering Culture." Co.Labs spent a merry hour quizzing her about power, unicorns, and founder myths.

Why did you write "What Your Culture Really Says"?

When I was in school, I took a number of cultural studies classes. I studied a little bit of philosophy and some gender studies. I had a very strong interest in that field but never thought it would be applicable to the real world. I was surprised, when I started working, at how much of that stuff was relevant. I noticed in Silicon Valley, and the tech industry in general, that a lot of people were giving these talks about what their culture was, and it was really superficial and focused on the privileged aspects of the company, like free food and massages and all that stuff. I thought this was pretty destructive in terms of telling people that this is what culture is. It's much more serious and much deeper.

So what is culture, then?

Culture, especially a team culture or a technical culture, often involves how you choose or prioritize between multiple things which are good. The classic example is you are trading off shipping something quickly versus spending more time on it, making sure that it's perfect before you launch it. Those are both good things. Culture often has to do with which one of multiple good things we think is most important. Sometimes overemphasizing some of those good things can have a negative impact. One of the reasons that it can be difficult to unpack culture is because it involves making these very difficult choices between lots of things which, on their own, are good.

How is power a factor in startup culture?

Power dynamics are so critical to understanding your culture. One of the things which makes it especially difficult to examine startup culture is that we (in the startup world) are so against the idea that power is functioning inside our workplace at all. You see all these ways of operating startups that are based on not having managers, so there's no traditional power structure. It's part of our self–esteem in a way. It's part of our core identity that we say we don't have a traditional corporate structure and don't have these negative power roles in the workplace.

The problem is that power is an aspect of every human interaction, even if you don't have managers. When people say "we got rid of managers" they think "we don't have to think about, or deal with or critique power in the workplace." In places where there is no formal hierarchy, you actually have to pay more attention. We are really taught not to question power, not to question authority, not to critically examine power in the workplace. Fully addressing power in the workplace means that we have to develop healthy safe mechanisms and spaces to discuss it.

Are deadlines a form of management "microaggression"?

On the one hand, you absolutely have to have deadlines. Things need to ship. Things must move forward. But I have absolutely seen scenarios where deadlines are used as a power play to set up teams to fail. You have to look at who is coming up with the deadlines. Has the engineering team bought into them? Are they realistic? What ulterior motives might the people setting a deadline have?

In a negative power scenario, deadlines often come from outside the team that's actually creating the technology. Often the people who set deadlines don't understand the process of building software and all of the things that can go wrong. When deadlines get really destructive is when people outside the engineering team use deadlines to try to influence the engineering team, since they don't have any other means of doing so. Marketing departments who are like "We don't know how to work with the engineering team so instead of finding ways to productively work together, we are going to give them some date to deliver to make them move faster in order to incentivize them."

Who are the heroes of founder fairy tales?

At the risk of getting too simplistic, there are two types of people that are very much mythologized in the culture. One of them is someone who went to MIT or Harvard and has a strong engineering background. That's a very specific economic and often racial set of privileges.

But you also have this other type, which is the idea of the high school or college dropout who, despite that, has been programming since they were very young, is brilliant, a genius. It's an underdog story on the surface. Person dropped out, they didn't go that traditional path but have managed to make something of themselves. However, below the surface, these are people who were able to have access to well–paying jobs, they were people who had computers when they were very young, they had the time and access to resources to develop these talents. The high school dropout story is always the story of a white guy. You never hear about women who drop out of high school and go on to found companies. I think it's very interesting that even within that narrative of the dropout–hacker–redemption story you actually have a lot of privilege operating.

Why do all startup teams look alike?

Startups in San Francisco tend to be almost entirely white men. People who get funding tend to be white men. People who are able to take economic risks are white men. There's a number of startup programs and incubators out here who give a certain amount of money that is not really a living salary, but it can be enough for one individual to get by. There are a lot of people who can't take advantage of those opportunities: People who don't have parents subsidizing their living, people who have school debt, people who have families, who need to be able to support other people whether those are parents or children or a partner.

It's this fairly narrow class of person, with a certain level of economic stability, that is able to take a risk like that. So we have set up this idea that only a certain type of person––someone with some degree of economic security and very few ties to the community or other people, who can just go off on their own and pursue these projects––can found these companies.

The one thing I would really change would be to get more women and minorities into startups. Women, gay people, trans people, all kinds of different people. I think that would be the most transformative thing in startup culture.

What's the problem with "cultural fit"?

This idea that someone is not a culture fit functions both during the hiring process and when people are already in the company. I know a number of women who have been turned down from jobs because they "weren't a culture fit." I know a lot of people, not just women, but it seems that women are disproportionately affected. "Not a culture fit" is used as a reason to turn people down for a job. Once they are there, it's a way of kicking them out of the culture.

People will say "not a culture fit" without having to define what that means. It's almost this sacred space which lets them uncritically reject people from the company or from the team. On the surface level it tends to mean "We just don't like you. You're different from us. We don't want to figure out how to work with you." "Not a culture fit" gives us a really easy way to disregard your experience and you as a person.

How can unicorns be destructive?

In our industry we put a lot of value on this myth of the brilliant individual contributor who comes up with ideas pretty much in complete isolation. Oftentimes this type of person is very charismatic and carries a lot of social weight on the team. The way you see the unicorn function is that they are not necessarily tied to the formal structure of the organization the way other people are, and they won't have the same set of responsibilities and ongoing obligations that others on the team have, because we are giving them time to think and be creative and wander around coming up with ideas. Because these people aren't sharing in the everyday work of the team, they have more time to come up with these things, while everyone else is trying to hold the business together and so doesn't have that critical time and space to be doing inventive work.

Also, when the very thing we are mythologizing is this one person who comes up with something in isolation, you are excluding the very notion of a team. But everyone wants to participate and contribute to the creation of new products. So pretending that it's a certain type of person sets up this very negative mythology that excludes a lot of people who are interested in that type of work.

How can companies start to examine their own culture?

In order to get people to see what their culture really is, you have to give them tools that help them to break down and analyze and see what's going on around them, the same as we learn from studies of representation. You go to a movie. A lot of people just see the movie, but some people have been trained to see the hidden messages about race and gender. How do we educate people about these hidden worlds and hidden messages?

One of the most obvious examples is that I just wrote a piece about women in technology and I got a few women leaving comments like "I don't experience sexism in the workplace." It's highly unlikely that they are somehow magically immune to sexism, it's just that they don't have the tools to see and understand what's going on.

One way to start people down this path is to bring in the tools of pop culture studies: How do you examine what's going on as far as power dynamics? What do the characters in this movie or fairy tale have in common? There are huge fields of study around these topics. How can we bring those tools and skills into the workplace? Why aren't we applying the same approaches to measurement and precision that we use to build software and companies? Why is culture completely untouched by those means of inquiry?

[Image: Flickr user Stig Andersen]

Weather News Sucks, Can Researchers Crowdsource Something Better?

Are you curious about your neighborhood's weather and tired of receiving general citywide reports that don't apply to your exact location? It turns out there's an app for that, and it uses the guts of thousands of smartphones just like yours to innocuously crowdsource up–to–the–minute weather about specific locations.

Weather researchers from the Royal Meteorological Institute of the Netherlands, Wageningen University, and MIT, along with smartphone developers, hashed out a plan to use the OpenSignal smartphone app to generate better weather reports. OpenSignal takes readings from your smartphone's internal thermometer, which is used to measure the battery's temperature and prevent overheating. By monitoring the temperature of thousands of phones running the app in eight global cities, researchers will be able to calculate temperature averages within 1.5 degrees Celsius (2.7 degrees Fahrenheit) of on–the–ground readings.

Current weather reports can be wildly inaccurate because they take data from just a single source. Although these weather–measuring instruments are usually hyper–accurate, they only take readings from a single location, often the closest airport. This one measurement is then used to generate a citywide weather report, even if the city is miles across. Moreover, few of these instruments exist outside of dense urban areas, meaning suburban and rural reports are even less accurate.

As more people use the OpenSignal app, not only will temperature accuracy increase, but the readings will be localized (even in less populated areas) and up–to–the–minute, which can only make life easier for trend–reading meteorologists.

"The ultimate end is to be able to do things we've never been able to do before in meteorology and give those really short–term and localized predictions," said James Robinson, cofounder of OpenSignal, in a press release. "In London you can go from bright and sunny to cloudy in just a matter of minutes. We'd hope someone would be able to decide when to leave their office to get the best weather for their lunch break."

It's a little creepy to give meteorologists access to your phone's exact location and temperature, but OpenSignal says that of its 700,000 users, 90% opt in to data collection. The large sample size is essential for eliminating outliers: When measuring temperature, for example, a phone might run hot while playing a graphics–intensive 3–D game, so larger numbers allow for statistical accuracy. Unfortunately, the iOS version of the app does not allow the widespread data collection that Android's permissions do, preventing the collection of temperature data from Apple devices.
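
To see why sample size matters, here's a rough sketch (ours, not OpenSignal's actual pipeline; the calibration constants are invented for illustration) of turning many noisy battery readings into one ambient estimate:

    # Aggregate noisy battery-temperature readings from many phones.
    # Batteries run warmer than the surrounding air, so a simple linear
    # calibration is applied; the slope and offset here are made up.
    import statistics

    def estimate_air_temp(battery_temps_c, slope=1.0, offset=-8.0):
        """Estimate ambient temperature from battery readings (deg C)."""
        if not battery_temps_c:
            raise ValueError("need at least one reading")
        # The median resists outliers, e.g., a phone heated by a 3-D game.
        typical = statistics.median(battery_temps_c)
        return slope * typical + offset

    # One phone running hot barely moves the estimate.
    readings = [28.5, 29.1, 27.8, 28.9, 41.0, 28.2, 29.4]  # one outlier
    print(round(estimate_air_temp(readings), 1))  # 20.9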

Newer smartphones have come onto the market with dedicated air temperature, humidity, and pressure sensors. In response, Robinson and his team released WeatherSignal last May, which they hope to use to build a completely crowdsourced weather network––something that will probably pay more dividends than finding the perfect window to jet out for a lunch break.

[Image: Flickr user David Goehring]

The One Thing That Could Stop The Internet Of Things

Remember the quick–to–run–out batteries that powered the toys of your youth? The ones that eventually leaked and wrecked your precious stuff? They've largely been banished by years of battery tech innovations that have made batteries longer lasting. But even our cleverest battery tech may be a huge weakness in Internet of Things devices, especially if we're going to be embedding them in wallpaper, burying them in plant pots, or sticking them in the labels of supermarket products.

But there are, somewhat fabulously, some very clever innovations underway to get around this problem.

For starters, check out this innovation by scientists at Penn State and Rice University: The teams have worked out a way to make solar cells based on organic chemistry, which could lead to fantastically cheap and actually bendy solar power systems.

When you think of a solar cell, you're probably visualizing a large(ish), solid shiny gray object that's bulky, heavy and yet quite probably too fragile to drop. That's because typical solar cells are based on inorganic crystalline silicon, which is hard and somewhat glass–like to touch. Making them is tricky, expensive and requires precision.

A better solution is to use organic molecules to construct solar cells, because these can potentially be made more simply, requiring less precision and at lower cost. But there's not much research into organic–chemistry solar cells, since everyone's trying to make inorganic ones more efficient. And the systems that do exist to produce organic solar cells tend to require some wicked chemical tricks with fullerene––an exotic carbon variant that's said to make the process very hard to scale up for mass production.

So the Penn State and Rice researchers simply sidestepped using fullerene. Instead they have devised a way to get all the complex strings of organic molecules that would make up a solar cell to more or less assemble themselves into the right structure automatically. The trick has been to adjust the shape of the molecules in question carefully, so that when they're turned into a form of plastic they align by themselves and create a brand new way to capture solar power.

The resulting cells are just 3% efficient at converting light into electrical energy, compared with recent records above 44% for the best multi–junction concentrator cells.

But, critically for the Internet of Things, the molecular design can be improved upon, and it's said to be easy to scale up for mass production...as well as being very cheap. This means that soon enough it should be possible to cover walls, windows, and perhaps sticky labels or even Band–Aids with tiny, flexible solar cells to power the wireless systems, processors, and sensors we're going to embed into them.

Failing that, a separate innovation from the University of Massachusetts and the University of Washington does away with the need for a battery altogether. The teams have invented a new type of e–paper display that gets its power completely wirelessly, and only when it's needed.

The trick has been to combine e–paper display technology, which only needs a jolt of electricity when a pixel switches (say, from showing white to showing black), with NFC tech.

NFC systems operate by sending and receiving wireless signals over very short ranges. In some cases, such as in contactless smart subway tickets, the electronics in the card actually capture energy from the radio signals a transmitter exposes them to, then use that electrical energy to briefly power the card's tiny embedded electronics. The new e–paper innovation, called an NFC WISP E–ink Display Tag, works on a similar principle. Essentially it sucks more energy out of an NFC signal than it needs to process the data coming to it, and it uses the surplus to change the pixels of the attached e–paper screen. Considering NFC tech is commonly used in devices like smartphones, this could lead to Internet of Things devices with tiny screens that only update when you need to see them, or other uses like dynamic store shelves or even product labels.

Expect more innovations like this to parallel the Internet of Things revolution, because they'll see wireless tech and smart sensors embedded in places you'd never want to stick a battery.

[Image via Flickr user: JD Hancock]
