
How To Make Room For Experts In Your App


Two hundred years ago the French mathematician Pierre-Simon Laplace, a man famous for his pioneering work in the field of statistics, commented:

“... [The] Mind that in a given instance knew all the forces by which nature is animated and the position of all the bodies of which it is composed, if it were vast enough to include all these data within his analysis, could embrace in one single formula the movements of the largest bodies of the universe and of the smallest atoms; nothing would be uncertain for him; the future and the past would be equally before his eyes.”

Many readers will, of course, recognize this once metaphysical idea as the concept behind today’s recommender systems: You know, those “you enjoyed X, why not try Y?” suggestions used by online retailers like Amazon. By analyzing our past behavior--along with that of users who have expressed similar preferences (the so-called “K-nearest neighbor” algorithm for collaborative filtering)--recommender systems provide a neat and sometimes alarmingly prescient way of predicting what we are likely to be interested in--even before we have come across a particular item.
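To make the "K-nearest neighbor" idea concrete, here is a minimal sketch of user-based collaborative filtering in Python. The ratings data, similarity measure, and weighting are illustrative assumptions only--not Amazon's or Mendeley's actual implementation.

```python
# Toy user-based collaborative filter: recommend items a user hasn't seen by
# looking at the K most similar users (cosine similarity over their ratings).
# All data and parameters here are made up for illustration.
from math import sqrt

ratings = {
    "alice": {"X": 5, "Y": 3, "Z": 4},
    "bob":   {"X": 4, "Y": 3, "Z": 5, "W": 2},
    "carol": {"X": 1, "W": 5},
}

def cosine(a, b):
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, k=2):
    # Find the K nearest neighbors, then score unseen items by a
    # similarity-weighted sum of the neighbors' ratings.
    neighbors = sorted(
        ((cosine(ratings[user], r), u) for u, r in ratings.items() if u != user),
        reverse=True,
    )[:k]
    scores = {}
    for sim, other in neighbors:
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # e.g. ['W'] -- "you enjoyed X, why not try W?"
```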

It would be a mistake, of course, to imagine that such technology is relevant only in the world of Internet retail. Technological development has long been linked with universities and academia; the first music recommender system was developed at MIT in the early 1990s. This trend continues today. Earlier this month, U.K.-based company Mendeley celebrated the connection by staging a mini-conference in London, England on the subject of Academic-Industrial Collaborations for Recommender Systems, with seven presentations delivered by eight different speakers that revealed a plethora of ways in which recommender systems are helping revolutionize the world of academia.

The Search For Accurate Academic Recommendations

FastCo.Labs also had the opportunity to speak with Mendeley’s chief data scientist, Kris Jack, about the pioneering work his company is engaged in.

“Academics are busy people,” he says. “They require time to be able to carry out their research, connect with people, collaborate with colleagues, plan and stage experiments, and analyze the results. As they are doing this, our systems monitor the different signals that are sent out, in the least intrusive way possible, and build intelligent links between them so that we can make recommendations that are useful and make sense.”

It was the English mathematician and philosopher Alfred North Whitehead who observed that society “advances by adding to the number of important operations which we can perform without thinking about them.” In essence, this is the central concept of a recommender system. Unlike search engines such as Google, recommender systems do not require specific search terms to be entered; instead they glean from user data the information that is likely to be of interest and pull it to one side.

The impediment, of course, is working out what exactly that information should be. While customers in online retail can sometimes be grateful for whatever suggestions are made, in the case of academics, companies like Mendeley are building a discovery tool aimed at people well-versed in research. It’s the computational version of trying to sell water to a well.

“If you have a very experienced researcher you do not want to keep recommending information to them that is very fundamental,” Jack says. “If our system sees that users show a deep understanding of the academic papers which have had particular impact in an area, we want to be able to understand in as accurate a sense as possible exactly what it is that they are trying to research so that we can best model their informational needs.”

There is also the added challenge of avoiding filter bubbles. “The idea of serendipity is very important,” Jack says. “One of our real strengths, I think, is the ability to introduce a bit of novelty so that the information we are presenting is not the same as what researchers would necessarily have been able to access themselves had they done the searching themselves in the area they are working in. We want to be able to create links that are not obvious--and ones that would not have been able to be found using a traditional search engine.”

Forget Indexing Books And Articles--Let’s Index Ideas

According to the Internet, the phrase “So many books, so little time” can be attributed to none other than the late American musician and polymath Frank Zappa. From the earliest days of organized knowledge, philosophers, scientists, and anyone else with a vested interest in information have fretted about man’s inability to read and absorb every bit of data that is produced. This situation is only exacerbated by the Internet, which not only creates more information by making everyone into a joint author/publisher, but also opens up our ability to access whatever information is out there.

In much the same way that iTunes helped teach us that the correct unitary measure of music is not the album but the single, companies such as Mendeley are demonstrating that the proper unitary measurement for the academic recommender systems of the future is not simply books or articles, but ideas.

“Every research article that gets written is packed full of different ideas,” Jack says. “When people are reading documents we have the ability to drill down to see whether they spend more time on particular pages, or perhaps highlight certain sentences that are of special interest. Our recommender systems then allow us to say that of the papers people are reading in a particular network you might be part of, such-and-such an idea is the one that gets the most attention paid to it.”
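Jack doesn’t spell out how those signals are combined, but the gist--aggregate per-idea attention signals (time on page, highlights) across a network of readers and surface the idea getting the most attention--can be sketched in a few lines. The event fields and weights below are hypothetical, not Mendeley’s model.

```python
# Toy aggregation of reading signals per "idea" across a network of readers.
# Event fields and weights are hypothetical illustrations only.
from collections import defaultdict

events = [
    {"reader": "r1", "idea": "sparse coding", "seconds_on_page": 120, "highlights": 2},
    {"reader": "r2", "idea": "sparse coding", "seconds_on_page": 45,  "highlights": 1},
    {"reader": "r1", "idea": "dropout",       "seconds_on_page": 30,  "highlights": 0},
]

def rank_ideas(events, time_weight=1.0, highlight_weight=60.0):
    attention = defaultdict(float)
    for e in events:
        attention[e["idea"]] += time_weight * e["seconds_on_page"]
        attention[e["idea"]] += highlight_weight * e["highlights"]
    return sorted(attention.items(), key=lambda kv: kv[1], reverse=True)

# ('sparse coding', 345.0): the idea getting the most attention in this network
print(rank_ideas(events)[0])
```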

The ability to operate at this kind of granular level not only allows research to be carried out more quickly--saving the time it would take someone to scan through a publication for the one or two lines most relevant to their work--but also opens up exciting new possibilities. Imagine, for instance, that based on the ideas a recommender system knows are of interest to you, or else the research goals you have specified, it could then match you up with other researchers for possible fruitful collaboration. Companies could similarly use this technology to recommend particular employees (in the way that Amazon recommends books, or Netflix recommends movies) as the person to speak to about a specific problem, thus stripping away unnecessary levels of bureaucracy.

More enticing still is the potential for cross-pollination across academic subjects. “It may be that there is a question in computer science, described using a certain terminology, that no one has the solution for,” Jack points out. “That exact same problem might be being addressed using completely different words in the field of physics--where they do have the answer. The question that we are dealing with is how exactly do we create these links so that we can understand that these two problems, despite being described in different disciplines using different words, are actually the same problem?”

Back in the 1960s, Xerox PARC researcher Alan Kay dreamed of transforming the computer into the “meta-medium” that would encompass every other media, from typing to video editing. Half a century on, recommender systems might be doing much the same for the gap between disciplines.

Other Highlights From Academic-Industrial Collaborations for Recommender Systems

Former Microsoft Xbox recommender system researcher Jagadeesh Gorla began the day by delivering a presentation entitled “A Bi-directional Unified Model,” in which he described group recommendations using a new probabilistic model based on ideas from the field of Information Retrieval, which learns probabilities expressing the match between arbitrary user and item features.


Double act Nikos Manouselis and Christoph Trattner then took to the stage to discuss the opportunities and challenges presented by academic-industrial collaborations: giving honest and candid reflections from both sides of the fence.


Heimo Gursch delivered his “Thoughts on Access Control in Enterprise Recommender Systems,” describing a system that enables employees within a company to effectively share their access control rights with one another, rather than relying on top-down authority to provide them.


Maciej Dabrowski discussed his recent work in a presentation called “Towards Near Real-Time Social Recommendations in an Enterprise,” which explained recommender systems that exploit semantic data from linked data repositories to generate recommendations across multiple domains.


Benjamin Habegger gave a roller-coaster ride of a talk in which he described the ups and downs of his most recent startup, reflecting on the mistakes made along the way and questioning his decision to work with academics during the process.


Finally, Thomas Stone presented “Venture Rounds, NDAs, and Toolkits” in which he described applying recommender systems to the field of venture finance, discussed the nightmare experiences with NDAs during his PhD, and offered an introduction to PredictionIO, an open source machine learning server.


Why Britain's New Porn Filter Is Doomed To Fail


David Cameron, Britain's prime minister, announced a number of initiatives against online pornography today, beginning with a nationwide filtering of pornographic content at the ISP level that consumers have to deliberately deactivate if they want to view adult content. It's all about protecting the public and stamping out child abuse.

Further, "extreme pornography" such as simulated rape is to be deemed illegal, and in collaboration with sites like Twitter--which will implement filters to prevent unacceptable pornography being shown on its services--there's going to be a push to prevent such images from being promoted online.

But the prime minister's move prompted some serious controversy online. Some are suggesting that the new regulations constitute censorship, and that there are already many protections in place to prevent abusive adult content on the web, including the "corroding" images that Cameron wishes to outlaw. The intimation is that Britain's new attack on porn is a gateway to censorship, and that the net it casts now could easily be expanded in years to come to cover other content deemed "unacceptable" to the state.

Also in the Internet's cross hairs is the system that the U.K.'s porn filters will use. It's said to be keyword based, with the Child Exploitation and Online Protection Centre drawing up "abhorrent" words that will be blocked in an attempt to curb pedophiles.

But this exposes Cameron's system to a number of problems that begin with the very choice of words themselves. In the U.K. this is typified by the "Scunthorpe problem." Scunthorpe is a town of about 72,000 people in the east of the U.K., proudly drawing its name from the older "Escumetorp," Old Norse for "Skuma's homestead." But the second to fifth letters of the town's very name could pose a problem to very blunt-edged porn keyword filters. Similarly, if one includes American terminology, the small Devon town of Westward Ho! (one of the very few places to include a "!" in its name) may also be under threat.

These two linguistic examples are a little lighthearted--though the thousands of people living there may disagree--but there are worse problems. Tumblr's recent attempts at completely dead-stocking adult content are a perfect demonstration of this: As part of its tech-led crackdown, Tumblr also included bans on keywords like "#gay", which was seen by many as a barefaced snub to millions of homosexual people around the world. Adult content is also not necessarily pornography, and as author Nick Harkaway demonstrated via tweet, he couldn't even access the Prime Minister's announcement because porn blockers prevented him from seeing it. Educational sites that include adult content could also be affected.
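It is easy to see why a blunt, substring-based blocklist produces exactly these false positives. The toy sketch below (the blocklist and examples are illustrative, not the actual CEOP word list) shows a naive filter flagging "Scunthorpe," and a whole-word filter fixing that case while still tripping over "Westward Ho!"

```python
# A toy illustration of the "Scunthorpe problem": blunt keyword filters
# flag innocent text. The blocklist here is illustrative only.
import re

BLOCKLIST = ["cunt", "ho"]

def naive_block(text):
    # Substring matching: catches profanity, but also "Scunthorpe".
    lower = text.lower()
    return any(word in lower for word in BLOCKLIST)

def word_boundary_block(text):
    # Whole-word matching fixes "Scunthorpe"... but still flags "Westward Ho!".
    return any(re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE) for w in BLOCKLIST)

print(naive_block("Scunthorpe town council"))          # True  (false positive)
print(word_boundary_block("Scunthorpe town council"))  # False (fixed)
print(word_boundary_block("Welcome to Westward Ho!"))  # True  (still a false positive)
```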

Finally, Cameron's suggestions ignore the ingenuity of the Internet. It wouldn't be hard for a developer to build a service that filters out blacklisted words from any website and serve it to British citizens without being caught by the filters. It would also be easy for folks determined to dodge the filters to coin new terminology that's one step ahead of the censors--much in the same way Chinese citizens use colorful terminology to talk about banned political topics. The censorship list would likely expand to include those new terms, but that means the censors would be playing a game of catch-up and that the porn filters might start accidentally filtering out so much meaningful content that they become useless.

No one would contend that an effort to stamp out online images of child abuse is a bad thing. In fact, it's admirable. But the PM seems to be planning to use some very blunt tools to do the job, and that risks damaging his entire system.

[Image: Flickr user Vaughan Leiberum]

What Facebook Learned From Building 3,000 Apps For “Dumb” Phones


Facebook isn’t taking recent news that it’s losing subscribers sitting down. On the contrary, the social giant is trying its hand at tapping the largest market in the third world--$20 phones.

Called Facebook for Every Phone, the initiative runs on over 3,000 models of feature phones (pre-smartphone handsets--think Motorola’s Razr and its clamshell peers), and one in every eight phones on the planet now logs into Facebook through it. Phone carriers and manufacturers subsidize users’ data usage, allowing low-cost (if not free) news reading and photo sharing.

While Facebook has just begun to introduce these apps, its investment in the foreign markets of India, Brazil, Vietnam, Indonesia, and Mexico is a shrewd move in growing markets: Potential users in the developing world are eager for social media interaction but cannot afford $600 smartphones or $40-per-month data fees.

The Every Phone project began in 2011 after Facebook’s acquisition of the Israeli startup Snaptu, whose team quickly set out to re-engineer the Facebook interface and data transmission to work on very slow cellular networks. They also focused on optimizing the company’s mobile apps to display chat and photo features on feature phones with much lower processing power than today’s smartphone powerhouses.

The investment is already paying dividends. The Facebook interface for slower feature phones is five to ten times more efficient than the company’s smartphone apps, and efficiency improvements from the lower bandwidth versions are already making their way into the company’s flagship smartphone apps. As the user experience becomes more streamlined, so too does the pipeline for advertisements in the foreign mobile market. Although the ad platform on feature phones is not nearly as profitable as the one on smartphone apps, improving it will only pay dividends as Facebook continues to scale its global audience, phone by phone.

[Image: Flickr user Johan Larsson]

Inside The Tech Stack Digg Used To Replace Google Reader


When the team at Digg learned about the impending demise of Google Reader, they knew they had to act fast--and build something killer. The resulting void would leave millions of potential users on the hunt for something new, and the recently Betaworks-acquired Digg was unusually well-positioned to build a product dedicated to reading things. But with a limited time frame in which to create Digg Reader, the team had to limit the product's scope and choose its tech tools wisely.

Leading the charge on the technical side at Digg is CTO Mike Young, who gave us a glimpse behind the scenes at the tech stack and development frameworks used to replace a beloved, eight-year-old service in a matter of months.

Keeping Things Scalable With Amazon's Cloud

In 2013, rolling one's own infrastructure might be a laudable technical feat, but for most it's not worth the trouble when super-reliable cloud hosting is so readily available. Not surprisingly, Digg is leaning on Amazon Web Services (AWS) for its hosting and a few additional infrastructural needs.

"We currently have a mix of infrastructure/services that we built ourselves over the last year for Digg.com as well as some of the AWS-hosted services for storing data (DynamoDB) and queueing (SQS)," says Young. "Since we had such a short time frame to build Digg Reader we had to lean heavily on some of the hosted AWS services, like DynamoDB, versus rolling our own."

Here's a breakdown of what they're using:

  • Amazon Web Services for hosting and content delivery. Young says they're fully hosted on AWS, running most of Digg Reader off of instances of Amazon's Elastic Compute Cloud (EC2).
  • DynamoDB. Young's team uses Amazon's NoSQL database solution rather than building their own "since we had such a short time frame to build Digg Reader," he says.
  • Amazon’s Simple Queue Service (SQS) is used for queuing messages between components of Digg's Amazon-powered backend. For those unfamiliar with message queues, Wikipedia offers a pretty thorough primer, and a minimal code sketch follows this list.
  • Amazon Route 53 is used for DNS management. The service doesn't offer domain names, but rather lets users control the DNS settings then map the domains and subdomains to specific IP addresses and other standard DNS settings.
  • The AWS Elastic Load Balancer is necessary for sites expecting as many sudden visitors as Digg Reader was. It smartly distributes all that inbound traffic among Digg's EC2 instances for maximum fault tolerance and stability.
  • Amazon's Simple Storage Service (S3) and Cloudfront CDN are used for storage and delivery of images, JavaScript, and CSS.
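For readers who haven't used a message queue before, here is a minimal sketch of producing and consuming SQS messages with the boto3 client. The queue name and payload are hypothetical, and this is not Digg's code (which, circa 2013, would likely have used the older boto library)--just an illustration of handing work between backend components.

```python
# Minimal SQS produce/consume sketch with boto3. Queue name and payload are
# hypothetical; this illustrates queueing a job (e.g. "crawl this feed")
# between backend components, not Digg's actual implementation.
import json
import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="feed-crawl-jobs")

# Producer: enqueue a crawl job.
queue.send_message(MessageBody=json.dumps({"feed_url": "http://example.com/rss"}))

# Consumer: pull up to 10 jobs, process them, then delete them from the queue.
for message in queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=5):
    job = json.loads(message.body)
    print("crawling", job["feed_url"])
    message.delete()
```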

"I don't think we would have been able to pull this off in the time frame we had without AWS and the ability to scale up a large number of machines in short period of time," says Young.

More Fun On The Backend of Digg Reader

"One of the things that was so amazing about Google Reader was how fast it was, both in serving up content when you loaded the page or paginated through feeds, but also on the feed aggregation/crawling side," Young says. "We spent some time talking with the original Google Reader team when we first started the project, and they were kind enough to give us some great insight into the product and infrastructure."

In addition to Amazon's hosting and services, Young's team cobbled together a number of other backend engineering tools and techniques to get Digg Reader to run as quickly and responsively as possible.

"The backend is written in Python and is pretty standard in terms of the stack that we use," he says.

Here's a breakdown of the other backend tools they're using:

  • Python is the backend programming language of choice at Digg. The general purpose scripting language is used by everyone from Dropbox to NASA and is generally quite popular, so this is no surprise.
  • Memcached, redis, and twemproxy are used for caching (see the sketch after this list). Redis and memcached are popular open source key-value stores designed to speed up web apps by lightening the load on databases. Twemproxy is a lightweight proxy for memcached and redis created by developers at Twitter.
  • MongoDB. In addition to Amazon's DynamoDB, Digg uses the uber-popular NoSQL database system MongoDB.
  • Tornado is the Python Web framework and networking library of choice at Digg Reader. Originally developed by FriendFeed, Tornado "can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user."
  • Beanstalkd. To keep processes moving in the background, Digg uses the work queue Beanstalkd to run tasks asynchronously and keep the user experience as speedy and interruption-free as possible.
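As a rough illustration of how those key-value stores lighten database load, here is a cache-aside sketch using the redis-py client: check the cache, fall back to the database on a miss, then populate the cache. The key scheme, TTL, and database call are hypothetical, not Digg's.

```python
# Cache-aside sketch with redis-py: serve hot reads from the cache to spare
# the database. Key scheme, TTL, and fetch_feed_from_db are hypothetical.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def fetch_feed_from_db(feed_id):
    # Stand-in for a real MongoDB/DynamoDB lookup.
    return {"id": feed_id, "title": "Example Feed", "unread": 42}

def get_feed(feed_id, ttl=300):
    key = f"feed:{feed_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit: no database round-trip
    feed = fetch_feed_from_db(feed_id)         # cache miss: hit the database...
    cache.setex(key, ttl, json.dumps(feed))    # ...and keep the result for 5 minutes
    return feed

print(get_feed("12345"))
```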

"For code deployment and server monitoring, we are using a mix of tools right now: fabric, statsd + graphite, Amazon's Cloudwatch, sentry, munin, nagios, and pager duty," says Young. "And Chartbeat, of course."

"Gilad Lotan, the Betaworks Data Scientist, has built a system that allows us to score any URL based on a number of signals like tweets, Facebook shares, Diggs, etc.," Young explains. "This is currently shown in the list of 'Popular' stories in Digg Reader. He uses a mix of redis, memcache, zeromq, and hypertable."

Styles, Libraries and Frameworks: Making the Front-End Shine

The front-end of Digg Reader is, as Young puts it, "built with JavaScript, CSS, and a lot of love."

"I'm really proud of what [design director] Justin Van Slembrouck and the dev team pulled off in such a short amount of time," says Young. "There is a lot left to do, but it's really exciting. We are really just getting started with this."

One of the team's biggest challenges was ensuring the Reader experience was as fast and responsive as possible. This isn't easy when you're aggregating and storing more than 8 million feeds and doing all kinds of heavy-lifting on the backend to make everything feel effortless. Thankfully, for the front-end guys, there's no shortage of frameworks and tools to help patch together something that meets users' not-always-patient demands.

"Jon Ferrer and Kevin Barnett have done a great job in making the site feel very responsive," says Young. "One of our big goals for launch was to make the "All" feed feel very fast for users. We wanted users to be able to scroll through page after page of their "All" feed and have it feel very smooth. Jon is using some tricks in terms of loading and then showing the data to make sure the scolling and pagination of the All feed feels smooth."

Here's a breakdown of what powers the front-end:

  • jQuery, the wildly popular JavaScript framework, is used by the Digg Reader team to simplify scripting tasks and just generally make life easier.
  • Backbone.js is huge. The RESTful JSON JavaScript library is used on a wide variety of huge sites and web apps, from Pandora and Hulu to LinkedIn and Pinterest. Digg joins the long list of prominent web properties that use it to make script-wrangling easier.
  • Require.js and the r.js optimizer are used to load and manage dependencies between JavaScript modules.
  • Uglify.js is described as a "parser / compressor / minifier / beautifier" for JavaScript code. In other words, it takes often unwieldy-looking JavaScript and crunches it down into something cleaner and lighter weight.
  • Lo-Dash is a low-level utility library for JavaScript developers that enables customization, improves performance, and offers additional features. It's similar to underscore.js, which Digg also uses for JavaScript templating.
  • LESS is a CSS preprocessor that helps developers extend what's possible with Cascading Style Sheets (CSS), bringing a programmatic flavor to the code that powers how websites look and feel.
  • Node.js and Git pre-commit hooks are used to build out Digg's production CSS and JavaScript from their source files.

"Other than that, it's pretty much just GitHub, Asana, Hipchat, and code editors," says Young. "The old guys (like me) use Vi while the younger guys use fancy new tools like CodeKit and Sublime Text that I think pretty much write the code for you! Kids!"

[Image: Flickr user Johann Snyman]

Weird “Hologram Concerts” Allow K-Pop Artists To Transcend Space, Time


After HoloTupac’s unveiling at Coachella 2012, fears abounded of money-vacuuming Elvis and Jimi revival tours--but the real concert future, according to Korean agencies, lies with globally broadcasting stereoscopic holograms of living groups to satisfy K-Pop fans.

The move toward broadcasting holographic performances would cut down on the immense overhead costs and scheduling nightmare of shipping talent groups around the globe. Two talent agencies called Success Museum Entertainment (dubbed SM) and Yang Goon Entertainment (sobriquet: YG) have kicked off the movement with their own experimental theaters: SM has plans to open a “V-Theater” tourist attraction in August, and YG has already opened “K-Pop Hologram” in the Everland theme park in Yongin, Gyeonggi Province. The latter is the first of a campaign to open 20 such theme park venues in China, Hong Kong, Singapore, North America, and Europe by 2015. The series will feature many of the agency’s artists--including the debut of a PSY hologram concert.

SM has been experimenting with hologram projection technology for over a decade, and they’ve already begun integrating holographic performances into their groups’ tours--like girl supergroup Girls’ Generation, whose June 8 and 9 concerts at the Olympic Stadium featured a test of the technology (to say nothing of their “V Concert” hologram performance last January at Seoul’s Gangnam Station).

Of course, this holographic cultural invasion could fall flat as K-Pop fans clamor for their artists in the flesh--the only precedents for concert-level holographic performances involve either dead performers (as in Coachella’s Tupac and Celine Dion’s duet with Elvis) or entirely fictional artists (like Japan’s Vocaloid idol, Hatsune Miku). How will the value of live performances change when fans can see digital reconstructions of their idols?

4 More Ways To Monetize Your Music--Without Spotify


The hardest part of making music is not crafting memorable melodies, but justifying the time involved by making some money. Since streaming services have proven they can't pay the bills even with millions of plays, we previously rounded up four services to help you monetize your music. Here are four more:

Ustream

Live concert streams aren't rare, but charging for them hasn't become commonplace yet. This space is ripe for artists to explore, with the potential for plenty of profit. There are a few different services specializing in music streaming, like Vyrt or Evntlive, but Ustream.tv is probably the most mature at this time. Artists, put on a virtual tour and save your fans some gas money.

Chirpify

Its name implies Twitter, but Chirpify allows artists, as well as other merchants, to sell both digital and physical goods over most social networks. The idea is to bring the store to your followers, having them reply with a specific word like "buy," then allowing the automated transaction to take place in the background. In addition to possibly leveraging a huge social following, Chirpify also only charges 5% plus $0.30 per transaction, with enterprise pricing going down to 2.9%. Gumroad is another very similar service, which charges 5% plus a $0.25 transaction fee.

Soundslice Pitch Perfect

Sell your music again, in a different form. Soundslice is starting to make it possible for artists to sell tabs of their music for other musicians to play along with, exactly as it should be, without a lot of guessing. Soundslice offers a pretty amazing tool for guitarists which syncs the chords and tabs to a song, even allowing you to play at half speed with no pitch loss. Opening up the opportunity for artists to charge is a nice touch. This may be thought of as a niche product, but as of 2011, sheet music was a $2 billion/year market. Plus, it only takes a few seconds into the Soundslice demo to convince most people this is an amazing service.

PledgeMusic

Sure, Kickstarter is great and works well, but PledgeMusic is an entire site dedicated to crowdfunding tours, albums, and gigs. Artists like Slash, Minus The Bear, Lissie, and other top-tier names are all currently using the site to fund their future music. The site makes it easy for listeners to browse current projects, but also lets artists make campaigns profitable even if they don’t quite hit their funding targets.

[Image: Flickr user Bob Mical]

A Dozen Apps For The Connected, Smartphone-Wielding Musician


The question was simple: "What app(s) do you use with regard to your music career?" The answers were unpredictable at best as a range of artists looked down at their phones and electronic devices to remember what they currently consider irreplaceable.

Keith Goodwin of Good Old War

“Love NanoStudio while we are on tour. It's a good way to keep busy working on music while in a bus or a van. The songs I make in this app are more like exercises than songs for Good Old War. It’s good to keep the creative juices flowing. It’s a good way to work on my programming skills. Without it there isn't much to do besides listening to music and reading.

I use GarageBand on the go whenever an idea pops in my head or I'm playing piano or guitar and I have an idea I'd like to remember so I can work on it later.

Logic Remote is fun for when I'm at home and set up in my room. It controls the Logic program on my computer and allows me to lay in bed and mix. Lazy man’s mixing.

Animoog is a cheap way to get cool Moog synth sounds.”

Stephen Christian of Anberlin

“Here are a few apps we use on the road/in our career. It's nothing new, but Yelp is a favorite--when we wake up in the morning we usually have no idea what city we are in or if we're around anything worthwhile. Yelp's filters usually solve the mystery of finding good coffee or food close by. We have to walk everywhere when we get off the bus, so distance is a factor.

The biggest asset for touring is Master Tour from Eventric; it’s an app that our tour manager uses to inform us of set times, sound checks, interviews, and more.

TuneIn radio is useful for keeping up with our favorite podcasts no matter where we are in the world, and music apps like Spotify and Pandora are constantly being used backstage.

As far as recording goes, I don't use anything extravagant; the voice recorder app is perfect for throwing down melody lines that strike me or a guitar riff that I just don't want to forget. To date I have 157 different messages to myself, the majority of them just a few lines of lyrics or a melody line that I will go back and listen to for inspiration when the writing process begins.”

Josh Ocean of Ghost Beach

“We use the DM1 drum app for a lot of drum programming in the studio. Also the Animoog synth app offers fun sounds with big Moog quality.”

Matt Wertz, solo artist

"GuitarToolkit is my best friend. Not only is it an awesome guitar tuner, it also has settings for banjo, mandolin, bass, and pretty much anything else with strings. On top of that, GuitarToolkit has a really awesome chord reference feature that shows how to play chords with nearly every voicing imaginable, and a metronome that comes in really handy too. It's on the front page of my iPhone."

Robin Hannibal of Rhye

"I use metronome and tuner apps since they’re fast and efficient." [Here are five of the best metronome apps.]

Ryan O’Neal of Sleeping At Last

“Some of my all-time favorite apps are: SoundPrism, a unique instrument by Audanika that I've used quite a bit on recent recordings. I, of course, also love the voice recorder app which comes standard on the iPhone, a vital writing tool for me.

Day One is a beautiful journaling app where all of my lyrics are collected and GuitarToolkit has been with me since the day it was released. It's responsible for tuning every guitar and ukulele of mine on every record over the last few years.”

Andy Zipf of The Cowards Choir

"My team and I use IFTTT (If This, Then That) to make announcements on all fronts, all at once. For instance, when I upload a new track to SoundCloud, it goes to my personal Facebook page, Facebook music page, Tumblr, Twitter, and my website. It also hits the second-tier social sites that are synced with my Twitter feed. With the new IFTTT app I could feasibly announce a new release on iTunes from my iPhone and have it sent out to all corners of my online identity while sitting in the parking lot of a Super 8 in Tuscaloosa, AL."

Sean Scanlon of Smallpools

"We like to use Snapchat to keep in touch with fans/friends we meet while on the road. You never know what to expect every time a new alert comes in so it helps keep the long drives interesting."

Adam Young of Owl City

"I use the Dictionary.com iPhone app when I write lyrics. Using a thesaurus is vital to my writing process so having the ability to reference words on the go, without internet is a must. So often a random word will jump out at me and provide inspiration. I can't work without it."

[Image: Flickr user Alan Levine]

Is Siri Paving The Way For Immersive Audio Gaming?


We like the idea of experimenting with storytelling around here. We’ve tried it with slow live blogging and we’ve tried it with audio. We’ve covered holographic K-pop concerts and strategy for startup blogging. So you can only imagine our intrigue when we came across something called Codename Cygnus--an interactive radio drama.

Choose Your Own Adventure--By Radio

Radio dramas recall images of Orson Welles stories being broadcast out to the masses in their living rooms, huddled around a cabinet-sized AM receiver. Codename Cygnus is technically a part of that tradition: Part casual game, part interactive audiobook, it’s immersive audio storytelling, lacquered in the delicious intrigue of an old-time spy drama.

In a world of ubiquitous Siri copycats, Codename Cygnus seems archetypal of a new way of storytelling--in which listeners become participants and the whole story is bespoke to the user. We’re all accustomed to interactivity on the web, but broadcast audio? That’s the original one-way medium. But not anymore. The benefits of immersive audio narrative--low overhead, portability, and a commute-friendly format--might be enough to make this style of gameplay stick.

In This Narrative, You’re Free To Be You

Reactive Studios is banking on an experience that sits comfortably between button-mash gaming and passive audio entertainment. Jonathon Myers is the game’s creator. Myers’ experience working on mobile games (along with playing his fair share on the T-line commute through Boston) got him thinking about an audio program that would be more immersive than the NPR stories we’re used to.

But what about the main character? How can you be fully immersed when the protagonist is (say) a burly hitman and you’re a teenage girl? In Codename Cygnus, gender pronouns directed toward the player are stricken from the game--in the main storyline, the player is always “The Agent.” Such subconscious reinforcement of the player’s imagination drives it further from the realm of audiobooks, but Reactive Studios didn’t stop there: In their fidelity to the spy genre, one of the most beloved tropes--mysterious, high-stakes romance--is prominently included, with players given the choice to become involved with whichever romantic options are germane to the plot.

“You can pursue who you want to pursue,” Myers says.

And, of course, the game was built around choice: try to talk the thug down or tackle him? Such choices affect both the story and the player’s stats: Myers describes it as “RPG-lite,” building the player’s character behind the scenes with zero bops or HUD indicators in the game’s quest to eliminate or streamline any UX mechanics that break immersion.

The Most Accessible Game Ever?

A narrative designer and writer with experience as a playwright, Myers pitched the idea for Codename Cygnus during a meeting of a roundtable for indie game designers. The idea had legs among his peers--they threw dollars in Myers’ face and told him to “make it now, take my money,” he says.

Myers conceived of the game after finding himself using his phone for entertainment during his commute. He preferred pick-up-and-play experiences with negligible learning curves. An audio-based adventure game would be one of the easiest, he figured.

Not many people are trying experiments like this one. Interactive narrative, as Codename Cygnus might be categorized, has slim pickings in today’s game market. The world is historically awash with text-based choose-your-own-adventure books, but Myers found precious few games that shared his team’s vision.

One ally was Dan Brainerd, whose choice-based progression in the Steam game Monster Loves You mirrored Codename Cygnus’s choose-your-own-adventure-style narrative flow. Still, that game was visually based--a different paradigm. Myers turned next for inspiration to Naomi Alderman, writer of Zombies, Run!--an audio narrative-based game broken up into “missions” that pop up with updates between songs during your workout. Codename Cygnus will likely come with a few illustrations, but as it stands, the voices, choices, and the narrative will be the bulk of the gameplay experience. Once up to speed, the team plans to put out a 15-to-20-minute episode per week.

The game will be available for episodic purchase in late August, and features voices like Logan Cunningham (narrator of Bastion) and Sarah Elmaleh (of Skulls of the Shogun). Reactive Studios has established a Kickstarter to pay off the professional sound effects library, tech licenses, and plug-ins that made recording of the first episodes possible. Better music, voiceover brushups, and polish are also on the list. As of press time, the Kickstarter had reached one-third of its $11,000 target.

Even before this polish had been funded, Myers took a demo around at last March’s Game Developers Conference. Wandering around the floor, Myers wrangled anyone he could to try the rough beta version of Codename Cygnus. At first, they were perplexed. And then he pressed play.

“When I put the headphones on their ears, I just saw this smile on their face,” Myers said.


Why Your Software Should Act Like People


Have you ever gotten angry with your iPhone or given a name to your car? Sociologist Clifford Nass has spent his career investigating how people interact with technology and concluded that we subconsciously expect software to respect the same social rules as other people. Software is a social actor, and developers need to design it to respect social mores.

Nass was studying computer science when he took a sociology course, thinking that it was the easiest way to gain the credits needed to finish his degree. Soon he realized that a sociologist who knew how to code could do new types of research. As he puts it: “Galileo was one of the first people in the world to have a telescope. He was bound to discover something.” FastCo.Labs sat down with Nass to learn his philosophy on properly socialized code.

What do developers need to know about the social rules we apply to technology?

To a remarkable degree people approach technology using the same social rules, expectations, and heuristics that they use when interacting with other people. Very frequently technology behaves very badly from a social perspective. When any interface is designed the simple question that people should ask is “How would I feel if a person did this?” and it's remarkable how often you would find that you would do things very differently if you just ask yourself that question.

Can you give me some examples of software behaving badly?

Almost all of it. If you do something and someone says to you “That's wrong!” without giving any guidance or help, that would be considered extremely impolite and uncaring. In a lot of query or help systems, you type in a query and it gives you some answers. If you type the same request again, it's implicit that you don't want the same answer, and yet that is what software will frequently do. No matter how many times you ask it will give you the same answer. That comes off as passive aggressive.
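A toy sketch of the fix Nass implies--noticing that the same question has been asked again and varying the answer instead of repeating it verbatim--might look like the following. The canned answers and query are made up for illustration.

```python
# Toy "socially aware" help system: if the user repeats the same query,
# don't repeat the identical answer--escalate to something new instead.
# The canned answers are illustrative only.
ANSWERS = {
    "printer not working": [
        "Check that the printer is plugged in and powered on.",
        "Since that didn't help, try reinstalling the printer driver.",
        "Still stuck? Here's a link to a human support agent.",
    ],
}

seen_counts = {}

def reply(query):
    count = seen_counts.get(query, 0)
    seen_counts[query] = count + 1
    answers = ANSWERS.get(query, ["Sorry, I don't have an answer for that yet."])
    # Clamp to the last answer rather than looping back to the first.
    return answers[min(count, len(answers) - 1)]

print(reply("printer not working"))  # first answer
print(reply("printer not working"))  # a different, escalating answer
```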

What about newer personal assistant systems like Siri?

The voice interface increases the power of social response. There are some things that Siri does well in the social realm, some things not so well. One problem is that when Siri doesn't understand something, it blames itself. It says “I didn't understand that” or “Could you repeat that?” It turns out to be better not to blame yourself but instead to ignore who is to blame, or even to blame a third party. For example, “There was noise on the line. Can you repeat that?”



What techniques should developers use to make people like their software more?

We don't get much praise from technology. That's a significant omission. In the language space, as technology starts using full sentences rather than simple words and commands, there are plenty of opportunities--for example, mirroring the person. We love people and technology that mirrors us. Here in the Bay Area we call San Francisco “The City.” Hardly anyone says San Francisco. So if I use the phrase “The City” the software should not only understand that but use it back at me. That's what humans do to each other to indicate caring. People like to feel that they are being cared for, that the technology cares about them.



We talked in a previous interview about social prosthetics--software to make us more likeable. Do any such systems exist?

We are getting closer to the social prosthesis of technology giving advice. Google Glass is a great opportunity for this. It could refer me to information about you. It could remind me of things you did that I should refer to, not just trivial things like “It's her birthday” but more complex things like “She really likes talking about such and such so you should discuss that.” Because Glass is constantly available, you wouldn't have the awkwardness of pulling out a phone. The whole wearable movement allows the technology to be a prosthetic in a much more powerful way because it's seamless.

What about interaction with robots, which seem to be headed for the mainstream as consumer products?

Robots up the social ante. When you have a body, that suggests humanness. You have higher social response and social expectations. A piece of software that violates a social rule is not as annoying as a robot that breaks a social rule. Failures to do it right have much greater penalties. For example, if the robot is small it has to adopt a subservient role. The language it uses has to be subservient, its voice has to be subservient. It's very bad to have a robot which is small and diminutive but then speaks in a rather aggressive way. It's very very important that the voice, language, etc. of the robot conforms with its size, its appearance, and whether it looks male or female.

You once said that technology companies shouldn’t hand out T-shirts since it interferes with team bonding. What social strategies can companies use to build teams?

In all societies there are markers of belonging. Having things that say “we are bonded, we are similar, we are tied,” is very important. Most groups will have inside jokes, or things they say or certain ways of doing certain things. Those all exist to create a feeling of bonding and that's exactly why companies have T-shirts. It's supposed to remind them that they are part of a team. However, if everyone has a T-shirt it's no longer bonding. If you share your inside joke with everyone, it's no longer an inside joke and it therefore loses its power and meaning.

To create a team you need two things. You need identification, clear reminders that we are part of a team, and dependence--the notion that we need each other. Incentive structures that reward individuals versus other individuals work against that. You want to emphasize similarity, shared interests, and a shared stake in outcomes.

Have people praise each other--cross-supporting praise is very powerful. The problem is that we think that critics are more intelligent than people who praise. We like people who praise more, but we think our critics are smarter. So the trick to make this work is to develop these mutual admiration societies. I praise you, so you might think I am not that cool, but then a third party says “Hey, Cliff is smart.” That's how you do it.

[Image: Flickr user Noli Fernan "Dudut" Perez]

This Google Glass Porno Flick Foreshadows Your Future Sex Life


Earlier this year, after Google announced Google Glass, a porn company called MiKandi immediately released the first porn app for Glass. Called Tits & Glass, it was the Instagram of homemade porn, allowing Glass users to shoot, share, and vote on porn videos. Immediately afterward, Google decreed that no porn apps will be allowed to run on its Glass eyewear, and MiKandi was forced to pull the app. Now the company is trying again--and there are implications for everyone.

For all their policies, big tech companies can’t keep porn companies--notorious early adopters--completely off their platforms. And experimentation in the porn industry has a way of finding copycats in bedrooms all over the world. When this technology democratizes and imitators begin springing up, the connection between flirting apps, dating apps, hookup-finders, and virtual sex-bots will begin to coalesce, creating new, strange and shockingly intimate ways to fool around with other people--whether they’re in the same room or not.

Follow us down the rabbit hole that is the future of porn, and you’ll find yourself implicated in ways you never imagined.

This time around, MiKandi (which runs an Android-based porn app store) has contracted porn superstars Andy San Dimas and James Deen to be the world’s first couple to shoot an adult scene using Google Glass. It may not be the most romantic story, but it’s a tale of an Internet business trying to keep up with trends, even as the platform makers find more ways to single it out for exclusion. And when you stop and think about what these companies are inventing, you get the feeling that this experiment is an uncanny extension of the apps that we use in our “normal” dating lives today, like Tinder or Chatroulette.

Is this what the future holds for all of us?

Tech And Porn Have Always Been (Secret) Friends

Despite what Google may wish, porn and technology have always gone hand in hand. Glass-plate negatives still exist from porn pictures taken with cameras circa 1900. Same goes for film shot on celluloid: Pornographic movies were being made side by side with the silent films of the 1920s. In the 1980s, it was home video porn that in part helped to make VCRs so popular. And in the 1990s some of the staunchest backers of the transition from VHS to DVD were pornography companies.

In 1994, when AOL had the dream of getting the Internet into every home in America, porn companies dreamt of streaming porn right to your computer whenever you wanted it. Not only does porn love technology, it loves the latest technology--even before most consumers do. And today, the latest technology is wearable tech, which offers the ultimate in vicarious pleasure: radically lifelike point-of-view porn.

POV is a trick used in both Hollywood filmmaking and adult filmmaking to let the viewer feel like the camera’s eyes are theirs. The problem is, no matter how skilled the Hollywood cinematographer or the porn videographer, traditional POV shots don’t have a true lifelikeness because the camera is still too big to get intimate, close-up line-of-sight shots as we would see with our naked eyes. Glass changes that. And while the quality of Glass-shot video isn’t good enough for Hollywood blockbusters, it’s more than good enough for porn. (And the occasional NCAA basketball star, documenting POV celebrity experiences.)

And that’s why MiKandi CEO Jesse Adams jumped at the chance to shoot the world’s first Google Glass porn as soon as he got a pair of the high-tech specs.

If Your Glass Porn App Is Banned...Use Glass To Shoot Your Porn

MiKandi is the world’s largest adult app store. Launched for the Android platform in 2009, MiKandi now offers over 8,000 adult apps, generates over a million downloads a month, and has over 4 million users--with another 3,000 users being added each day.

Adams tells me he started the store with a few friends from Seattle who had experience in both the mobile and the pornography industries. “We knew we wanted to build something together that combines sex and tech, so when the idea of an adult app store came up, it was just one of those great ideas with huge potential that we had to jump on. One of our cofounders also had foresight to predict the rise of Android way before anyone really cared about it, so we made a smart bet on developing for this platform early on.”

But despite Adams’s embrace of Google’s then-fledgling OS, Google hasn’t been so embracing of Adams’s first endeavor onto the company’s new Glass platform. “When we first announced that we received our Glass device and were developing an app for it, we paid careful attention to Google's Glass Platform Developer Policies,” Adams says. “At the time, there were no restrictions on adult content, so we went full speed ahead on our adult app for Google Glass. The weekend before we officially announced our app, Google quietly revised their terms to prohibit sexually explicit material on Glassware. As soon as we learned of this update, we voluntarily disabled our app. Although we acted swiftly and made changes to the app to be compliant with the new terms, Google refused to re-enable our app.”

When I ask Adams why Google didn’t re-enable his toned-down Tits & Glass app that complied with Google’s new guidelines, he says he’s still not sure. “We called Google several times to ask questions about the ban,” Adams says, “but it was clear that the rep on the phone wasn't allowed to say much to us (it was clearly scripted responses drafted by their legal team). He could neither deny nor confirm that we were banned, or address any of the specific reasons about what caused the ban and what we could do to resubmit and get our app approved.”

“But,” Adams adds, “while developers cannot share Glass apps that serve sexually explicit content, there doesn't seem to be any terms that prohibit users from shooting their own adult content.” So that’s exactly what he did next.

Shooting Porn With Glass Is... Different

Actress Andy San Dimas says there were plenty of differences shooting porn with Glass versus traditional methods. “Oh God, it was super awkward,” she laughs. “When I saw the footage of me trying to get a good shot...while being shot...” Then she trails off. “I also looked really unattractive in them.”

But professional porn can be almost acrobatic in ways; things can get rough. Wasn’t she worried about the Glasses falling off while in action? “Nope,” she says. “It was all softcore, though. A lot of times I even forgot I was wearing them because they were so light.”

Since the actors are now the cinematographers, Adams says they had to learn to be more aware of their movements. “There is no zoom function, so zooming in literally means putting your face up close in the action--and by action I mean pink parts,” he says.

“What I thought was interesting was to see how the performers reacted to using the Glass. At the end of the day, the performer wants to deliver a great show that's authentic and sexy, so wearing the Glass was distracting at first--make sure the other person is feeling good, make sure they look good, make sure the Glasses are still on and recording, make sure they are using the Glass correctly, et cetera,” he says.

Glass May Be The Future Of Your Sex Life

So Glass is easy enough for porn stars to use without training--what about normal people?

“We all viewed the videos on the Glass--[they are all] stored on the device--and everyone at the shoot commented on how enjoyable it was to actually view the porn we just shot on the Glass itself,” Adams says. “And although from the performer's POV it might have felt awkward at first to use Glass, from the viewer's POV, the videos definitely feel very intimate and personal. It's a shame that [porn apps are] banned, so other Glass users can't experience that yet.”

That begs the question: What happens when Glass gets cheaper and more democratized? What happens when people “jailbreak” it? Or if Google ever lifts its Glass porn ban?

Adams thinks it may lead to a different kind of future for couples’ apps. “[Glass] is a communication device, so I see couples sending each other flirty teaser videos and messages throughout the day. Or perhaps your favorite pornstar/live cam model will notify you when they’re online or send you quick screenshots of their live show. Adult games that encourage foreplay with couples throughout the day would also be a lot of fun on Glass.”

That would change everything for couples used to apps like Pair or flirtatious utilities like Snapchat.

“I think the future of porn will be completely immersive virtual experiences that will allow anyone to act out any fantasy,” he says. “People will be able to have sex with anyone they can imagine. Likewise, couples can use the same technology to have sex with each other anytime anywhere (having sex with the virtual you or the real you controlling the virtual version). There will be sex toy-like robots that can react, listen, and give you the exact pleasure you want to enhance these virtual experiences. These devices will understand and measure your arousal levels, body fluids, your breathing patterns, your body movements. They will also learn and adapt and react to your feedback.”

If that isn’t immersive enough, Adams says feedback is next. Imagine what devices like the Myo armband could do if hacked for pleasure. “I think the biggest innovation here will be two-way sex toys that can send and receive feedback. We already have devices like Leap Motion that can track your fingers and detect complicated finger and hand gestures,” he says. “Imagine 10 years from now where this type of technology will allow someone to give you a handjob virtually. Not only will you feel her hand squeezing and moving as if she's right there touching you, she will also receive tactile feedback that your penis is getting harder and stronger and that you're enjoying her touch.”

And there you have it folks. A virtual old-fashioned might just be in your future--if Apple and Google don’t stop them first.

[Photos courtesy of MiKandi.com]


OK, Google: Does Voice Control Really Make Sense On A Smartphone?


Google's recently acquired smartphone manufacturing business Motorola Mobility is busy unveiling a bunch of new smartphones right now, but there's already one big takeaway from the three new Droid phones that will be hitting Verizon later in 2013: You'll be talking to these phones a bunch, as well as talking with them.

That's because these phones will respond to commands beginning "OK Google Now..." in different ways. For example, "OK Google Now, call my Droid" will call your phone for you so you can locate it under that pile of dirty clothes on your bed. You'll also be able to wake up your device from a sleepy state with the "OK" command, and pull off a number of other features we don't know about yet. It basically means that like Google Glass, which is voice commanded with the "OK Glass" command, the Droids are constantly listening out for your commands.

That tech may, or may not, freak you out all by itself. It's convenient, sure. But it also is a bit...odd, don't you think? We're pretty convinced that soon enough we won't think it odd when we overhear folks chatting at their devices in a public setting (just as we got over the whole "gabbling madly to themselves" issue when headphone mics became standard phone accessories). But the whole "OK Google" thing is quickly going to get annoying. Thank goodness it wasn't "pew pew" or "go go!". Yuck.

But since voice tech is expanding, this news really makes us wonder what on Earth Apple would use as an activation phrase for its legion of iDevices when it makes Siri cleverer. "Dear Siri," or perhaps "Darling Siri"? "iPhone ON"? Will we hear "Computer: Tea, Earl Grey. Hot." as a command spoken to some future Mac? Would Apple go for something simpler, classier, less odd, like a plain ol' "Hello"? Or something tapping Jony Ive's heritage, like "Ol' chap"?

As a developer this sort of voice activation future is looming over potentially every app you create. How will you mitigate the weirdness of using a keyword activation system like this, or can you already think of clever ways to capitalize on it?
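If you want to experiment with this kind of keyword activation yourself, the string-matching half is simple; the sketch below assumes you already have transcribed text from whatever speech-recognition API your platform provides. The wake phrase, command table, and transcripts are hypothetical, and this is not how Google Now itself is implemented.

```python
# Minimal wake-phrase dispatcher over already-transcribed speech. The wake
# phrase, commands, and transcript source are hypothetical; a real app would
# feed this from a platform speech-recognition API.
WAKE_PHRASE = "ok google now"

COMMANDS = {
    "call my droid": lambda: print("Ringing your phone..."),
    "navigate home": lambda: print("Starting navigation home..."),
}

def handle(transcript):
    text = transcript.lower().strip()
    if not text.startswith(WAKE_PHRASE):
        return  # ignore everything not addressed to the assistant
    command = text[len(WAKE_PHRASE):].strip(" ,")
    action = COMMANDS.get(command)
    if action:
        action()
    else:
        print(f"Sorry, I don't know how to '{command}' yet.")

handle("OK Google Now, call my Droid")   # Ringing your phone...
handle("just talking to a friend here")  # ignored
```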

[Image: Flickr user Brennan Schnell]

What You Learn About Google When You Buy Glass


On a Friday afternoon, as a Google Glass concierge offered me champagne, I asked him if Glass had ever appeared in his dreams. I was undergoing the Glass treatment as a friend's plus-one (or, in Google parlance, +1) and I wondered just how deeply this was going to penetrate his subconscious. The concierge laughed and nodded: Once he had dreamed that he was being attacked by wild turkeys, and in the dream, he had reached to his face to take a video. The dream turned dystopian when he realized he wasn't wearing Glass.

The process of buying Glass in New York City is built to feel artisanal, down to the location, which is in a loft space across the street from Google's 14th Street office. You go to Chelsea Market to buy some organic yogurt. Then you go upstairs to pick up Glass. Everything is right with the world.

Except it doesn’t make sense. The artsy, backlit Glass sign, the fancy visitor badges, the wet bar, the kiosks made of minimalist scaffolding where you experience Glass for the first time. All of this is so far from the crazy niche experiment that is Glass. There is definitely no "beta" label anywhere. Remember when even Gmail, arguably the most robust Google product besides search, carried a Beta label for years? This is a different Google.

Beta software seems to be increasingly popular among normal users who like the exclusivity, and Google is riding the wave hard. Glass was available to about 2,000 developers at I/O last year and has been rolling out to about 8,000 consumers--whom Google has somewhat condescendingly dubbed “Explorers.” The company even launched a campaign called #IfIHadGlass where people tweeted creative ideas about how they would use Glass and were then chosen as testers. The product is supposed to be early in its development. So why is it getting such a polished marketing treatment?

As I sat on my industrial stool watching a Glass associate fit my friend’s device to his face, I could pinpoint the unease. The customer experience was supposed to be warm and exciting, but underlying it was a creeping tenor of desperation. This carefully engineered moment had been totally overwrought. All of the trendy branding and deliberate effortlessness--the "store" made of found materials--smacked of a very big bet. After all, the only thing you polish this much is a product you think people aren’t ready for: You don't want to spook your audience. Nearly everything Apple released under Steve Jobs was shrouded in secrecy because nearly everything they built was so ahead of its time that it necessitated serious user-friendliness to prevent alienation. This feels like that. The Glass Store is a front, behind which looms an immense and unstoppable vision.

But what is the vision, exactly? Google claims that “Explorers” are discovering the possibilities of Glass for the first time, but perhaps the program is more like a space for adjustment. Google, after all, is a display advertising company. When it starts selling customized, location-specific ads suggesting tampon offers three days before someone’s period--and presenting those offers on your face--people might freak out. Explorers are emissaries sent to tell the populace there is nothing to fear, and the Glass concierge is there to ease them into their new role. Once you start having panicked dreams about turkeys, you know they’ve got you.

[Image: Flickr user Ted Eytan]

Inside The Data-Driven System That Keeps The Netherlands Above Water

The white skeleton of the Maeslantkering, the massive floating water barrier protecting Rotterdam from high seas, dazzles in the sunshine. This engineering marvel is one of the largest moving structures in the world. Each arm is as large as the Eiffel Tower and weighs twice as much. If a storm surge above three meters is anticipated in the port of Rotterdam and its hinterland of 1.5 million people, the Maeslantkering automatically starts to close, flooding the two arms which move the barrier itself into place and dropping it into the waterway to form a watertight seal. The yearly test closing of the barrier, which seals the entrance to Europe’s busiest port, costs up to 30 million euros ($40 million).

The construction of the Maeslantkering was one of the high points in the Netherlands’ traditional water management strategy--build your way out of the waves--but rising water levels and spiraling costs have prompted the Dutch to try something new: Analytics.

Fifty-five percent of the Netherlands lies below sea level and the country spends 7 billion euros a year on managing its water systems: A complex web of sea gates and levees, canals and locks, drainage ditches and pumping stations, as well as the common or garden sewage and drinking water systems. That cost will rise by 1-2 billion euros by 2020.

“With this growing budget, doing nothing is no option,” says Raymond Feron, program director of Digital Delta within Rijkswaterstaat, the Dutch national water authority.

Digital Delta is a 12-month research program investigating how to integrate and analyze water data from a wide range of existing and new data sources in order to reduce the cost of future water projects by 20-30%. The project involves IBM, the Rijkswaterstaat, researchers from the University of Delft, and several other partners who filled me in over a lunch of Dutch Tosti at a meeting spot in the shadow of the Maeslantkering.

There are three ways in which leveraging data can cut costs, according to Djeevan Schiferli, a water management executive at IBM. “You can better design your future infrastructure. You get longer use of the existing infrastructure, postponing investing in new infrastructure. The third element is on the maintenance of the infrastructure, not putting 10 kilometers of a levee into maintenance but only 100 meters.” All the big infrastructure investment decisions made by Rijkswaterstaat are aided by models developed by Deltares, a Dutch water research institute. “They are sensitive to the starting conditions,” says Feron. “With better starting conditions, better quality models, up to date and more data sources, the predictions will get more accurate. The decisions about which infrastructure we need where and where we can skip an investment.”

The Dutch water system is already one of the most highly monitored in the world. “We do biological, chemical, and hydrographical information and we also do a lot of measuring from ships on the rivers and canals,” explains Feron. “We have a combination of current measurements--the water going through rivers, river discharge. We operate the major locks. During high water and low water it's critical not to leak too much water from one system to the other. Wave heights. That's important for predicting storm surges. For wind and waves we cooperate with the meteorological service. We have hydrographic and meteorological sensors on the North Sea and on the shore. We monitor shipping.”

That’s just the Rijkswaterstaat, but most of the Netherlands’ levees and pumping systems are managed by one of the 27 local water boards. Water boards were first established in the 13th century by farmers who needed to maintain local water systems. They operate independently of other government bodies and levy their own taxes, which I grudgingly pay myself in Amsterdam. While the “Johnny-come-lately” Rijkswaterstaat, which was only established in 1798, shares data with local water boards and they with each other, it’s often done informally and offline. “So if there is an organizational change,” says Feron, “the system doesn't work anymore. It depends on people and if they know each other.” Neighboring water boards may end up working against each other when they both, for example, start preventative pumping of water out of their systems when rain is predicted.

Digital Delta aims to change all that by establishing a central registry of data sources all available in a standardized data format. The registry will be built by IBM, but will become public property at the end of the project. The first step is connecting existing and new data sources to the central system. These include precipitation measurements, water level and water quality monitors, levee sensors, radar data, and model predictions, as well as current and maintenance data from sluices, pumping stations, locks, and dams. One business case Digital Delta will address is connecting the data from water level gauges, which measure the water level in rivers and canals, with the gauges from local water boards. They all have different and incompatible interfaces.
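
To make the interoperability problem concrete, here is a rough sketch--in Python, with invented station names, field names, and units rather than Digital Delta's actual schema--of what normalizing two incompatible gauge feeds into a single standard record might look like:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    station_id: str
    timestamp: datetime
    water_level_m: float  # meters relative to NAP, the Dutch reference datum

def from_rws_feed(raw: dict) -> Observation:
    # Imagined Rijkswaterstaat-style record: centimeters plus an ISO timestamp.
    return Observation(raw["locatie"], datetime.fromisoformat(raw["tijdstip"]),
                       raw["waterstand_cm"] / 100.0)

def from_board_feed(raw: dict) -> Observation:
    # Imagined water-board record: meters plus Unix epoch seconds.
    return Observation(raw["id"], datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
                       raw["level_m"])

readings = [
    from_rws_feed({"locatie": "HOEKVHLD", "tijdstip": "2013-07-25T10:00:00", "waterstand_cm": 87}),
    from_board_feed({"id": "delfland-12", "epoch": 1374746400, "level_m": -0.42}),
]
for r in readings:
    print(r.station_id, r.timestamp.isoformat(), f"{r.water_level_m:+.2f} m")

Once every source speaks the same format, combining feeds--or plugging them into models--stops being the expensive part of the work.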

For this and other reasons, adding new sensors is usually a hassle. Nick van der Giessen, professor of water management at Delft University, wants to experiment with new types of sensors. “I want to be able to install a sensor, scan a QR code and that's the end of it. Small companies like Alert Solutions have nice sensors but they spend 90% of their time, effort, and swearing on getting the rest of the chain set up.” The rest of the chain means getting easy, automated access to the sensor and other data sources.

“Rijkswaterstaat monitors the rivers and they go up and down the rivers twice a year with their ships and they measure depth to see if there needs to be dredging,” says IBM’s Schiferli. “One small company said ‘Listen, we can take in the depth readings of commercial shipping and give you a more accurate, more timely 3-D overview and we think we can do it cheaper.’ When we asked companies like this why are you not implementing this solution they say it costs us one third to two thirds of our budget to find the data, get access to the data, and validate the data.”

Once the data registry is established, it will become easy to connect new data sources, access them in a standard format via an API, and combine them. One of Giessen’s colleagues at the University of Delft, for example, has been using call center data on water complaints to predict sewer flooding. “How do you know whether a flood was due to rainfall or lack of maintenance?” asks Giessen. “If you start with the complaints and add inflow, you can do much more. The complaints come in transcripts. We have someone who reads the transcripts and says 'It's overflow' or one of about five categories. We have 20,000 transcripts but that's a subsample of everything we have in Rotterdam. With these 20,000 classifications you should be able to automate it and use the data from the whole of Rotterdam. And why would it not work in Utrecht?”
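
Automating that last step is a fairly standard text-classification job. Here is a minimal sketch in Python with scikit-learn, using made-up complaint texts and category names standing in for the Delft team's 20,000 hand-labeled transcripts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the hand-labeled call center transcripts.
transcripts = [
    "water is coming up through the drain in my street",
    "the gutter has been blocked for weeks and smells",
    "after last night's rain my basement flooded",
    "a pipe burst and the road is covered in water",
]
labels = ["overflow", "maintenance", "overflow", "burst_pipe"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(transcripts, labels)

print(model.predict(["heavy rain and now the sewer is overflowing again"]))

Trained on the real 20,000 classifications, the same kind of pipeline could label the rest of Rotterdam's complaints automatically--and, in principle, Utrecht's too.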

Once the data is available, it can be used by IBM’s Intelligent Operations for Water (IOW) software to build a systems view of the entire water system based on the data sources available and a semantic model. At a later stage in the project, IBM plans to utilize IOW’s analytics to enable completely new use cases. “If you look at the analytics there's descriptive--12 times a year a drain floods--and predictive analytics using machine learning--what was the combination of data streams which led to the flooding?” says Schiferli. “The third level is prescriptive: ‘If the drain is clogged and the weather forecast predicts heavy rain, we need to do the following.’” Eventually the data registry built by Digital Delta may contain not only data, but applications and models as well. “If his colleague has a simulation of the sewer system using his rain gauge data, she can decide to have that model as part of the registry.”

IOW has an application development model which includes an SDK third parties can use to build water applications such as leak detection, flood management, or water quality applications. So small companies and researchers can participate by building their own applications on top of the registry. Feron foresees one rather more unusual way in which the data could be used: to inform Dutch citizens about the risks they face. “We have such a high risk level, but there is such a safe feeling," he says. “They are really surprised that a flood risk of once in 100 years means that it can be tomorrow. We need to accept certain unsafety. Accept that once in 100 years, for example, one of these polders will flood and the people who live there know that can happen. One University of Delft project has algorithms on what happens when some river or dyke goes, and you can use that to show people more or less real-time what happens. ‘This part of Delft stays dry. This sector we evacuate.’ Visualizing that will help people to accept it. They decide for themselves what they should do. This open data and visualization and apps will be the technology that will help communicate with the public.”

The rising sea levels caused by climate change mean that the flooding risk the Netherlands has faced for hundreds of years will become an issue in other parts of the world. “This country lives below water but this is happening many places in the world,” concludes Schiferli. “New Orleans, Jakarta, Adelaide, which is dry but also has heavy rain. All the same questions. If the Dutch have got the situation under control, why are they going to the next phase?”

[Image: Flickr user Goldsardine]

NYU Builds Data-Sharing Network For Scientists--But Is It Legal?

You pay for science. Your tax dollars fund the national agencies that finance research. Yet you can't see most results of the science your dollars support--from cancer treatments to robotics--without paying the price. Journals like Science and Nature will charge you $20 or more for access to a few-page-long report. Open Science advocates like NYU developmental psychologist Karen Adolph believe scientific information should be free, like books in a library, and she’s determined to do something about it.

Labs tend to be secretive places, where researchers guard their data from competition, and seldom share full methods openly. Scientists often publish selectively--sharing their successes while hiding null results, which is a common publication bias. Scientific elitism like this is worse than unfair: It's dangerous.

Adolph says science needs to be open, and her project Databrary, a sharing platform announced by NYU this month, may free scientists to discover more powerful findings, faster--if it doesn’t get hamstrung by antiquated privacy laws first.

Here’s Why You Should Care About Open Data

Adolph is bothered by another layer of opacity in science: Researchers don't often share the raw data on which their published papers’ results are based, making them hard to reproduce. Opening up data and methods is what she hopes her new online library will do--allowing researchers to share, browse, tag, critique, and reanalyze video clips across labs.

If you see a doctor who prescribes you a pill--say, Abilify, the #2 top-selling drug last year, a mood-stabilizer--and you want to read about what the chemical does to your brain--not some slick ad-copy written by the company that sells the dope, but the peer-reviewed scientific evidence itself, you're out of luck: That'll cost you $71, plus tax.

Your health is at stake here--not to mention your hunger for information. Shouldn't that science be everyone's right to read, seeing as your tax dollars funded the work? And shouldn't the raw data and procedures the company used to prove the drug's effects be open to the scientific community at large, to reinterpret and replicate? The same patient one psychiatrist calls "cured" another might call "sedated": Why should we take labs at their word?

If the government gets on board with transparent science, data sharing could become the new normal, by law--leveling the playing field for universities, hospitals, and companies alike. Free online journals like Public Library of Science (PLoS) are increasingly attracting the work of top scientists; The Public Access Policy of the National Institutes of Health requires all NIH-funded research since 2008 to be made public a year after publication on PubMed Central, the free online database. But Adolph believes we need much more.

How Government Is Teaming Up With Scientists To Set Data Free

Open-source science has been Adolph's priority since the '90s, when she worked on DataVyu, open-source software for video labeling and data visualization. DataVyu is mostly used by developmental psychologists like Adolph who study how infants learn--but in principle it can be used by anybody analyzing video, for whatever purpose.

"Everything we're doing is open," says Adolph. "Every line of code is on GitHub... All our administrative documents, operating procedures, everything is up there with all its bells and whistles, all it's pimples and blemishes... So we are really moving forward with the intent that: Open Science, open sharing, open source, it's all just open."

When the Obama administration decided they wanted to invest public money in data-sharing, to make scientific research faster and more efficient, they approached Adolph for her expertise in open science. Adolph organized a conference in 2011 of 35 behavioral researchers, computer scientists, and library scientists to come up with a way to share video data, while protecting subject privacy. To raise funding, she invited representatives from every federal agency she knew.

"There's a whole list of worries people have, and many of them got raised at that workshop," Adolph says. "Will I get credited? What if I'm not done using my data?... What if people find things wrong in my data, and I'm sort of outed? But the people at the NIH kept telling us: It's going to happen. So the choice is: Researchers can figure out how to do it, or it will happen through the government. But one way or another, people are going to have to figure out how to share data."

The result of the two-day National Science Foundation workshop was a team headed by Adolph, along with Rick Gilmore, an associate professor at Penn State who studies vision and brain development, and David Millman, NYU's Director of Digital Library Technology Services.

The NSF and NIH awarded Adolph grants for the project: $2,443,500 and $786,677. These federal funds are more transparent than much of the science process, because they're tax dollars: We can see where our money goes--why not what comes out of it?

YouTube For Scientists: The Rawest Data Sharing

"Raw data" may bring to mind a spreadsheet of rows and columns--but that's not the kind Adolph wants. Sure, Databrary can deal with "flat" numbers, but what she's after first is data in its rawest form: videos, labeled with nothing but the age and sex of subjects.

Data in a spreadsheet, she explains, is only useful insofar as you know what the labels mean, and are interested in the same question as the person who built the database. But video shows reality and lets you ask whatever question you want.

Video has been standard in developmental science for a century. From early pioneers like Yale's Arnold Gesell, who tracked babies from womb to walking, and Myrtle McGraw's 1938 video Growth: A Study of Johnny & Jimmy, to MIT's Deb Roy, who filmed the first 90,000 hours of his son's life to analyze how he learned language, studies of kids have begun with movies. Since babies can't talk, Adolph explains, studying them is kind of like studying animals: You have to infer what they're attending to, thinking, or feeling from what they do: what they look at, who they move toward or away from. "Looking time," as a result, is one of the main variables in developmental psych: How long did a baby look at "Display A" versus "Display B," at his mother versus an experimenter, for example. Videos are used to study how children learn walking and coordination (Adolph's lab's topic), as well as language, social attachment, and self-control. In the "marshmallow test," one famous example, videos showed that a kid's ability to delay eating a treat often predicts academic success later in high school and college.

I'll Name Your Data However I Want To

Databrary's name is intentionally broad, to include not just video, but all kinds of data streams, from physiological measurements like brain scans or blood tests, to spreadsheets or questionnaire data--IQ, personality or mental health check-lists to diagnose psych patients, for example--or even transcripts of talk or text from media. Data-sharing efforts for brain-scans, like OpenfMRI, HumanConnectome.org, and Neuroshare have influenced the design of Databrary.

"What I'm saying is: By opening the data up, and allowing transparency, the field can police itself," Adolph says. "We'll have a better basis for deciding what's good science. So we as the builders, or the developers of this repository, aren't going to decide what's good science. We're just going to open up the science and allow the community to decide where the promising areas of growth really are."

Databrary videos are meant to be categorized democratically--"bottom-up" rather than "top-down." They're defined by users rather than the librarians or video creators.

The only mandatory labels for a video will be the age and sex of people in it, plus links to any papers published on the data. The "meta-data" attached to the video will define what it is "about"--i.e., what information different scientists have dug out of it. If Adolph posts videos of children crawling and walking, tagged for "falls," a language scientist might tag the same video every time the baby speaks, or one interested in social bonding might tag it for the moments when the kid approaches his mom. And pretty soon, a single video will sprout a forest of papers around it, covering a whole range of behavioral research.
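
As a toy illustration--this is not Databrary's actual data model, just a sketch of the idea in Python--a shared video record might accumulate tags like this:

video = {
    "id": "vid-0001",
    "participants": [{"age_months": 14, "sex": "F"}],  # the only mandatory labels
    "papers": ["doi:10.0000/example.2013.001"],        # hypothetical linked paper
    "tags": [],
}

def add_tag(video, researcher, label, start_s, end_s):
    # Each tag marks a time span plus whatever the tagger happens to care about.
    video["tags"].append(
        {"by": researcher, "label": label, "start_s": start_s, "end_s": end_s}
    )

add_tag(video, "locomotion_lab", "fall", 12.4, 13.1)
add_tag(video, "language_lab", "utterance", 45.0, 46.2)
add_tag(video, "attachment_lab", "approaches_mother", 60.3, 71.8)

print(len(video["tags"]), "tags from", {t["by"] for t in video["tags"]})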

Databrary may become a "YouTube for Scientists" of many kinds. Neuroscientists sometimes use video to record the positioning of brain-scanning equipment. Doctors use it to record patients with movement disorders, before and after a surgery or drug intervention, or children with developmental problems. Animal researchers use video to record procedures like surgeries on rodent or monkey brains, so that other scientists can see exactly what part of the brain they tracked. Education researchers routinely use video to study classroom lessons, too, finding patterns in teacher and student behavior. With this new tool and the proper permissions from participants, all this video could become open for critique and reanalysis, and could inspire questions in young scientists.

Data-Sharing: The Times Are A-Changing

Open data is the convention in a few sciences already, because of shared technology and cost. Astronomers, for example, pool data from a small number of powerful, expensive telescopes worldwide. Particle physicists also share data, as well as earth scientists. Genomics has had an open-data policy from the beginning: Whenever a species' genome is sequenced, it's required by law to be shared in GenBank, an open repository.

"[In science], just like in any other industry, cultures can change," Adolph says. "So that's part of what we're trying to do, is to be part of this new wave--changing the culture of behavioral science to make it more open, in the way that [other sciences] have moved. I think there's still plenty of room to compete, even if we share...You may even get more citations by opening up your lab rather than by keeping it closed."

The Clinical Problem: Trading Privacy For Transparency?

Transparency in science is trickiest with medical research. This makes it harder for Databrary to help the very people who have the most to gain from open data: sick people.

Privacy is an issue in all of the videos, since subjects are identifiable by face and voice. Databrary videos won't be public, but shared with a group of authorized researchers who have signed agreements with Databrary, to keep confidential the identities of the people on the videos. People in the videos must give written permission for their videos to be shared. Kids' videos can be shared by caregivers, but medical records are a different story--more strictly regulated by the government.

In building the community of potential Databrary contributors, Adolph and her collaborators contacted around 120 behavioral scientists. Of those, only around 20 declined to participate. These were mostly clinical researchers who study children with developmental disorders like autism, Down syndrome, and cerebral palsy. Government regulations, known as HIPAA (the Health Insurance Portability and Accountability Act), particularly restrict sharing hospital records and other forms of private health information: psychiatric diagnoses, sexual histories, or medical illnesses, for example.

"The irony is: If you're a mother, and you have an autistic child, [you're] the most eager to get that data shared and let people figure it out--similar to a cancer cure," says Lisa Steiger, NYU's Community Engagement and Outreach officer for Databrary. "[Mothers of disabled kids] are the most eager to have their data shared and reused and analyzed more deeply. And yet they're the ones who are the least likely--[because] it's the most challenging to have their [HIPPA protected] data shared. We don't know that it's impossible. We just have to figure out how to navigate that."

Adolph thinks technology has made current health care privacy laws outdated. Patients still need to be protected, of course--but current laws obstruct progress needlessly.

"In this digital age, everything's changing," Adolph says. "One of the last things to change has to be people's comfort at having certain kind of private data shared. Institutional review boards [the bodies at universities, colleges, and hospitals that decide if a study can be performed]--they're lagging behind. They're from the days long before YouTube and Instagram."

"We're in a time now where I have to remind my teenage daughter every day: You better be careful what you text and what images you post of yourself, because those are digital files now, out in the wild. People are much more comfortable sharing videos and pictures of themselves. Most basic research is pretty harmless."

There's an irony in a government that requires extreme privacy protections in hospitals while spying on its own citizens through the NSA. In any case, videos can be authorized for viewing by other parties.

"Obviously, the restrictions were put in place with good intent," says Dylan Simon, the software developer of Databrary. "These are vulnerable populations, and you don't want people taking advantage of them. But I don't think they were put in place with the current technological landscape, where there are cameras everywhere, in mind. They were put in place so that dangerous people couldn't find [patients'] addresses and go and stalk them and take advantage of them, not so that researchers couldn't do science."

[Image: Flickr user Eric Fischer]


Former BitTorrent Engineer Thinks He Can Fix Your Wi-Fi--For Good

There are some user experience problems that are so glaringly obvious, so ingrained into our daily lives, that we don’t notice them until someone points them out. Connectivity is one of these problems. Yes, we can access the Internet in more places than ever before, but the process of doing so hasn’t changed much since we ditched dial-up modems a decade ago: you turn the Wi-Fi on and off, enter passwords, pair devices, search for a 3G/4G signal, and finally wait for IP addresses to be assigned.

This process sucks--but we've come to accept it as ineluctable. Why don't our devices just choose the best available connection for us and automatically connect to it? Two entrepreneurs--one an engineer who worked on transport mechanisms at Internet2 and BitTorrent, another a VOIP pioneer who founded the first telco to be interoperable with Skype--think they can fix it, but first they’ll need to get consumers and telephone companies to think differently about how we connect. No tall order.

“Routing today is in even worse shape than congestion control was a few years ago,” says Stanislav Shalunov, cofounder and CTO of Open Garden, the company trying to tackle this problem. “I have at home cable, AT&T and Verizon, but any given device must access one or the other or the third, and if one of these connections fails, I have to decide to use a different connection, and the process is pretty painful. If my Verizon phone stops working there is just no way for me to get it to use AT&T's phone’s connection. I can put it on Wi-Fi and use Comcast but not AT&T. This lack of transparency and the need to do things manually is stone age.”

Open Garden’s first product is a piece of software for Mac, Windows, and Android that detects other devices with the software installed in range, connects them via Bluetooth or ad-hoc Wi-Fi networks, and starts routing traffic between their various connections as efficiently as possible.

If anyone understands how inefficiently traffic can move across the web, it’s Shalunov. Before joining Open Garden, he worked at the Internet2 consortium, where he created LEDBAT, a congestion control mechanism that was later used by Apple and BitTorrent, among others, to make transferring large files over the Internet faster in peer-to-peer settings. LEDBAT now carries between 13% and 20% of all traffic moving across the Internet. Shalunov says that much of Open Garden’s vision was inspired by his previous work.
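
For a sense of what that earlier work actually does: LEDBAT is a delay-based scheme that lets bulk transfers soak up spare capacity, then back off as soon as they start adding queuing delay for everyone else. Here is a heavily simplified Python sketch of the window-update rule described in RFC 6817--illustrative constants and toy delay samples, not the BitTorrent or Apple implementation:

TARGET_S = 0.1  # target extra queuing delay in seconds (RFC 6817 caps this at 100 ms)
GAIN = 1.0

class LedbatLikeSender:
    def __init__(self, mss=1452):
        self.mss = mss
        self.cwnd = 2 * mss            # congestion window, in bytes
        self.base_delay = float("inf")

    def on_ack(self, one_way_delay_s, bytes_acked):
        # The lowest delay ever observed approximates the path's propagation delay.
        self.base_delay = min(self.base_delay, one_way_delay_s)
        queuing_delay = one_way_delay_s - self.base_delay
        off_target = (TARGET_S - queuing_delay) / TARGET_S
        # Grow while under the target, shrink once our own traffic builds a queue.
        self.cwnd += GAIN * off_target * bytes_acked * self.mss / self.cwnd
        self.cwnd = max(self.cwnd, self.mss)

sender = LedbatLikeSender()
for delay in [0.040, 0.042, 0.090, 0.150, 0.160]:
    sender.on_ack(delay, bytes_acked=1452)
    print(f"one-way delay {delay * 1000:.0f} ms -> cwnd {sender.cwnd:.0f} bytes")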

“Many of the ideas in Open Garden are ideas from the peer-to-peer world applied to physical connectivity. In the normal peer-to-peer applications, devices establish a bunch of new connections between each other. These are logical connections. They're TCP or LEDBAT connections. They're not new physical connections. At Open Garden, we take that idea and we use this same set of ideas to establish new physical connections.”

Once it establishes these physical connections, the software then inspects the traffic on each device, recognizes which protocol it's running on, and redirects it through the network best equipped to handle the traffic. Shalunov is understandably coy about how the software figures out how to route this traffic, but the upshot is that even multiple files requested by the same website (say, CSS files from Facebook) might be routed through different network connections before being sent back to the device the request originated from. It does all of this without needing the end user to intervene.
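
Open Garden has not published its routing logic, so the following Python fragment is purely a hypothetical illustration of the general idea--score every reachable link for each flow and send the traffic over whichever one currently looks best--rather than the company's actual algorithm:

links = [
    {"name": "home-wifi",   "latency_ms": 18, "mbps": 40, "metered": False, "up": True},
    {"name": "verizon-lte", "latency_ms": 55, "mbps": 25, "metered": True,  "up": True},
    {"name": "friend-wifi", "latency_ms": 30, "mbps": 10, "metered": False, "up": False},
]

def score(link, flow):
    if not link["up"]:
        return float("-inf")
    s = link["mbps"] - link["latency_ms"] / 10
    if flow == "interactive":  # e.g., a CSS fetch: latency matters most
        s -= link["latency_ms"]
    if link["metered"]:        # prefer unmetered links when there is a choice
        s -= 20
    return s

def pick_link(flow):
    return max(links, key=lambda l: score(l, flow))["name"]

print(pick_link("interactive"))  # home-wifi
print(pick_link("bulk"))         # home-wifi, falling back to LTE if it drops

The hard part, of course, is doing this continuously and invisibly, so that a dying link is abandoned before the user ever notices.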

“We remove the whole discovery and pairing process you normally have, so when devices have the app installed, they just interconnect seamlessly and then you can start to share your mobile Internet,” says Micha Benoliel, Open Garden’s cofounder and CEO.

Benoliel wants to take the idea further than just making it easier to access your own Internet connections, however. Open Garden doesn't just work between your devices. It works whenever people running the company's applications are in proximity to one another.

“Let's say you're in a space and one of your friends has an access to Wi-Fi you don't. Then you're going to hop onto your friend's device and connect to that Wi-Fi through your friend's device,” says Benoliel. In this way, Open Garden operates more like a mesh network, or as Benoliel puts it, “an abstraction layer” on top of today’s complicated networking and routing setups that just works for consumers who want to be connected, no matter who or where they are.

This idea--that we shouldn’t view the Internet in terms of multiple connections but as one interface that we connect to seamlessly--is wildly subversive in a world where consumers pay through the nose for individual data plans that increasingly include usage caps. So what if their software allows users to get around tethering restrictions and share their bandwidth between devices to avoid hitting caps? Shalunov is openly unapologetic.

“We want to make good connectivity. If some carrier has a broken business model, that's largely unfortunate for them,” he explains. “Good networks actually want to sell you more bytes. You'll be amazed, but they want you to buy more of their product.”

[Image: Flickr user Charles Lam]

Watch This $200 3-D-Printed Robot Crack Your iPhone

You might trust your phone’s four-digit PIN to keep an Apple picker from cracking your precious smartphone, but if they’ve got $200 to blow on a 3-D-printed machine, the Robotic Reconfigurable Button Basher (R2B2) can bust your phone wide open.

The R2B2 isn’t fancy: It cracks codes through sheer brute-force determination, but it works with buttons, touch screens, or pattern-tracing codes. It will punch in a code per second, exhaustively cracking an Android four-digit PIN within 20 hours, but “times for other devices vary depending on lockout policies and related defenses.”

R2B2’s inventors, security researchers Justin Engler and Paul Vines, developed the machine to prove the “nobody’s going to try all 10,000 combinations” argument wrong. They even did it for under $200 using a few servomotors, an Arduino chip, 3-D-printed parts from a desktop Makerbot, and a $5 webcam that tracks whether the code’s been cracked. Its open-source software can be used on Mac or PC and controlled via USB.

Not all phones are as susceptible to the R2B2’s repetitive attacks--iOS, for example, increases the time between PIN attempts after each wrong guess--but Android’s factory settings institute just one 30-second delay after every five wrong tries, meaning the R2B2 gets through roughly five guesses every 35 seconds. At that rate it can find the right PIN within 19 hours and 24 minutes, according to Forbes’ calculations.
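
Those figures check out on the back of an envelope. Assuming one guess per second and Android's stock 30-second lockout after every five wrong attempts, both the 19-hour figure and the roughly 80 extra days a six-digit PIN adds (mentioned below) fall out of the same arithmetic:

def worst_case_hours(pin_digits, guess_s=1, lockout_s=30, tries_per_lockout=5):
    guesses = 10 ** pin_digits
    lockouts = guesses // tries_per_lockout
    return (guesses * guess_s + lockouts * lockout_s) / 3600

print(f"{worst_case_hours(4):.1f} hours for a 4-digit PIN")      # ~19.4 hours
print(f"{worst_case_hours(6) / 24:.0f} days for a 6-digit PIN")  # ~81 days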

Engler and Vines will release the part blueprints when they debut the R2B2 at next month’s Def Con, but the first demo will take place at the Black Hat USA 2013 security conference in Vegas at the end of the month. Debuting alongside R2B2 will be its sister device, the Capacitive Cartesian Coordinate Bruteforcing Overlay (C3BO). Unlike the pad-tapping R2B2, the C3BO electronically stimulates touchscreens, which can work faster than the R2B2 in some circumstances.

Engler and Vines plan on improving the robot to crack non-digital PIN devices such as ATMs and safes, all in the name of increased security. Their point is that putting just a little more thought into how we secure our devices can help. Thieves might willingly take 20 hours to crack a CEO’s phone for sensitive emails, but even ramping up from a four-digit to a six-digit PIN adds as much as 80 days to the R2B2’s cracking time.

Three Reasons Samsung's Developer Conference Is A Good Idea, One It's Not

From October 27th to 29th this year, in the halls and corridors of the Westin St. Francis Hotel, the air will have a decidedly Samsungy feel as the company's first Developer Conference unfurls across the Galaxy. Or at least San Francisco. It's a great idea. Ish.

Google, of course, has a developers conference of its own--Google I/O. Apple, famously, has its own developers conference, WWDC, which is one of the few marquee events that Apple engages in...having pulled out of many other industry affairs. Given that the company makes so much cash from farming smartphones and tablets, which basically feed on apps, this is a great idea. It also explains why Apple often uses the event to launch devices, and we can imagine that Samsung will pull off the same sort of stunt (with a smartwatch, perhaps).

But will the Samsung event achieve the same sort of value among its developers as Google's and Apple's does? Let's use Samsung's own promotional text to illustrate the "yes" and "no" of this question:

Engage with industry leaders (Yup!)

One of the great things about getting a huge group of developers together with the executives and experts who put together the devices they program is that a lot of information sharing happens. When you add in discussions with developers who've made a success of developing for Samsung devices, the mix gets even better.

What's on the cards at the Samsung event is a lot of Samsung technical folk revealing best practice for writing apps for their devices, and great examples from developers who've squeezed great performance from the devices and perhaps successfully monetized them. Considering this is Samsung we're talking about here, a firm that makes everything from washing machines to smart TVs, the conference may also be an opportunity for Samsung's "industry leaders" to get developers enthused about writing code for devices other than phones and tablets.

Equally important is that the same "industry leaders" will get perhaps their first exposure to Samsung's developer community en masse. Pet peeves will probably be aired, important questions asked and praise given. The hope is that information going this way will lead to better Samsung devices in the future.

This definitely works for Apple, and indeed much of the more interesting news that comes out of WWDC isn't from its splashy headline-grabbing keynote speeches (where industry figures definitely inspire the crowd) but out of the developer sessions and meetings through the week.

Collaborate with fellow developers (Good)

It goes without saying, though of course Samsung had to, that collaboration and communication among fellow developers is a great thing. Samsung's got its own particular flavor of Android and its own tools. And while you can of course develop your own tricks for working with both Google's and Samsung's idiosyncrasies, it's often better to chat about these things with people exposed to the same problems. Their solutions may be better than yours.

Furthermore, putting a lot of developers in one place could create a sense of camaraderie and healthy competition, and perhaps even prompt some teams to form new collaborations or even businesses. It might even improve the quality of some of the million or so apps on the Google Play store.

Learn about Samsung tools and SDKs (Good)

Some of Samsung's systems are designed to work exclusively with one another, such as its Galaxy phones and its smart TVs. Learning in detail about Samsung's tools for coding on platforms such as these may result in better apps for the consumer, and it may also give developers a glimpse into Samsung's future plans for device integration. Similarly for dedicated Samsung hardware like the S Pen, learning from Samsung's own teams about the ins and outs of the systems has to be a positive thing.

Samsung's Tools and SDKs (Potentially Terrible)

Samsung, of course, doesn't sell consumers its smartphones and tablets with unadorned Android installed on them, and instead layers the phones beneath its own interpretation of Google's operating system and loads them up with dedicated Samsung apps that only work with Samsung hardware, utilizing Samsung-only APIs. You know that fragmentation issue that keeps being thrown around about Android? Yup, the Samsung developer event may actually be at risk of exacerbating it. Imagine if Microsoft called a conference for folks to only write websites that worked with the non-standard systems embedded in Internet Explorer...the web as a whole would suffer.

Samsung already wields a disproportionate sway over the Android world, and the conference could be considered a way of increasing this influence. That's not necessarily in the best interests of the Android dev community as a whole, nor the consumers who are ultimately its customers. It's no coincidence that Google has recently begun to sell a "pure" Google Edition of Samsung's flagship Galaxy S4 phone...something that developers attending Samsung's event may want to remember.

[Image: Flickr user Samsung USA]

Google’s Chromecast Shows The World Isn’t Ready For Truly Smart TV

Today Google unveiled the Chromecast, a USB stick-sized HDMI dongle that runs a modified version of Chrome OS and plugs into your TV, allowing you to mirror content from your computer, tablet, or smartphone right to the big screen in your living room. As Google describes the device on its Chrome blog:

To help make it easy to bring your favorite online entertainment to the biggest screen in your house--the TV--we’re introducing Chromecast. Chromecast is a small and affordable ($35) device that you simply plug in to your high-definition (HD) TV and it allows you to use your phone, tablet or laptop to "cast" online content to your TV screen. It works with Netflix, YouTube, Google Play Movies & TV, and Google Play Music, with more apps like Pandora coming soon. With Chromecast, we wanted to create an easy solution that works for everyone, for every TV in the house.

It’s easy to compare the Chromecast to the Apple TV, because Apple’s device allows you to stream content (called “AirPlay Mirroring”) from any Mac, iPhone, iPod touch, or iPad right to any television your Apple TV is plugged into. Arguably, the Chromecast looks nicer because it is much smaller (you wouldn’t even see it if your TV’s HDMI ports are hidden in the back, as most are), but the Apple TV still offers more features since it’s got an actual UI-based OS and allows you to shop the iTunes Store with no other device needed.

However, what’s really interesting about the Chromecast is not that Google is again stepping into the ring with Apple, it’s that it seems that now Google is as aware as Apple has always been that the mythical smart television is still years off. That’s because, as I’ve written about before, no one can quite define what a smart television should do--even tech giants Google and Apple. You can tell by the reductiveness of the device that this thing was built on a hypothesis--an untried one at that.

But while Apple has always taken a very, very slow-and-steady approach to smart televisions, Google previously jumped in head first without clearly defining and discovering what a real smart television could offer that our other devices already couldn’t--and those “Google TVs” haven’t reached any kind of critical adoption.

The Chromecast, like the Apple TV before it, signals that Google is now aware users are currently content with the stuff we can get on our tablets and smartphones--and that sometimes they want to throw what’s on their small screens onto the big screen in their living room. Beyond that, Google, just like Apple, is probably working hard at figuring out what a real “smart television” should do, look like, and offer that smaller, cheaper media streamers like the Apple TV and Chromecast, paired with content on our devices, can’t already give us.


Previous Updates


As WebTV Gets Deprecated, A Prayer For Smart TV

July 18, 2013

What on earth makes a “smart” TV? It’s worth asking that question this week, on the heels of the deprecation of WebTV. Known as “MSN TV” after Microsoft bought it in 1997, the platform--which basically allowed you to surf the web on your television--will be shutting down come September 30th. Apparently, a browser does not make a TV “smart,” or at least, not smart enough.

So what does? Ask someone who isn’t a techie to define “smart TV” and they’ll probably say something like, “Oh, it’s like a TV and an iPhone. You can watch shows and surf the web at the same time... Right?” But ask someone who is in the tech industry what a smart TV is and they’ll probably give you the exact same answer (although they may add, “It runs apps too”). Rarely are Luddites and turbo-nerds on the same page about this sort of thing, but in the realm of television, the future is so undefined that it’s anybody’s guess.

And that’s why the current crop of “smart televisions” haven’t caught on yet--and perhaps also why Apple, the company most believe will be able to take the product mainstream, has yet to get into the market. A product needs to be clearly defined before it can be engineered--and marketed--effectively.

But what is a “smart TV” really? Is it a TV with a media streaming box like an Apple TV or a Roku built in? Is it a TV with an app store? Is it the central control room of your house that lets you video conference with people and control the lighting and temperature and oven in your kitchen? Is it all of these things? Or maybe the browser is the key--and the problem is a dearth of new, usable interfaces for it?

The causes of WebTV’s death are myriad, as Brad Hill, WebTV’s first official evangelist, writes for Engadget:

Assessing the demise of WebTV is probably unnecessary -- every proposed reason for Microsoft's decision has some truth. Computers have become household appliances. (Though still not easy or desirable for many people.) The long-sought internet / TV convergence is happening in new ways, most of them specialized to deliver TV-like content (not email). Mobile devices -- that's the real hammer to WebTV, I think. When the iPad was introduced, and was voraciously adopted by seniors, the tablet paradigm provided a new on-ramp to an internet experience. Touching an app icon is a vertical action, not unlike changing channels on a TV.

Brad Hill’s list of causes of WebTV’s death should be a cautionary tale to Samsung, Panasonic, Sony, and others who market current “smart” televisions. Many of these are little more than standard TVs with slick UIs, a web browser, and a very limited app store.

But as the death of WebTV shows, in order for the smart television to invade the living room, it must offer more than the sum of the features our other devices can give us. And right now the Internet is much easier to browse on a tablet or laptop than on a smart TV. There are more apps for the dying BlackBerry platform than for even the best smart television. Even video content offerings--something a smart TV should excel at--are just as good on tablets via all the streaming apps and video stores available to consumers. And don’t even get me started on remotes. No television can be considered smart if the remote has more than five buttons you need to press to easily navigate all of its offerings.

WebTV died because our other devices allowed us to access the same features more easily and comfortably. This should be a warning to all current “smart TV” makers. Your product won’t reach the critical tipping point of mass adoption until the smart television allows us to take advantage of the features our other devices offer, in an easier-to-use and more attractive package. That includes content accessibility, killer UIs, remote interfaces, and unique features that can only be done through a television. And for that to happen, what a “smart television” is and does needs to be clearly defined.


Could Apple’s “Magic Wand” Be The Next Universal Remote?

July 10, 2013

Many of us probably take for granted that we can pick up an iPhone, iPad, or MacBook and begin using it right away. But for millions of people around the globe--those with vision or hearing impairments, or with physical and learning disabilities--computers and other technology have always come with limitations. As computers and, indeed, now smartphones move from luxuries to necessities, people with conditions that don’t allow them to operate the devices as easily as others can find themselves at a disadvantage.

Thankfully Apple, the number one consumer technology company in the world, has a deep history of providing cutting-edge assistive technology features built into its hardware and software.

But assistive technology is a lot easier to enable once and forget about on personal devices like laptops and smartphones, which typically only have one user. After all, a person who is hard of sight can simply set an iPhone (or have someone set it for them) to the desired accessibility settings once and get on with using it. But communal devices like televisions often have multiple users, and each one might have a different assistive technology need, which means accessibility settings may have to be changed multiple times a day depending on who is using the television--and that may be hard for a user to do depending on their situation.

That’s where Apple’s patent for a “magic wand” remote control comes in. From the patent filing:

In response to detecting a thumbprint or fingerprint, wand or the electronic device may compare the detected print with a library of known prints to authenticate or log-in the user associated with the print.

In response to identifying the user, the electronic device may load content specific to the identified user (e.g., a user profile, or access to the user’s recordings), or provide the user with access to restricted content (e.g., content restricted by parental control options).

As I’ve written about in the past, one of the biggest problems with smart TVs is that many “smart” televisions on the market still cling to an outdated, 20th-century method of input: the 60-plus-button remote control. The remote control for television is an area ripe for innovation--and needs to be revolutionized if any TV can truly be called “smart.” The immediate advantages of a “magic wand” remote described in the patent are myriad, some of them obvious. As Christian Zibreg writes for iDownloadBlog:

If Apple could authenticate users who simply hold a magic wand or an iPhone 5S (rumored to integrate a fingerprint sensor underneath the Home button) in their hand, the solution could make parental and media permission controls effortless and secure while allowing for multi-user scenarios.

But what perhaps is not so obvious a use of this “magic wand” remote is that it would make accessibility on communal television so much easier for those that need it.

Such a device would be able to instantly recognize the accessibility settings any user needed simply by touch. A person hard of sight could pick up the remote and immediately see the text size and contrast increased in their on-screen channel guide. A person hard of hearing could pick up the remote and see subtitles immediately activated. Even a person with motor control difficulties, such as those who have lost dexterity in their fingers, could pick up the remote and the smart TV would know to increase the size of gesture zones when the wand is waved. In this situation, a user could swing the remote left or right to move forward or backwards through the channels. The wide swing zones automatically activated based on this user’s need would free them from having to make the specific, narrow-area presses or taps that a traditional remote control, or even a touch screen remote (like an iPhone), requires.

Of course, the “magic wand” patent is just that--a patent. It doesn’t mean it will ever be an actual product. But if it comes to fruition, it would be a kick in the pants the traditional remote control needs and--more importantly--allow for much easier accessibility option activations that communal devices like televisions desperately require.

[Developers interested in making their apps accessible for all should check out Apple’s Accessibility in iOS guidelines.]


The TV Channel Guide Is Broken. Can Netflix Max Fix It?

July 8, 2013

As a child of the early '80s, I didn’t find it too hard to navigate what to watch on TV. We had five channels and you turned the dial to switch between them. By the end of the '80s we had cable, with a whopping 30 channels, and you keyed in digits on the cable box’s numeric keypad to flip through channels to see what was on. By the mid-'90s the first on-screen channel guides appeared. This was handy because it let me see what was playing on my 90+ channels; all the programs were displayed on a linear grid. The early 2000s brought TiVo and the first guides you could enter search queries into. Amazing. And since then, well, things haven’t changed much.

And for smart TVs that’s going to be a huge problem.

Because in 10 to 15 years live TV and scheduled programming will exist for two things only: news and sports. Everything else will be on-demand. The new episode of the latest hit sitcom will no longer “air” every Thursday at 8 p.m. Instead it will be made available at a certain time and viewers can then choose whenever to watch it.

Not only will this on-demand programming for new episodes mean a traditional linear program guide is no longer needed, but when you combine all the on-demand currently running TV seasons (that is, “new releases”) with all the other on-demand content smart TVs will offer (100 years' worth of movies and TV shows), trying to navigate what’s available to watch via a traditional programming guide, or even a more modern UI like the ones you find on the Apple TV or Roku, could quickly become pointless. There’s just too much content to browse. You could flip through an alphabetized list of television shows or images of movie posters for weeks and not even skim 1% of all the content that will be available.

So how will content be found and searched on future smart TVs? It has to be in a better way than is done now because, just as search is the most important function for user interaction on the web, discovery will become the most critical aspect of user interaction on a smart TV.

That’s where I think Netflix has an interesting thing going for it with Netflix Max. Currently only on the PS3, but rolling out to other Netflix platforms soon, Netflix Max is a new interactive discovery tool in the guise of mini-games: Viewers play them to find what to watch next, based on their answers in the games and on their Netflix history, such as past viewing habits and the ratings they’ve given to what they’ve watched. As Yahoo’s Jason Gilbert explains:

When you click in to play Max, you’ll be served a random game which will terminate in a recommendation from Netflix’s famous learning software. At the E3 Gaming conference in Los Angeles earlier this month, I got a chance to play around with Max for about 30 minutes, sampling three of the initial games that will ship with Max. There was Mood Ring, which asks you which celebrity, or genre, or oddly specific Netflix category you prefer, generally offering you two disparate choices to find out what you are in the mood for; the Rating Game, which lets you rate a number of movies between one and five stars, and then spits out a title it thinks you will like based on those ratings; and an option that was like the "I'm Feeling Lucky" button in Google, which auto-plays a title based on your rating and viewing history.

For my taste, Netflix Max is a little annoying. I like the discovery aspect, but the games and Max’s voice are a bit too cutesie-wootsie. However, Netflix is to be commended because it shows that the company knows discovery will be the future of smart TVs and that they are aware that current programming guides--and even their own tiled UIs--are coming to the end of their useful shelf lives. Are mini-games the future of content discovery on smart TVs? Probably not. But creative and easy ways to put new content in front of users definitely are.


How Smart Remotes Could Keep TVs Dumb

June 24, 2013

Ask people what device they use most often with their TV, and the answer will likely be “my remote control.” Indeed, remotes have been synonymous with home televisions since the 1980s. However, they also seem to be the one device that could be holding back engineers tasked with creating the television of the 21st century from finding the best ways to interact with TVs of the future.

Engineers and design experts working on creating the first true smart TV need to get the notion of the remote control out of their mind. The remote control is a multi-buttoned monster of a relic that has had its day. It’s clunky, confusing, and 90% of the buttons on it are never pressed.

The remote control was good for its time, but trying to build a “remote control of the future” based on conceptions of a 30-year-old device won’t lead anywhere. Think of it this way: In late 2006, when the rumors were pretty strong that Apple was set to unveil an iPhone, many people thought the device might resemble an iPod but also have the functionality built into the scroll wheel to make phone calls. Pundits shouted that the rotary would once again rise up and take back the crown from the T9 pad. Other people imagined an advanced Palm Pilot-type device with a stylus.

But neither of those things happened. What happened was Apple started from scratch--as if they had never seen a phone before--and invented their own. And it ended up changing the computing world.

That’s what the handheld device (indeed, if there even is one) is going to have to be like in order to call a smart TV “smart.”

Writing for 9to5Mac, Dan DeSilva recently praised Logitech for its newest Harmony Ultimate Hub “appcessory” that turns any smartphone into an ultimate remote:

“For years, Harmony has been one of the most respected brands in remote technology. It seems like the Ultimate Hub is a move in the right direction making this technology affordable for everyone. The Ultimate Hub will be available in the U.S. and Europe in August 2013.”

While he’s right that Logitech makes nice products and that it’s always a good thing when technology becomes cheaper because it speeds adoption, let’s stop praising companies for merely adapting old technology to fit slightly new standards, because a total rethinking is needed for the way a user will interact with a smart TV--and it’s not the remote in any traditional sense.

So what’s the answer? Voice seems obvious. But as Tom Morgan writes for ExpertReviews:

“In theory, voice and motion control are ideal ways to interact with technology, but not in their current forms. Currently, voice-controlled TVs have a pre-programmed list of commands that must be uttered exactly in order to register a match. If the company hasn’t programmed an alternative phrasing, you’re limited to a single statement to perform simple actions that takes a mere button press on a traditional remote control. The limited degree of recognition accuracy also means that unless you speak with a BBC-trained English accent, there’s a good chance your command won’t get recognized even if you get the wording correct.”

Could voice control be the “remote” of the 21st century? Sure. But as Morgan points out, the tech isn’t there . . . yet (but if I had to place my bets, Google will get there before Apple).

So what about gestures? My problem with gestures is that you run into the gorilla-arm syndrome. “Gorilla-arm syndrome” describes why touch screens don’t work well on vertical interfaces (like an iMac). Though the tech might be there, human anatomy still overpowers technological innovation. The fact is, we get tired holding our arms out in front of us (especially while sitting) and waving them around. It’s why we use our iPads in our laps and our iPhones held in our hands and don’t hang them on a wall like a painting.

But let’s say we could get around gorilla-arm syndrome. Current gesture tech still has many flaws to overcome. As David Katzmaier writes in his review of Samsung’s Smart Interaction control box for televisions:

“The problem was, despite excellent lighting, my attempts to activate gesture control were often ignored and I ended up waving foolishly at the TV. When it did work, navigation was inexact and frustrating--think of a coarse version of a Wii-mote--and after a minute or so of it, my arm became tired. I guess that means gesture control is a good workout.

My fist-to-click didn't register as often as it should have and I ended up flapping my hand open and closed repeatedly in an attempt to "click" an item on the screen. At this point, I seriously considered using my fist to do something else to the TV screen.”

So what’s the best way to interface with the smart TV of the future? I don’t know, which is why I’m tracking this story. If you’ve seen or are working on something that might revolutionize the way we interact with our TVs, please tweet me @michaelgrothaus. The world’s first true smart TV could have all the content deals it wants and the slickest UI ever designed, but if it doesn’t offer a novel, intuitive, and easy way to navigate, it could very well be more of a pain to use than today’s TVs with their average of 60-plus buttons per remote.


Apple TV Gets Smarter (By Borrowing Popular iOS Apps)

June 20, 2013

Apple doesn’t like rushing products out the door before they’re ready. However, that doesn’t mean the company is resting on its laurels. Indeed, today the company quietly rolled out the Apple TV 5.3 software update that brings more features to Apple’s set-top box.

Significantly, today’s update shows that Apple thinks the road to a true smart television might be paved with features borrowed from iOS--most notably, some of its most popular apps. Now when Apple TV owners turn on their TV, they’ll be presented with new “channels” that are essentially ports of the iOS apps WatchESPN and HBO GO. Depending on what country the user is in, they may also see new channels from Sky News, Crunchyroll, and Qello.

Is third-party content important on a smart television? Of course. But not just any third-party content. To suck users into a world where smart TVs dominate, you need to lay a trail of bread crumbs made of the best content out there, something Apple’s Eddy Cue seems to recognize; he said this in a press release announcing the new channels:

“HBO GO and WatchESPN are some of the most popular iOS apps and are sure to be huge hits on Apple TV. We continue to offer Apple TV users great new programming options, combined with access to all of the incredible content they can purchase from the iTunes Store.”

However, the catch with these new Apple TV channels is that all but one (Sky News) require an additional subscription (or you must already be a paying subscriber to that channel through your home cable plan). For people who want to truly cut the cord, it doesn’t make much financial sense to drop a $50-a-month traditional cable plan that offers hundreds of (okay, mostly unwatched) channels if every à la carte channel on a smart TV is going to cost between $4.99 and $11.99 a month.

But as Wilson Rothman writes for NBC News, that doesn’t matter--for now:

“Regardless of the limitations, the news is welcome, not just to "Game of Thrones" fans eager to relive the crushing emotional blows of the Red Wedding, but to anybody wondering about the future of Apple TV. The more content deals Apple can ink up, the better the prospects for that elusive "iTV." If Apple can't do it up big--and that means getting contracts from most or all of Hollywood's biggest content stores--it will fall short. HBO is certainly a must-have these days, at least for premium-content bragging rights.”

Still, when a truly smart TV finally takes living rooms by storm, I don’t see it being one where I need to spend $60 a month for access to 10 or fewer channels. Content is king, but for the most part we live in a 99-cent economy--as our app and song downloads clearly show--which means that, for now, the Apple TV’s à la carte economics need to improve before it can be called “smart.”

Why We’re Tracking The Evolution of The Smart TV

My ideal version of a perfect “smart TV” is a beautiful, millimeter-thin pane of crystal-clear glass that is invisible until it’s turned on. And once it is, it has access to every film, television show, and sporting event ever recorded--all through the cloud. It’s got apps and content galore. Further, it’s a two-way communication screen that lets me talk to any of my friends and colleagues, no matter what device they’re behind at the time. This perfect smart TV lets me navigate by voice and by hand gestures in the air. It’s my home assistant, able to access any of my computer files--from emails to pictures to video games--from any device I own. And because this perfect smart TV contains every kind of media I could ever want, it has only a single cable that plugs it into an outlet. No other ports are on it because they’re no longer needed. External Blu-ray players, video game consoles, and DVRs are so early-21st-century.

But all this is just a fantasy in my head, of course. A true “smart television” doesn’t exist yet--no matter what the marketing material for existing offerings may say. Apple’s kinda sorta doing it with its Apple TV; Google did their version with the Nexus Q, which quickly went nowhere; and companies like Roku, Microsoft, and Sony think they’re on their way, too.

But no one’s there yet, because no firm definition exists of what makes a smart TV, well, “smart.” Is my vision of the ideal TV “smart”? Perhaps. Then again, I’m sure I’m leaving a lot out. And that’s what this tracker is for. Here, we’ll look at the latest advances in television OSes, cloud services, and UIs suited to the living room.

Don’t be mistaken: Smart TVs are coming. It’s just that we may have to go through many equivalents of the Palm Pilot until we reach something as refined as the iPhone 5.

If you’re interested in the evolution of the smart TV, be sure to follow this tracker. Here, we’ll explore the latest hardware and software advances that will one day get the television of the 21st century right. And if you’re a developer involved in trying to get us there, get in touch with the author @michaelgrothaus to let him know what you’re up to.

[Image: Flickr user Andy Price]

Wow! Google’s Chromecast Is Amazingly Hackable

Today, Google announced their new Chromecast smart TV device and released a developer API alongside it. This isn’t new, as most set-top boxes have APIs for developers. What’s exciting about Chromecast is that unlike all of its competitors, apps on the device itself run in a local version of the Chrome browser instead of as compiled applications. This means that--if Google opens the device up to everyone (right now there’s a whitelist)--millions of web developers are going to be able to build TV experiences for their apps from day one using the same technologies that already power their sites.

Google’s documentation is so far geared towards developers who want to stream media from their sites or apps, but because the device uses a stock Chrome browser and allows you to load any HTML page (with a few restrictions), there’s no reason you couldn’t build any kind of interactive TV experience--say, a slideshow for a news article--into a desktop web page. That’s something that Apple’s AirPlay API, which allows developers to tag embedded media on their site for AirPlay on mobile Safari, can’t do.

Most importantly, the API will be easy to use for anyone who’s ever built a Chrome extension, because its discovery and messaging systems resemble the chrome.pageAction and chrome.runtime APIs. To detect when a Chromecast device is in range, you simply add a listener for a discovery event coming from the Chromecast extension and add a Chromecast button to your page. When a user clicks it, you push your second-screen application to the Chromecast, which sets up a messaging service between the mobile device and the Chromecast that looks very similar to messaging between background pages and content scripts in normal Chrome extensions. Google has built in message handling for common media functions like play, pause, and volume control, but gives developers the ability to add their own custom message types as well.
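
To make that flow concrete, here’s a minimal TypeScript sketch of what the sender side might look like. To be clear, `castApi`, `onReceiverAvailable`, `launchReceiver`, and the message namespace are placeholder names of my own invention--the real API is still behind a whitelist--but the shape follows the pattern described above: listen for a discovery event, show a Cast button, push the second-screen page, then exchange messages over the session.

```typescript
// Hypothetical sender-side sketch. `castApi`, `onReceiverAvailable`,
// `launchReceiver`, and the "urn:x-cast:com.example.slideshow" namespace are
// placeholders invented for illustration, not Google's shipping API.

type CastSession = {
  sendMessage(namespace: string, payload: unknown): void;
  addMessageListener(namespace: string, cb: (msg: unknown) => void): void;
};

declare const castApi: {
  onReceiverAvailable(cb: (available: boolean) => void): void;
  launchReceiver(appUrl: string): Promise<CastSession>;
};

const NAMESPACE = "urn:x-cast:com.example.slideshow"; // assumed custom channel

// 1. Only show the Cast button once a Chromecast is discovered on the network.
castApi.onReceiverAvailable((available) => {
  document.getElementById("cast-button")!.hidden = !available;
});

// 2. On click, push the second-screen page to the TV and open a message channel.
document.getElementById("cast-button")!.addEventListener("click", async () => {
  const session = await castApi.launchReceiver("https://example.com/tv.html");

  // Built-in media commands (play/pause/volume) would ride alongside
  // custom messages like this one.
  session.addMessageListener(NAMESPACE, (msg) => console.log("from TV:", msg));
  session.sendMessage(NAMESPACE, { type: "SHOW_SLIDE", index: 0 });
});
```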

The Chromecast is in some ways similar to the Companion Web experience that Microsoft announced last week, although it looks like Google’s device won’t allow you to pull up the web alongside live television yet, and the API isn’t built to allow multiple devices to control the screen at the same time (although there are some ways an enterprising developer could make that happen). But what Chromecast lacks in split-screen experience, it’s going to make up for in the ease with which developers will be able to build on it.

We talked to Polar developer Chris Butler last week about the challenges of building for Microsoft’s Companion Web, which isn’t so much a platform as an interesting idea about web development. Because it relies on the web developer to do all the work of passing messages between devices, Butler found it challenging to build a multi-screen app that works with Microsoft’s new Xbox One.

Google’s Chromecast API treats the TV screen as a special instance of the main app and gives developers a persistent, likely WebSockets-based mechanism with which to pass messages back and forth. This messaging system effectively handles much of the event queuing work Butler had to do by hand. Taking care of these complexities makes developers happy, because they can focus on building instead of debugging. When developers are happy, they build more apps. And platforms that have more (high-quality) apps typically win.
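
For illustration, here’s the matching (and equally hypothetical) receiver-side sketch. The page that runs on the Chromecast is just another web page reacting to messages pushed from the phone or laptop; `receiverApi` and the message format are assumptions, not documented calls.

```typescript
// Hypothetical receiver-side sketch: the page running in the Chromecast's
// local Chrome instance. `receiverApi` and the message shape are assumptions;
// the DOM work is the same code a web developer would ship on the desktop.

declare const receiverApi: {
  onMessage(
    namespace: string,
    cb: (msg: { type: string; index?: number }) => void
  ): void;
};

const NAMESPACE = "urn:x-cast:com.example.slideshow"; // must match the sender

receiverApi.onMessage(NAMESPACE, (msg) => {
  if (msg.type === "SHOW_SLIDE" && typeof msg.index === "number") {
    // Show only the requested slide; hide the rest.
    document.querySelectorAll<HTMLElement>(".slide").forEach((el, i) => {
      el.hidden = i !== msg.index;
    });
  }
});
```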

[Ed. note--This is an early look at the API without having access to the device, so if there’s something I missed or didn’t quite understand, let me know on Twitter. I’d also love to hear what you think would be fun to build on Chromecast.]
