Channel: Co.Labs

5 Ways The 49ers' Teched-Out Levi's Stadium Is Changing The Game For Fans


On Sunday, the San Francisco 49ers debuted their $1.2 billion stadium to a regular-season crowd. Along with a bridge made of solar panels that helps make the structure energy-neutral--even though it's adorned with two 48-foot-high LED screens--the stadium is outfitted with robust 4G and Wi-Fi, and a museum that includes a STEM-friendly classroom full of interactive desks. It's all topped with an app that not only shows where to find parking but lets you order food on demand, delivered directly to your seat.

This technology won't fix the NFL's bigger problems right now (nor can it take the sting out of last night's 28-20 loss to the Chicago Bears), but it might transform the way other teams interact with fans during sporting events. Other stadiums are already starting to replicate these features. The Sacramento Kings, for example, are readying an app with friend connections and multi-angle game replays for their new Entertainment and Sports Center, set to debut in 2016, according to Kings chief marketing officer Ben Gumpert.

With 25 engineers--including veterans from companies like Facebook and Yahoo--working on the software and data platforms for Levi's Stadium, the challenge was incorporating technology without letting it take over the game experience.

Here's a look at what they did this weekend, and what else they have planned.

In-Seat App

Want a beer and pizza without leaving your seat? You can with the Levi's Stadium App, which had 80,000 downloads even before the stadium opened.

Integrating ticketing and in-seat food delivery for 68,500 seats into the app, which took two and a half years to create, proved a bigger challenge than expected--though an expected average wait time of nine minutes isn't terrible.

Getting fans to choose stadium seats instead of their couches was one of the first things Levi's CEO and president Chip Bergh asked 49ers owner Jed York to do. Having been involved with the creation of several stadiums--including Gillette Stadium--Bergh says he knew the Niners had to come up with something big.

"The NFL, and many sports teams, have really wrestled with how do you get people to actually show up for games," Bergh says. "On the East Coast and on a freezing cold day, there were a lot of empty seats because it was easier to stay at home and watch it on TV."

The designers and engineers were "very focused on how they could make the fan experience inside the stadium more awesome than staying at home and watching it on TV," he says. You can check how long the line is at the hot dog stand and how long the line is to go to the bathroom, "which is really critical by about the third quarter."

After working out bugs in time for the first regular-season game, 49ers chief operating officer Al Guido says the team was careful not to let the app overpower the live game, focusing it instead on bringing fans the conveniences of watching at home.

"We've already linked tickets and parking for almost 70% of our ticket base right now, which is unheard of in this day and age from an application perspective," Guido says.

"A lot has been talked about in the NFL stadiums around the fact that the at-home experience from the television perspective is so much greater than the one in stadium," he says. "We really just wanted to enhance that play on the field and your convenience of getting to the game or ordering food and beverage."

The app's game center will also allow users to see real-time stats and scores of other games without switching in and out of external apps like Yahoo Sports or NFL.com. "When you are at home, you get whatever CBS or NBC happens to show you," Bergh says. "But here, you are going to be able to command your own replay and your own camera angle."

Guido says more on-demand features are in the works for the next one to two years, such as in-seat retail delivery and 49ers art buying. "We have over 200 original pieces of art in the building and over 500 photos. We want to allow you to scroll your phone over pieces of art, or pieces within the museum, and have that pop up as information."

The New World Of Sports Data

Waze integration data from the app will reveal parking patterns and which roads fans take to get to the stadium. On the food side, the app collects data on what food and beverages fans consume, and at what times during the game--a system Guido says went into effect at a recent San Jose Earthquakes soccer game.

"We sold a ton of curry on our first game," he says. "If you go around to any other stadium, you'd find the food and beverage providers say, 'No way, you're going to sell a bunch of hot dogs and hamburgers.' When we talk to our food and beverage provider, we can give him all that data in real time and understand who's ordering it, from where, and how much."

The data will also help in suggesting the best points of entry and exit, traffic alerts, and ticket transfers made in the app. "We know who you're transferring them [tickets] to, so that we can send you information on parking and transportation. It's really endless on both fronts, the front end to the customer, and then the back end to the team," he says.
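The real-time concessions picture Guido describes is, at bottom, aggregation over a stream of orders. Here's a minimal sketch in Python; the order records and field names are hypothetical, since the stadium's actual data model isn't public:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical order records: (item, seat section, timestamp).
orders = [
    ("curry", "115", datetime(2014, 9, 14, 13, 5)),
    ("curry", "115", datetime(2014, 9, 14, 13, 20)),
    ("hot dog", "302", datetime(2014, 9, 14, 14, 45)),
]

def sales_by_item_and_section(orders):
    """Tally what sold and from where, so concessions can restock in real time."""
    tally = defaultdict(int)
    for item, section, _ts in orders:
        tally[(item, section)] += 1
    return dict(tally)

print(sales_by_item_and_section(orders))
# {('curry', '115'): 2, ('hot dog', '302'): 1}
```

The same tally, bucketed by quarter instead of section, would answer the question of when fans order what.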

Levi's Stadium is also poised to set a new standard of API openness when it comes to fan data, which, Guido says, is necessary for other teams' stadiums to compete. "I think teams need to be committed to having platforms that are super intuitive--and not just platforms that are intuitive, but robust data warehouses."

A feature in the app called "Faithful 49" will allow users to "gain yards" by interacting with sponsors like Esurance, incentivizing fans to engage with those sponsors. It comes with lengthy terms and conditions, and it will be interesting to see how other teams address the privacy concerns.

"That's the one piece that was certainly a bigger challenge than I think maybe even the tech guys anticipated, because the sports world is so very different than the tech world with open API. The sports world is kind of closed off. Getting those partners to work with us wasn't hard, just figuring out where we wanted to go took some time," Guido says.

Wi-Fi That Works

One of the biggest problems inside stadiums is that mobile reception is spotty, leaving people feeling disconnected for hours on end. Levi's Stadium was designed with a different infrastructure than the standard antennas that shoot a cone of Wi-Fi over a section; it uses a new model called a MicroCell system.

For the first time in professional sports, Wi-Fi boxes have been placed under every 100 seats, offering five times more capacity than the average in-stadium network. Thanks to 400 miles of data cables, a fan's seat will never be more than 10 feet from a direct Wi-Fi signal, says Guido (and hundreds of geeky fans raving on Twitter).

With 40 gigabits of capacity in the infrastructure--compared to the NFL mandate of 10 gigabits--the stadium will likely still be competitive when it hosts Super Bowl 50 in 2016.

"We had to build the platform so that it wasn't just good for 2014," says Guido. The Cowboys' stadium was retrofitted with additional Wi-Fi only four years after being built and hailed as the most technologically advanced stadium. "In our mind it was good for 2044. We felt like if the infrastructure was there, the mobile platforms would just continue to get better and better, and your opportunities are endless."

"With Wi-Fi in the stadium, you allow people to use their devices they already have," NFL Commissioner Roger Goodell says. "We believe that's the best experience and we hope this will become a model for future stadiums."

The MicroCell approach is reportedly being replicated in new stadiums for the Atlanta Falcons and in Minnesota. The Kings' chief marketing officer, Ben Gumpert, says, "Connectivity and bandwidth will be crucial components of the technology in the ESC."

No stranger to sports tech, the Kings became the first team to accept bitcoin as payment, incorporated Google Glass into a game broadcast, and used Oculus Rift to give fans a virtual look at the new arena. Gumpert says they, too, are working on a robust Wi-Fi infrastructure.

A Field Of Greentech

While solar panels aren't anything new in stadiums, the Niners' panels take the form of functional bridges. It's the first of its kind in the NFL, one-upping the Philadelphia Eagles' 10-year-old signature "Go Green" campaign at Lincoln Financial Field. It's also perhaps the first power-saving design that doesn't look ridiculous--it even shades the poor souls with rooftop VIP seats.



From the solar terrace, fans can also see one of the two large LED video boards. The 48-foot-high screens, made by Daktronics, adorn the end zones. One is 200 feet wide, the other 148 feet wide, each with a 13HD pixel layout that can be used as one massive screen or sectioned into smaller windows.

And all that light runs on a carbon-neutral energy grid. The 49ers are the first professional sports team in California to achieve net-zero energy performance, and theirs is the first professional football stadium with LEED Gold certification.

Memorabilia Meets The Museum

Instead of a traditional team gallery, the 49ers' museum includes interactive content for every player who's ever been on the team, collected in a cross-searchable database. The All-Time Roster--as it's called--contains 12,000 photos and data on 1,295 players.

If you knew someone who played for the team, or someone who played with your dad in college, you can swipe your way through six degrees of separation and find multiple points of information and photos of them. "What we found in testing was, a lot of people were curious to know who else went to Illinois that played for us," 49ers museum director Jesse Lovejoy says. "We created that search-for function."
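A cross-searchable roster like the one Lovejoy describes amounts to filtering player records on arbitrary fields. A toy Python sketch--the players and schema here are made up for illustration, not the museum's actual data:

```python
# Illustrative records; the real All-Time Roster holds data on 1,295 players.
roster = [
    {"name": "Player A", "college": "Illinois", "position": "LB"},
    {"name": "Player B", "college": "Notre Dame", "position": "QB"},
    {"name": "Player C", "college": "Illinois", "position": "WR"},
]

def search(players, **criteria):
    """Return every player whose fields match all of the given criteria."""
    return [p for p in players
            if all(p.get(key) == value for key, value in criteria.items())]

print([p["name"] for p in search(roster, college="Illinois")])
# ['Player A', 'Player C']
```

Chaining such searches from player to player is what turns a flat roster into the six-degrees browsing the museum offers.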

49er alum and current VP of football affairs Keena Turner says the creation of such data files is a unique tool for preserving the history of former players like him. "It's not doing tech for tech's sake, but using imagination to learn from our perspective," he says. "We like that blend of high tech and low tech as an opportunity to recognize every player that ever played for the team. For me, it's really personal."

In the museum's Heritage Gallery, touchscreens line up with artifacts at different points along a physical timeline, adding audio and visual stories from the team's history. Keyed-in green-screen interviews with coaches are layered over old clips of plays and animated sequences.

Cortina Productions also created interactive elements for the museum's internal STEM classroom, where teachers can control students' interactive desks during a learning session. "We felt like we had a platform to use STEM to help teach kids the science, technology, engineering, and math components of sustainability, or of technology like what makes a spiral go further," Lovejoy says.

Like much of the other tech used in the stadium, the museum's challenge was using it in a way that wasn't overbearing, says Lovejoy. "I think a lot of times, you can get buried in technology. It was our goal and our job to use it in interesting and unique ways to share depth of content that you can't do on a wall. You can't do it in 30 words."

Elsewhere in the stadium, there are interactive games, lessons from players, Sony touchscreens, and broadcaster simulations that give fans a chance to be in the action--think of a Dave & Buster's dedicated to your favorite team. The aim is to boost stadium foot traffic year-round.

Though Levi's Stadium is ahead of the pack now, that could change come 2016 as stadiums engage in a neck-and-neck race to see who can attract more fans, feed them faster, and keep them in the stadium longer. What does it take to win? The best software.

Or is it hardware? "We like to say that we built the stadium as a software-driven stadium, not a hardware-driven stadium," says Guido. "The funny thing about that is, it takes hardware to build that infrastructure."


Hey @Seevl, Be My Twitter DJ


Looking for a music recommendation, but not really sure where to turn? Shoot off a message to @Seevl, the Twitter bot DJ.

As a proof-of-concept hack, the Twitter handle @Seevl is acting as a DJ bot, serving up music to anyone who asks. The bot makes recommendations based on specific artists, genres, artists from specific record labels, or similar artists.

The trigger phrase "Hey @Seevl, play something like Sting," for example, returned solo work from Eric Clapton. Any first-word greeting like "Yo," "Hey," or "Hi" will work as part of the trigger, just as long as you're polite about it.

Here are the specific examples available to use.

  • Hey @seevl, play something like *artist name*
  • Hey @seevl, play something from *record label name*
  • Hey @seevl, play some *genre name*
  • Hey @seevl, play *artist name*
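Those trigger phrases are regular enough to parse with a single regular expression. Here's a sketch of how the bot's command handling might look--Passant hasn't published his implementation, so this is an assumption about the parsing, not his code:

```python
import re

# One pattern covers all four documented triggers; the greeting word is free-form.
TRIGGER = re.compile(
    r"^\w+ @seevl,? play (?:(something like|something from|some) )?(.+)$",
    re.IGNORECASE,
)

def parse_trigger(tweet):
    """Map a tweet to a (request type, subject) pair, or None if it's not a trigger."""
    match = TRIGGER.match(tweet.strip())
    if not match:
        return None
    qualifier, subject = match.groups()
    kind = {
        "something like": "similar_artist",
        "something from": "record_label",
        "some": "genre",
        None: "artist",
    }[qualifier.lower() if qualifier else None]
    return kind, subject

print(parse_trigger("Hey @seevl, play something like Sting"))
# ('similar_artist', 'Sting')
```

The request type would then drive which Seevl API endpoint gets queried before the reply tweet is composed.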

The Twitter bot was created by Alexandre Passant, founder of music discovery platform Seevl. He doesn't get into the specifics of how he developed the hack in his blog post, but mentions using the APIs from Seevl, YouTube, and Twitter. He also calls out Twitter's card functionality as a feature enabling this type of DJ use to play inline.

This type of bot-powered recommendation isn't new to Twitter. Back in 2012, a music hackathon produced a similar type of app, Recco Bot, though it didn't integrate into Twitter's timeline as well because of API limitations. Plus, there are accounts like Magic Pics, which delivers curated pictures based on tweets you send.

Beyond music, Passant sees this Twitter functionality as something that could pit the social service against Apple's smart assistant Siri. Citing Twitter's new "buy" ability, he writes that the data is there to extend deeper into people's lives.

What about "Hey @uber, pick me up in 10 minutes", and using the Tweet geolocation plus an Uber-API integration to directly pick and bill whoever #requested a black car? Or "Please @opentable, I'd love to have sushis tonight", and get a reply with links to the top-rated places nearby, with in-tweet booking capability (via the previous buy button)?

"The tech side of the Twitter DJ hack is actually pretty simple, but that's probably the beauty of it, and the way I see the future of the web," says Passant. "Mashing up APIs with user-driven interactions to build intelligence services--not only web pages or apps. It's even better when it's combined with Internet of Things-related concepts as well."

Give @Seevl a try--though there's the danger of it being rate limited, which gives "hang the DJ" a whole new spin.

Watson Analytics Puts Big Data Crunching At Your Fingertips


IBM, one of the most deeply conservative of all tech companies, just made a massive institutional move by investing millions of dollars into a freemium, cloud-based data visualization product and predictive analytics platform.

Today, the company is announcing Watson Analytics, which is designed for use inside sales, marketing, and human resources departments. From press previews given before launch, the platform seems to be aimed squarely at current users of Oracle, SAS, and Tableau analytics products. Google also offers similar products via its Public Data Explorer and Fusion Tables.

Eric Sall of IBM Business Analytics told Co.Labs that the freemium product was created to get Watson "into the hands of as many people as possible," and that the company would charge for add-ons such as extra storage space and integration with other enterprise services. He added that the software was designed to give instant answers in a business context. While the company does not have smartphone apps planned at this time, the platform will be available to mobile users through an HTML5 interface.

It's also another signal of IBM's desire to put Watson, an intelligent computer known mainly for its Jeopardy appearances and oddly delicious food recipes, into as many industry verticals as possible. The industry climate for the computing giant is bad enough that a book called The Decline And Fall Of IBM was released earlier this year; IBM has funneled $1 billion into Watson, partnered with third-party developers to build an app ecosystem for the product, and begun aggressively positioning it within the health care industry.

For potential clients, IBM's selling point is that Watson lets anyone ask questions, in natural spoken or written language, that the software then answers based on a corpus of text it trains itself to read and categorize. A partnership with Bon Appetit magazine, for instance, automatically generates recipes based on a user's parameters; a prototype product being tested at the Cleveland Clinic lets clinicians turn electronic medical records (EMRs) into visual timelines of a patient's health.

The Watson Analytics platform, which the company characterizes as its biggest analytics announcement in a decade, is designed to automate data formatting, predictive analytics, and data visualization for employees without statistics, design, or programming backgrounds. Based on IBM's SoftLayer platform, the software package lets users upload data sets, which are automatically formatted and can then be queried either through natural-language requests or visual manipulation. Although I was not able to try the platform prior to launch, it appears to operate in a manner similar to Siri's natural-language processing of information.

Because tablets and smartphones are increasingly becoming a part of the post-BlackBerry business environment, analytics providers are racing to make sure data can be processed on iPads or Androids just as easily as on a desktop computer.

IBM says beta testers will be able to try Watson Analytics within 30 days, and that a freemium model will be available starting in November 2014. Beta tests can be accessed here.

Apple's iBeacons Are Going To Transform Much More Than Advertising


Apple's iBeacons will be a boon to retailers who want to target their in-store customers with advertisements and special deals. But while the technology has generated a lot of excitement in the retail industry, it understandably hasn't caught the imagination of consumers, who generally aren't thrilled about the new ways they'll be marketed to in the future.

Yet, as I discovered, in-store advertising is just one use of iBeacon technology. Innovative developers are applying Apple's iBeacons to improve our social lives, make our smartphones more intuitive, and save us money on electric bills.

How iBeacons Work

iBeacons are Apple's implementation of Bluetooth beacon technology. A Bluetooth beacon is simply a low-energy chip enclosed in a small plastic housing. The beacon can only send data--not receive it--and is generally used just to broadcast micro-location coordinates (at a range as small as 10 centimeters) to your iPhone.

Because iBeacons can only send data, they have no way of controlling anything on your phone. The technology relies on apps collecting the data from an iBeacon and using it to do something. In the case of retailers, a store may place an iBeacon on a specific endcap with the newest television on it. The store's app on your iPhone will then receive the location data from the iBeacon and pull up a discount coupon or more specs about the TV (hoping that doing so will prompt you to buy it).
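Concretely, the payload an iBeacon broadcasts is tiny: a 16-byte UUID identifying the deployment, two 16-bit numbers (major and minor) identifying the individual beacon, and a calibration byte giving the expected signal strength at one meter, from which the phone estimates proximity. A Python sketch of decoding that widely documented layout--the packet bytes here are fabricated for the example:

```python
import struct

def parse_ibeacon(mfg_data):
    """Parse the manufacturer-specific data field of an iBeacon advertisement.

    Layout: Apple's company ID (0x004C, little-endian), the bytes 0x02 0x15,
    a 16-byte proximity UUID, big-endian major and minor, and a signed byte
    holding the calibrated RSSI at one meter.
    """
    company, subtype, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or subtype != 0x02 or length != 0x15:
        return None  # not an iBeacon frame
    uuid = mfg_data[4:20].hex()
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return uuid, major, minor, tx_power

def estimate_distance(rssi, tx_power, path_loss_exponent=2.0):
    """Rough log-distance estimate in meters; real apps smooth this heavily."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# A fabricated frame: all-zero UUID, major=1, minor=42, -59 dBm at one meter.
frame = bytes.fromhex("4c000215") + bytes(16) + struct.pack(">HHb", 1, 42, -59)
print(parse_ibeacon(frame))
# ('00000000000000000000000000000000', 1, 42, -59)
```

A retailer's app would match the UUID, major, and minor against its own catalog (this endcap, that TV) and use the distance estimate to decide when you're close enough to be shown the coupon.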

There's a big misconception that iBeacons only work with iOS devices. In fact, an iBeacon can talk to Android and other smartphones as well. But while iBeacons can be picked up by other smartphones, Apple's technology has an iOS-only feature: Any iPhone, iPad, or iPod touch can also become an iBeacon itself. And this is how some clever developers are discovering that iBeacons are good for a lot more than just selling you stuff.

iBeacons For Dating

"Our goal is to use technology to encourage people to be more social, in real life," says Joel Ayala, cofounder of Mingleton, when I talk to him about the way he's turning Apple's retail tech into a dating service. Mingleton uses the iOS-only feature of iBeacons: Installing the app on your iPhone turns it into an iBeacon transmitter. The app then broadcasts your profile out to other Mingleton users within a 50-meter radius. When another Mingleton user taps "See Who's Nearby" in the app, your iPhones--both acting as iBeacons--ping a unique beacon identifier linked to your Mingleton profiles.

"Part of the problem is that it is not always clear who wants to socialize," says Ayala. "We have always been bemused by how little like-minded concertgoers mingle before each set begins. This is why we came up with the idea to allow people in the same space to anonymously find out who wants to meet and to make it easy for people to discover their commonalities so they are more likely to want to socialize."

Mingleton uses the Facebook Graph API to see your mutual friends and interests and decide on whether or not to make the introduction. "If-- and only if--you both express interest in mingling, we let you both know," Ayala says. "If it's a match, the two of you may now message each other and will hopefully mingle in person."

It's similar to Tinder, but much more location-specific thanks to iBeacons technology. Where Tinder uses Wi-Fi location data and GPS signals to lock your range down to a mile or so radius, iBeacons allows Mingleton to accomplish location-based discovery for individuals in the same room.

iBeacons For Automating Your Apps

Launch Here uses iBeacons to make your home more aware. The iPhone app enables users to link apps with specific places in their home.

"It started as an experiment," says Bernd Plontsch, cofounder of Aww Apps, who makes Launch Here. "Taking a look around our homes we found that most rooms and objects serve very distinct purposes. The closer you get to a specific spot in a room the more probable it gets that it is related to your current intent or context. Then, looking at our phones we realized that we already use numerous apps covering activities related to those very contexts. With Launch Here we simply link those two worlds together by helping you to launch these apps from your lock screen quickly at the right place."

If an iBeacon is placed by the couch in your living room, for example, Launch Here can guess that your intent might be to use your Apple TV, so it brings up Apple's Remote app without you having to search for it. Place an iBeacon at your desk, and when you sit down, the Launch Here app can pull up your favorite office productivity app on your iPad.

"For some people Launch Here works simply as a little daily time saver," says Plontsch. "For others it makes using apps as a whole more accessible by showing them relevant apps in the right situation without the need to actively remind them of such fitting choices."

iBeacons For Controlling Your Home

The usefulness of iBeacons in the home grows exponentially if all your devices have beacons integrated into them. If, for example, your lamps and your oven and your coffee maker all had an iBeacon inside, those devices could then guess your intention as you approach them: The lamps could turn on; the oven could begin to warm; the coffee maker could start brewing.

Integrating iBeacons into every home appliance is still a long way off, but for now there's a company called Zuli that is turning our boring, dumb devices into smart iBeacon transmitters. The company makes the Zuli Smartplug--essentially an adapter for a normal electrical outlet that contains an iBeacon. Once connected to an appliance like a lamp or a coffee maker, the plug uses its iBeacon to transmit its identity to the Zuli app, which in turn automates the specific actions you've preconfigured.

The Zuli Smartplug uses an iBeacon to make your dumb device smarter.

"When you think about the direction the smart home is headed in, we now have the ability to connect almost every device to our smartphones, but yet we still haven't solved the user experience side of it," says Sid Bhargava, founder of Zuli. "Using a connected light switch as an example, it's almost easier to just get up and turn on the light than it is to pull out your phone, unlock it, and then open the relevant app."

Zuli, if fully implemented, could adjust your lighting, temperature, and music simply as you walk into a room. It can also make a room more energy efficient by detecting when it's been unoccupied for 10 minutes and shutting down unused devices.
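That unoccupied-room logic is straightforward to sketch: remember the last time each plug's beacon detected a resident's phone in range, and cut power after ten idle minutes. A Python toy version--Zuli's actual implementation isn't public, and the plug IDs are invented:

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=10)  # the 10-minute window described above

def plugs_to_shut_off(last_seen, now):
    """Return plugs whose room has had no phone sighting for over 10 minutes."""
    return [plug for plug, seen_at in last_seen.items()
            if now - seen_at > IDLE_LIMIT]

now = datetime(2014, 9, 16, 20, 0)
last_seen = {
    "living-room-lamp": now - timedelta(minutes=3),  # still occupied
    "office-heater": now - timedelta(minutes=25),    # idle, shut it off
}
print(plugs_to_shut_off(last_seen, now))
# ['office-heater']
```

The inverse check--a phone beacon reappearing in range--is what would switch the devices back on as you walk in.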

iBeacons For Hacking

All of these implementations of iBeacons require the user to have transmitters in their location. Mingleton accomplishes its task by turning your iPhone itself into an iBeacon. Zuli does it by integrating an iBeacon into its smart plug.

But what of Launch Here? While it could interact with an iOS device acting as a beacon, it's an app that requires stationary iBeacons to be placed around the home. Thankfully there's a company called Estimote that sells a $99 development kit with three Estimote Beacons (which are iBeacon-approved). The company also sells a dev kit with 10 Sticker Beacons, smaller iBeacons of its own design that you can affix to non-stationary objects like a dog's collar.

A full-sized Estimote Beacon. Inside the beacon housing is a Bluetooth chip and small battery.

Affixing an expensive iBeacon to a dog's collar might sound a bit over the top, but what if that collar could then auto-unlock a doggie door as your dog approached it?

"Honestly there are thousands of use cases for beacons that have not yet been even discovered," says Steve Cheney, cofounder of Estimote. "Imagine if a beacon knew you were in the living room and your smart set-top box changed the content on your television based on your preferences or the video you were streaming on your iPhone before you walked in the room?"

Indeed, the imagination of developers seems to be the only limit of what iBeacons can enable outside of the retail space.

"The beauty of iBeacon is that it's up to developers and product designers on what to build. Just as the original people who conceived GPS could never have imagined an Uber arriving to your door and becoming a killer service for GPS, people can't yet fully grasp what iBeacons will enable," says Cheney. "I think the broader community is only now really grasping the types of experiences that can be built."

And those iBeacons experiences, thankfully, go much further than retail.

Tired Of Waiting For A Siri API, Developers Take Matters Into Their Own Hands


Ever since Siri's arrival three years ago, developers have been itching to work Apple's voice control into their own apps. Alas, the feature has remained stubbornly trapped inside Apple's operating system with no open API in sight. Luckily for developers, not everyone is content to let speech recognition stay locked in proprietary boxes.

Api.ai is a voice control programming interface from the folks at Speaktoit, the creators of an artificially intelligent personal assistant app. The new natural language API promises to be "the most advanced tool that allows developers to design and integrate speech interfaces into their solutions in a matter of minutes."

The API opens up the company's underlying speech recognition and voice control technology and allows it to be used by third-party developers on iOS, Android, or the web. It boasts a super-intuitive interface for defining custom voice commands tailored to each app's functionality. It's not just for smartphones, either: Api.ai is also designed to work with wearables, robots, smart home platforms, and other connected devices. The new API is clearly aiming to help enable a voice-controlled Internet of Things, much like its competitor Wit.ai.

With Api.ai, developers will be able to do things like add voice recognition to a third-party music app or let users control their smart thermostats by speaking. It could be especially useful for third-party messaging, weather, and calendar apps, since those are things people commonly use voice control for (but that functionality is lost the minute somebody switches away from Apple's default apps).
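At its core, this kind of voice-command layer maps a recognized utterance onto a developer-defined intent plus extracted parameters. A hypothetical sketch of that matching step in Python--this is not Api.ai's actual configuration format, just an illustration of the idea:

```python
import re

# Developer-defined commands: pattern -> intent name. Named groups become
# the parameters handed to the app.
COMMANDS = [
    (re.compile(r"set (?:the )?thermostat to (?P<temp>\d+)", re.I), "set_temperature"),
    (re.compile(r"play (?P<track>.+)", re.I), "play_music"),
]

def match_command(utterance):
    """Return (intent, params) for the first matching command, else None."""
    for pattern, intent in COMMANDS:
        found = pattern.search(utterance)
        if found:
            return intent, found.groupdict()
    return None

print(match_command("please set the thermostat to 72"))
# ('set_temperature', {'temp': '72'})
```

A real natural-language service generalizes this far beyond fixed patterns, but the contract with the app is the same: intent name in, structured parameters out.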

Of course, the voice control enabled by Api.ai or an API like it won't mimic Siri or Google Now 100%. For one thing, it won't have the same deep, OS-level integration enjoyed by the players that control the OS and hardware. For a while there, a project called SiriProxy had effectively hacked Siri to enable all kinds of custom voice controls in third-party apps and smart objects. It was fun while it lasted, but Apple plugged the loophole with iOS 7 and the project appears to be dead in the water. More recently, students at the University of Pennsylvania crafted a new Siri hack called Googolplex that lets users control Spotify, Venmo, Instagram, and Hue light bulbs.

Hacks like these open up Siri's functionality, but not as extensively or reliably as an official, open API would. Until that day comes, third-party services like Api.ai are going to have to cut it for developers.

Arizona State University Listens To The Most Spotify Music


Last semester, music streaming platform Spotify launched a special $4.99 student rate for its premium service. As a result, the company is able to gather some fascinating data about university students' listening habits. Today the company released a list and map, ranking university streaming volumes and outlining preferred genres at different institutions.

Out of the 40 schools that host the most Spotify Student Premium members, Arizona State University students listen to the most songs--while Virginia Polytechnic Institute And State University students listen to the fewest.

We already know that streaming music provides a lot of data insight. Looking at students' listening habits at a particular college can tell a very different story than an official prospectus.

NYU, for example, tends to listen to more slowcore and hipster playlists than the other schools. Ohio State listens to the most classical music. And the University of Colorado Boulder listened the most to playlists meant to help students focus and relax.

The data goes beyond the frequency and genre of students' listening habits. Much can be learned about a school's culture by studying the times of day students listen to music.

"We saw quite a bit of diversity in listening behavior, especially in the distinctive tracks and artists that define a school's taste," said Paul Lamere, director of developer platform for The Echo Nest at Spotify. "I was also really interested in getting insights into the sleep/wake cycles at these schools through music--it's interesting that some schools stay up late, some get up early, and others do both, burning the candle at both ends as it were."

Regardless of sleep or wake time, 4:00 p.m. is the peak listening time across all universities.

You can check out interactive pages for all the individual schools Spotify surveyed here.

Can Ebola Be Stopped By Treating It Like A Terrorist Network?


Six months after its latest resurgence, the Ebola virus shows no signs of letting up. "We desperately need new strategies adapted to this reality," said Dr. Joanne Liu, international president of Doctors Without Borders in a grim statement last week. One hope is that data, which can spread faster than disease, could give humans a technological leg up on the spread of the epidemic. The problem with this data is that it's massive and often unstructured.

Can scientists and medical professionals make sense of the mess in time for it to make a difference?

The answer may lie in data-mining techniques that were previously used by the U.S. military to track terrorists. Modus Operandi is a Florida-based defense contractor that specializes in big data analytics and semantic analysis. The company has long partnered with clients like the U.S. Marine Corps to track people--in this case, terrorism suspects--as they travel throughout the world.

"This translates well into bioinformatics because instead of terror networks, you're trying to figure out an infection network," Eric Little, vice president and chief scientist at Modus Operandi, tells Co.Labs. "You're using heuristics to look at where it's popping up and how it's being passed. Who's connected to whom? How did the infected person travel? Who did they come in contact with?"

Having used this technology for military purposes--Little won't disclose how it was used by his defense clients--he sees a natural progression not just toward headline-grabbing epidemics like Ebola, but to diseases as common as cancer. "Ultimately, we're in the business of threats," says Little. "That's what we do."

In the case of a disease like Ebola, data used to track the spread of the disease can come from any number of sources, starting with tissue samples and medical reports taken in the field. Factor in information from medical labs, NGOs, public research, and private institutions and you have a pretty hefty mess of data that comes in any number of different formats, if it's even structured at all.

"Using semantic technologies and semantic reasoning, we're able to take a lot of the computation out of the scientists' heads and put it into the system itself," says Little. "We literally code in some of their knowledge against the data itself."

Platforms like BioIQ, the disease-tracking tool currently in development at Modus Operandi, aim to normalize the data, visualize it using charts and combine it all to create a digital model of a real-world problem, or what Little calls an ontology.

"Ontologists look at what things are," says Little. "How do you describe them? How do you model them? There are spatial parts to it. There are temporal parts to it. The spatial parts are dependent or independent. Things have attributes. There's all these complex relationships."

Once the outbreak is modeled and graphed using all these disparate sources of complex data, the software is able to use its own proprietary algorithms to query the data, create rules, and run computations to reveal relationships and developments that may not have been easily uncovered otherwise.
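
To make the "infection network" idea concrete, here is a minimal sketch of contact tracing as graph traversal. The names and contact data are invented for illustration; a real platform like BioIQ would assemble the graph from field reports, travel records, and lab data.

```python
from collections import deque

# Hypothetical contact graph: person -> people they came in contact with.
contacts = {
    "patient_zero": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": [],
    "D": ["E"],
    "E": [],
}

def exposure_ring(graph, index_case, max_hops):
    """Breadth-first search: everyone within max_hops contacts of the index case."""
    seen = {index_case: 0}
    queue = deque([index_case])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue  # don't expand beyond the requested ring
        for contact in graph.get(person, []):
            if contact not in seen:
                seen[contact] = seen[person] + 1
                queue.append(contact)
    return {p: hops for p, hops in seen.items() if p != index_case}

print(exposure_ring(contacts, "patient_zero", 2))
# -> {'A': 1, 'B': 1, 'C': 2, 'D': 2}
```

Answering Little's questions ("Who's connected to whom? Who did they come in contact with?") then reduces to queries like this one over a much larger, messier graph.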

Distribution of cases by affected countries, from WHO Ebola Response Situation Report 3

This approach also has the advantage of leveling the information playing field. A medical epidemic involves different scientists and researchers all well-versed in their own fields and jargon, but details can get lost in translation between disciplines. These silos of expertise have a way of hampering collaboration, which is annoying in any research environment--but potentially deadly in this one.

"If you're a virologist and somebody is running a genomic sequence, as a virologist you probably can't deal with or read the sequence data," says Little. "You're not a genomics expert. The genomics expert is not an expert in virology. None of them are probably actually physicians that are treating the patients."

The hope is that by merging all the data in one place, analyzing it, and turning it into visually digestible graphs, BioIQ can make the data accessible to everyone who needs to work with it, regardless of their background.

One of the most important things machines look for in a case like the Ebola outbreak is how the disease spreads geographically. Perhaps the most eye-opening illustration of how location-powered data science can be used in a scenario like this is HealthMap, whose algorithm detected the current outbreak before it was publicly announced. While BioIQ isn't as far along on the health mapping front as HealthMap (nor does it rely on social media data like HealthMap does), geographic intelligence is an integral part of the platform.

It sounds promising, but Modus Operandi is racing against a deadly clock. Scientists have created computer models showing it's going to take at least 12 to 18 months to get the Ebola epidemic under control. BioIQ, meanwhile, is still about 12 months from being field-tested. Even that estimate "depends on the customer and the amount of testing that has to occur for the system to be deployed in real live use cases," Little says.

Inside Baidu's Plan To Beat Google By Taking Search Out Of The Text Era

Text-based search has been the input of choice for web search engines for the past 24 years. That's soon going to change.

Baidu, China's biggest search engine, recently hired former Google Brain mastermind Andrew Ng to head up a massive deep learning project. Focused on building an infrastructure for solving problems like image recognition and speech processing, Baidu's work signals a paradigm shift in the way users retrieve information online.

Ng was announced as Baidu's new head of research back in May, working out of the company's Silicon Valley offices. One of his first big projects with Baidu is creating a vast deep learning computer cluster with around 100 billion digitally simulated neural connections. By harnessing the power of deep learning, Ng hopes to revolutionize the way we carry out search functions.

"With the Google Brain project we made the decision to build deep learning processes on top of Google's existing infrastructure," he says. "What we're doing at Baidu is seizing the opportunity to build the next generation of deep learning infrastructure. This time we're building everything from the ground up using a 2014-base GPU infrastructure."

Baidu has given Ng room to work on some of the biggest deep learning problems around. "From the engineers through to the executives, I think everyone at Baidu really 'gets' this field," he says. "Deep learning is a very capital-intensive area, and it's rare to find a company with both the necessary resources and a company structure where things can get done without having to pass through too many channels and committee meetings. That's essential for an immature technology like this."

Andrew Ng, one of Fast Company's Most Creative People in Business

The primary catalyst for a step-change in how search works today is the rise of smartphones and tablets, which are taking away more and more market share from traditional PCs. This is particularly evident in countries like Baidu's birthplace China, where many users are connecting to the Internet for the first time--primarily by way of mobile devices. Of the 632 million Internet users in China as of June this year, 83% accessed the web with a mobile phone, according to figures from China Internet Network Information Center.

Most of these users haven't organically learned how to use text-based search as it's evolved from Ask Jeeves to DuckDuckGo over the past several years. That presents an opportunity to re-think basic assumptions about search, and it extends beyond developing markets. "Text input is certainly useful, but images and speech are a much more natural way for humans to express their queries," Ng says. "Infants learn to see and speak well before they learn to type. The same is true of human evolution--we've had spoken language for a long time, compared to written language, which is a relatively recent development."

In many cases, text-based search is not ideal for finding information. For instance, if you're out shopping and spot a handbag you might like, it is far better to take a picture than to try and describe it in words. The same is often true if you see a flower or animal species that you would like to identify.

Fortunately, more and more of our devices now have high-quality cameras built in--from smartphones with front- and back-facing cameras to wearables like Google Glass or the recently announced Baidu Eye.

At the same time, deep learning tools are becoming more adept at intelligently recognizing and decoding visual information. "Previously we thought about modalities like language and images having different, separate representations," says Edward Grefenstette, Fulford junior research fellow at Somerville College, and an AI Researcher in the Department of Computer Science at the University of Oxford. "With deep learning there has been a movement toward what is called distributed representations. This allows us to do things like align the meaning of two different languages, or language and image, in the same representational space."

That means if there is a new image that has never been seen before, deep learning breakthroughs make it possible to generate text describing what it is--based on an "understanding" of what is being shown. (Check out an impressive demo by the University of Toronto here.)
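
As a toy illustration of a shared representational space, the sketch below embeds an image and several captions as small vectors (all invented for this example) and picks the caption closest to the image by cosine similarity. Real systems learn embeddings with hundreds of dimensions from millions of training examples.

```python
import math

# Toy 3-d "shared space" vectors -- invented for illustration.
image_vec = [0.9, 0.1, 0.2]          # embedding of a photo of a dog
captions = {
    "a dog playing fetch": [0.8, 0.2, 0.1],
    "a bowl of noodles":   [0.1, 0.9, 0.3],
    "a city skyline":      [0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

best = max(captions, key=lambda c: cosine(image_vec, captions[c]))
print(best)  # -> a dog playing fetch
```

Because image and text land in the same space, the same nearest-neighbor query works in either direction: caption-to-image retrieval is just as simple.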

The results of this research are already starting to become visible. Earlier this year Facebook created DeepFace, a facial recognition system almost as accurate as the human brain. Google has also made significant advancements in the field of deep learning, even after the departure of Andrew Ng. Executed correctly, Baidu's work has the potential to be a key part of one of the biggest AI breakthroughs ever.

It's not just image recognition, either. "Deep learning has pretty much taken over speech recognition," Ng says. At Baidu, error rates for speech recognition are down by about 25% as a result of deep learning research.

At the moment, around 10% of Baidu search queries are done by voice, with a much smaller percentage carried out using images. If progress continues at its current rate, however, Ng forecasts that "in five years time at least 50% of all searches are going to be either through images or speech."

"Replacing text search by voice search is clearly to happen more and more, as speech recognition improves," says Yoshua Bengio, a professor at the Department of Computer Science and Operations Research at the University of Montreal, home to one of the world's largest concentrations of deep learning knowledge.

Andrew Ng is under no illusions about the challenge his team faces, though. Deep learning is still a new field--and despite its massive potential it can be the victim of unnecessary and unhelpful hype.

"I believe that we have not yet exploited the power of deep representation learning--and especially of the unsupervised type--and that the impact in applications could be very important a few years down the road," says Bengio. "Basic research is needed for this to happen, though. Some of [this] might happen in industrial labs, as leading researchers there--including Andrew Ng, Geoff Hinton, and Yann LeCun--basically agree that this is an important opportunity for major future progress."


What Are Smart Contracts? Cryptocurrency's Killer App

What if you could cut your mortgage rate, make it easier to update your will, and ensure that your buddy was never able to weasel out of paying up on a bet? That and much more is the promise of smart contracts, a technology that is getting closer and closer to reality thanks to cryptocurrency.

Smart contracts are computer programs that can automatically execute the terms of a contract. Someday, these programs may replace lawyers and banks for handling certain common financial transactions.

And the potential for smart contracts goes way beyond simple transfers of funds. The door of a car or a house could be unlocked by connecting smart contracts to the Internet of everything. But as always with this cutting edge of financial technology, major questions abound: How will this all align with our current legal system? And, of course, will anyone actually use these things anyway?

What Is A Smart Contract?

The idea of smart contracts goes way back to 1994, nearly the dawn of the World Wide Web itself. That's when Nick Szabo, a cryptographer widely credited with laying the groundwork for bitcoin, first coined the term "smart contract." At their core, these automated contracts work like the if-then statements of any other computer program. They just happen to be doing it in a way that interacts with real-world assets. When a pre-programmed condition is triggered, the smart contract executes the corresponding contractual clause.

Szabo's original theories about how these contracts could work remained unrealized because there was no digitally native financial system that could support programmable transactions. (It defeats the purpose of smart contracts if a bank still has to manually authorize the release and transfer of money.) "One big hurdle to smart contracts is that computer programs can't really trigger payments right now," says Phil Rapoport, Ripple Labs' director of markets and trading.

The advent and increasingly widespread adoption of bitcoin is changing that, and as a result Szabo's idea has seen a revival. Smart contract technology is now being built on top of bitcoin and other virtual currencies--what some have termed "Bitcoin 2.0" platforms. Because bitcoin is itself a computer program, smart contracts can speak to it just like they would any other piece of code. The puzzle pieces are falling into place. A computer program can now trigger payments.

There are currently two major open source projects working on smart contracts, both of which have taken big leaps forward this year. One is called Codius and the other is Ethereum. Codius was developed by Ripple Labs, which also created its own digital currency called Ripple. Codius aims to be interoperable between a variety of cryptocurrencies, such as Ripple and bitcoin, although it is managed by Ripple Labs, a private company.

"Codius can interact with other ledgers and web services. It can work on bitcoin and it can work on any other system," says Stefan Thomas, Ripple's CTO.

In contrast, Ethereum is an entirely new currency with smart contracts baked into its payment system. Originally developed by 20-year-old programmer Vitalik Buterin, it is intended to replace other "coins" like bitcoin, and appears to be more of a community project.

Cryptocurrencies like bitcoin are poised to help smart contracts become reality. But the effect may also be reciprocal. Smart contracts can illustrate a unique benefit of virtual currencies that some advocates think could entice more users.

"Smart contracts are really the killer app of the cryptocurrency world," says Chris Ellis, host of a show about cryptocurrencies on the World Crypto Network.

Automating Simple Transactions

Let's take a simple example, like a Super Bowl bet. Say you want to bet $500--or roughly one bitcoin--that the Patriots will win, while your friend is betting the same amount that the Packers will take the title. Step one is for you and your friend to place your bitcoin in a neutral account controlled by the smart contract. When the game is over and the smart contract is able to verify via ESPN, Reuters, or elsewhere that the Patriots beat the Packers, the smart contract would automatically deposit your bet and your winnings from your friend back into your account.

Because smart contracts are computer programs, it would be trivial to add more complex betting elements like odds and score differentials into the mix. While there are services out there today that might handle this sort of transaction, they all charge a fee. The key difference with smart contracts is that they form a decentralized system accessible to anyone, one that doesn't require any intermediary party.
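
The escrow logic such a contract would encode can be sketched in a few lines of ordinary code. This is a hypothetical illustration of the bet described above: a real smart contract would run on a blockchain and check the game's outcome through an external data feed (an "oracle"), whereas here the winning team is simply passed in.

```python
class BetContract:
    """Toy escrow: holds two equal stakes, pays the full pot to the winner."""

    def __init__(self, stake):
        self.stake = stake
        self.bets = {}     # bettor -> team they backed
        self.pot = 0       # the neutral account holding both stakes

    def place_bet(self, bettor, team, amount):
        if amount != self.stake:
            raise ValueError("stake must match the agreed amount")
        self.bets[bettor] = team
        self.pot += amount  # funds move into escrow

    def settle(self, winning_team):
        """Release the whole pot, split among whoever backed the winner."""
        winners = [b for b, t in self.bets.items() if t == winning_team]
        payout, self.pot = self.pot, 0
        return {w: payout // len(winners) for w in winners}

contract = BetContract(stake=1)            # one bitcoin each, say
contract.place_bet("you", "Patriots", 1)
contract.place_bet("friend", "Packers", 1)
payout = contract.settle("Patriots")
print(payout)  # -> {'you': 2}
```

Odds or score differentials would just be extra arithmetic inside `settle`, which is exactly why adding them is trivial.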

A more everyday example would be online shopping. "If you order something online you might not want to pay a merchant immediately until they fulfill their end of the bargain," says Rapoport. "So you could easily have a contract that looks for FedEx tracking data saying that the package you ordered has been delivered to your address before releasing payment to the sender."

Say Goodbye To Lawyers And Banks?

If you think about a lot of routine financial transactions, what lawyers and banks do boils down to repetitively processing mundane tasks. And yet we still have to shell out huge fees for lawyers to go through wills or for banks to process our mortgage payments.

Smart contracts could automate and demystify these processes, making it so that ordinary people can save time and money.

Although you got your mortgage through a bank, that bank won't generally hold onto it for the entire 30-year loan; it will be sold to an investor. But you keep making payments to the bank, not the investor that owns your mortgage. The bank just becomes a processor for your monthly payments, sending a chunk to the investor, a slice to taxes, and a bit for homeowner's insurance.

"That's just a real simple operational task, but that bank will often take a quarter to a half percent per year to service that mortgage," says Rapoport. "They're just doing an operational headache of receiving payments and redirecting them. And they're charging people for that. But it's something that a smart contract could theoretically administer very easily."

If mortgage payments were handled by smart contracts, mortgage processing fees could be eliminated and that savings passed on to consumers. The result would be a lowered cost of home ownership.
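
The routing a servicing contract would automate can be sketched like this. The percentage splits are invented for illustration; a real contract would encode the actual loan terms.

```python
# Hypothetical split of each monthly payment, in basis points (1/100 of a percent).
# Integer basis points avoid floating-point rounding errors with money.
SPLITS_BPS = {"investor": 9000, "property_tax": 700, "insurance": 300}

def route_payment(amount_cents):
    """Split one mortgage payment among investor, taxes, and insurance."""
    routed = {name: amount_cents * bps // 10000 for name, bps in SPLITS_BPS.items()}
    # Send any rounding remainder to the investor so no cent is lost.
    routed["investor"] += amount_cents - sum(routed.values())
    return routed

payment = route_payment(150000)  # a $1,500.00 payment, in cents
print(payment)  # -> {'investor': 135000, 'property_tax': 10500, 'insurance': 4500}
```

The point of the example is how little logic there is: the quarter to half percent banks charge for servicing pays for bookkeeping a program can do for free.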

Although smart contracts are still in their nascent stage, the potential is clear. If a simple enough user interface were developed it could remove a host of legal headaches, like updating your will. Imagine if allocating your assets after your death was as simple as moving an adjustable slider that determines who gets how much. Just like with the bet or FedEx example, once the smart contract can verify the triggering condition--in this case, your death--the contract goes into effect and your assets are divvied up.

With all this, it may sound like we won't need lawyers anymore. But enthusiasts say that smart contracts should be seen as an evolution of the legal system, not its erasure.

"We don't think that this will replace the legal system as much as provide an intermediate layer between transacting and going to court," says Thomas.

Nonetheless, the role of lawyers might look very different in the future. Rather than having lawyers adjudicate individual contracts, the role of lawyers might shift to producing smart contract templates on a competitive market. Contract selling points would be their quality, how customizable they are, and their ease of use. It sounds a bit like the marketplace for WordPress themes.

"I imagine a lot of people will create contracts that do different things," says Rapoport. "And they can essentially sell them for others to use. So if you make, for example, a really good equity agreement that has a bunch of different functionality a company can charge for access to their contract."

Smart Property And The Internet Of Things

It's easy to think about a smart contract managing a will, up to a point. It all makes sense if you can imagine yourself keeping all of your assets in bitcoin. But what if you live in the real world and have physical possessions like, you know, most of us? The answer is something called smart property.

"This starts to get more sci-fi when we talk about smart property," says Ellis.

The so-called "Internet of Things" is constantly growing, with more and more interconnected devices out there every day. Some forward-thinking developers are already working on ways to combine the Internet of Things with bitcoin infrastructure so that something like a bitcoin can actually represent a physical object. That token is what these developers call smart property.

But more important than representing some object, these new smart property tokens would actually grant ownership and control of a networked object, whether that be a computer, a car, or even a house.

How does this all come together?

Ellis gives the example of renting out his house. "Let's say all the locks are Internet-enabled and they've all got network connections. When you make a bitcoin transaction for the rent, the smart contract you and I agreed to automatically unlocks the house for you. You just go in using keys stored on your smartphone."

A smart contract would also make it trivial to set up dates when those digital keys would automatically expire. It sounds a bit like Airbnb without the need for Airbnb.
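
A toy sketch of that expiring-key logic, with the on-chain payment and the networked lock stubbed out as plain function calls (all names here are hypothetical):

```python
import time

class RentalLock:
    """Toy smart lock: a paid booking grants a key that expires on its own."""

    def __init__(self):
        self.keys = {}  # renter -> expiry timestamp (seconds since epoch)

    def record_payment(self, renter, nights, now=None):
        """In a real system this would fire when the rent transaction confirms."""
        now = time.time() if now is None else now
        self.keys[renter] = now + nights * 86400  # 86,400 seconds per day

    def can_unlock(self, renter, now=None):
        """The door checks this on every unlock attempt."""
        now = time.time() if now is None else now
        return self.keys.get(renter, 0) > now

lock = RentalLock()
lock.record_payment("guest", nights=2, now=0)
print(lock.can_unlock("guest", now=86400))      # -> True (one day in)
print(lock.can_unlock("guest", now=3 * 86400))  # -> False (key expired)
```

No one revokes the key; it simply stops working when the agreed period ends, which is what makes the intermediary unnecessary.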

And if you think about it, that's the fundamental transformation smart contracts are after. A service like Airbnb is desirable because it obviates the need for the host and the guest to trust each other--they both only need to trust Airbnb. If the guest doesn't pay up, or the host doesn't leave the keys, either of them can take it up with Airbnb.

Doing the same sublet with a smart contract would supplant a business model like Airbnb's. The homeowner and renter still don't need to trust each other--they just need to trust the smart contract. Smart contracts would decentralize the model of who needs to be trusted. And in doing so, they would cut out the hefty fees charged by brokering services like Airbnb.

But smart contracts don't have to just disrupt existing business models. They can also complement them. Way back in his '94 essay, Nick Szabo envisioned the idea of smart property, writing that "smart property might be created by embedding smart contracts in physical objects." His example of choice was a car loan: if you miss a car payment, the smart contract could automatically revoke your digital keys to operate the car. No doubt car dealerships would find this appealing.

Justice For Poor People?

Admittedly, at some point it does start to sound like the makings of a dystopian sci-fi film. If you can't make a payment all of a sudden your car could be digitally and remotely repossessed, all without any human interaction.

But in theory, the upside is that financial institutions should be more willing to take risks on people who might not otherwise get loans. Because, worst case scenario, if someone can't pay up, it's trivial for the bank to take back the asset in question.

In addition to expanding opportunities to get credit, smart contracts also have the potential to open up access to the legal system for disadvantaged people who might not otherwise be able to reap its benefits. Thomas believes that smart contracts "will make the legal system available to people who might not be able to afford it on their own."

Although the law in theory treats everyone equally, you more often than not need money to take someone to court over a breach of contract.

"At present justice really only works if you can afford a lawyer to enforce that agreement. So once smart contracts have the ability to enforce agreements on their own it will be game-changing," says Ellis.

Of course it may not play out that cleanly in reality. While this all sounds good and noble in theory, it's impossible to predict how a smart contract would hold up in court if it were ever challenged. Dethroning lawyers as the high priests of arbitrating contracts is certainly appealing. But do we run the risk of just replacing literacy in legalese with literacy in code?

Rapoport acknowledges that there may be drawbacks. "Everyone reads English, so in some ways it's easier to read a traditional contract. But this is still very bleeding-edge technology, so who knows what kinds of user-facing improvements will be made eventually?"

Despite unforeseen pitfalls, the promise of smart contracts is clear. Right now we're waiting to see if either Ethereum or Ripple's Codius will be able to become usable and really take off.

"Right now there are a lot of clever people working on this who are high on ideas because they can see the potential," says Ellis. "What we don't know yet is who is going to win this race--Ripple or Ethereum. It's a bit like VHS vs. Betamax."

Starting Today, Your iPhone Can Ditch Google For DuckDuckGo's Private Search

Even if you're not springing for an enormous iPhone 6 Plus, your device is about to get a refresh in the form of iOS 8. While new features like widgets and third-party keyboard support have gotten most of the attention, iOS 8 has a big perk waiting for privacy-conscious users as well.

The mobile version of Safari will now let you change the default search engine to DuckDuckGo, the privacy-obsessed Google alternative that has seen a sharp uptick in activity since Edward Snowden became a household name. By default, DuckDuckGo does not track its users' search activity or even log their IP addresses.

"It's great to see Apple championing privacy by adding our anonymous search option to protect Safari's users," says DuckDuckGo founder Gabriel Weinberg.

The integration is pretty simple. Once iOS 8 is installed, Safari's settings will include DuckDuckGo alongside Google, Bing, and Yahoo as an option for the browser's default search engine. Select DuckDuckGo and from that point forward, any search conducted from within Safari will show results from DuckDuckGo rather than one of its giant competitors.

Of course, DuckDuckGo already has its own iOS app, which is free to download and use. But what's notable about this integration is the sudden exposure it offers the tiny suburban Philadelphia-based startup and its underdog, stick-it-to-Google service. Simply having its name inside the settings of a platform used by hundreds of millions of people will ensure a rise in search queries. For a startup this small, the payoff could potentially be huge.

That isn't to say that DuckDuckGo hasn't already been growing. Its search traffic (which the company measures in queries per day) was already steadily on the rise before last June, when the National Security Agency's domestic surveillance programs were first unveiled by Edward Snowden and The Guardian. As details came to light about the NSA's spying--in which American tech companies were pressured to be complicit--DuckDuckGo's traffic began to skyrocket. Aside from a few minor fluctuations, the site's activity hasn't stopped climbing since.

DuckDuckGo's overall market share remains minuscule compared to that of Google or even Yahoo. Still, its rapid rise in popularity is a noteworthy, quantifiable symbol of growing concerns over online privacy, which have yet to be assuaged even 15 months after the NSA spying story broke.

"Our primary goal remains unchanged: To deliver a search engine with smarter answers and real privacy to as many people as possible," says Weinberg. "Being part of Safari is a huge step in achieving this goal."

What's Homomorphic Encryption And Why Did It Just Win A MacArthur Genius Grant?

Craig Gentry, a cryptographer working at IBM's Thomas Watson Research Center in the suburbs outside New York City, recently received a phone call that changed his life. His passion, an experimental and mainly theoretical type of encryption called homomorphic encryption, just won a MacArthur "Genius Grant."

The complicated encryption method lets users run programs without actually decrypting them. Paul Ducklin, a security researcher working for Sophos, laid out a neat summary of how this works:


Imagine, however, if I could simply take your encrypted search terms, leave them encrypted, search for them directly in the still-encrypted database, and get the same results.
If I can perform calculations directly on your encrypted data, yet get the same results that you get from the unencrypted data, we both win enormously from a security and privacy point of view.
You don't need to give me any decryption keys at all, so you no longer have to trust me not to lose, steal or sell your data. (You still have to trust me to tell you the truth about any results I work out for you, but that is a completely different issue.)
And I no longer need your decryption keys, so I can't lose or abuse your data even if I wanted to.

For security-conscious cloud and SaaS providers, this is a very big deal. Gentry has been working on homomorphic encryption for years, and the first big steps to commercialization came out last year when IBM released an open source software package for developers called HElib. The HE stands for homomorphic encryption.

John Launchbury, a DARPA program manager, told Co.Labs that "Originally cryptography was all about keeping communications private. Then it became standard to use cryptography for securing stored data, in case someone steals your computer. Now with the prevalence of cloud computing, it is becoming clear that we also need to be serious about data confidentiality even while computing with it--in case someone is able to observe the computation as it proceeds."

"Homomorphic encryption," he added, "is one way to enable this: it is a form of encryption that allows computations to be performed on data without having to decrypt the data. You could store information on a cloud server, have the cloud provider perform some tasks on the data, without the cloud provider ever learning anything about your data. This could have profound implications for improving our privacy. Unfortunately, the performance challenges are so serious that it cannot yet be used in practice."

Writing back in 2009, security expert Bruce Schneier explained that homomorphic encryption is important because it could potentially make security much easier for distributed software systems:


Any computation can be expressed as a Boolean circuit: a series of additions and multiplications. Your computer consists of a zillion Boolean circuits, and you can run programs to do anything on your computer. This algorithm means you can perform arbitrary computations on homomorphically encrypted data. More concretely: if you encrypt data in a fully homomorphic cryptosystem, you can ship that encrypted data to an untrusted person and that person can perform arbitrary computations on that data without being able to decrypt the data itself. Imagine what that would mean for cloud computing, or any outsourcing infrastructure: you no longer have to trust the outsourcer with the data.

Although Schneier went on to be critical about practical applications for homomorphic encryption (which, to be fair, was written years ago), IBM has been taking out patents on the method that hint at eventual commercialization.
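
A small worked example shows the core idea. Textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. (Gentry's contribution was a fully homomorphic scheme supporting both addition and multiplication; the toy key below is far too small for real use.)

```python
# Classic textbook RSA parameters -- illustrative only, trivially breakable.
p, q = 61, 53
n = p * q            # modulus: 3233
e, d = 17, 2753      # public/private exponents (e*d = 1 mod phi(n))

def enc(m):
    return pow(m, e, n)   # encrypt: m^e mod n

def dec(c):
    return pow(c, d, n)   # decrypt: c^d mod n

a, b = 12, 7
# Multiply the CIPHERTEXTS -- the untrusted party never sees a or b...
product_of_ciphertexts = (enc(a) * enc(b)) % n
# ...yet decrypting the result yields the product of the plaintexts.
print(dec(product_of_ciphertexts))  # -> 84, i.e. a * b
```

This works because (a^e * b^e) mod n = (a*b)^e mod n. Supporting addition as well, on arbitrary circuits, is the hard problem Gentry's scheme solves.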

Gentry didn't invent homomorphic encryption, but his research is going a long way to making it usable. Over the next five years, Gentry will receive a no-strings-attached grant of $625,000 from the MacArthur Foundation to follow his passions. In a few years, if his work makes its way to the marketplace, it might solve a lot of our current problems with privacy protection and data security.

Today in Tabs: #TabsSeason2, More Parts Per Million

Welcome back, I hope everyone had a good summer, and by "good" I mean I hope you didn't get tear-gassed or beheaded. It's 2014 and the bar is that low, so let's get started with


Today in Hot Garb: Teenage sartorial regret factory Urban Outfitters was recognized for its commitment to hot garb this week when it tried selling an apparently bloodstained and bullet-holed Kent State sweatshirt. These dudes were probably into it, but UO pulled the incredibly ill-advised item and "apologized" very quickly. This "mistake" follows literally dozens of similar "mistakes" because "troll the internet for outrage" and "resell junk we bought at flea markets to idiots" form the two pillars of Urban Outfitters' business model. In other hot garb news: this guy figured out that wearing really ugly pants is an acceptable substitute for having a personality in tech circles, and got a job at Google in a sort of double KO to the nuts of Silicon Valley douchebaggery. And as always, Bed Bath N Beyond is a House of Lies.

Nick Denton seems to be walking back his original intention to make Kinja a platform specifically tuned for Matt Buchanan's vaping pleasure. Now he's saying it's for the independent blogger who needs to start "getting into the rhythm" and "taking the friction out" to achieve "intimacy at scale." I'm all a-tingle. The Observer's Rusty Foster correspondent Kara Bloomgarden-Smoke momentarily left her beat to report that Gawker is moving up to the Flatiron District where all the cool new media kids are. Once word was out, Gizmodo reported that the new Gawker Media offices would be organized around the hot new digital media concept of "metaverticals" and would consist of "a range of spaces that lead from personal to collective." Meanwhile Denton told Matthew Ingram that he realizes Kinja has been terrible so far but this time it'll be different.

Hey have you heard about GamerGate? No? Well please skip this whole graf and live the rest of your life in bliss. For those of you still reading, Cracked tells you all you need to know in a single paragraph's worth of editor's note, and then goes on to give you Zoe Quinn's "5 Things I Learned as the Internet's Most Hated Person" as a bonus, which is interesting even if you're not a gamer.

Today in the Worst People on Earth: Proud Ray Rice fans. Members of the Richard Dawkins cult. Brands. Brands. Catfish host Nev Schulman. Bad memoirist Nev Schulman. White dudes. Palins. Fareed Zakaria, who literally can't live without plagiahol. Fox News, because every night guys, hosts are zeroing in on #benghazi.

Today in Wat: Molly Ringwald has a new advice column in The Guardian? Will Lena Dunham ever stop confessing? Can she really have killed all those people? 9/11 vs. losing your virginity? Is The Awl subtweeting Grendan here? If you think about it, death is just another way to start a new life as a corpse. Government retiree has great sex.

Apple announced its smart watch, and Nick Sweeney wrote the only thing worth reading about it. Personally I'm just gonna get one of these iPad face rigs and jack in to the 'net for good.


Here in #TabsSeason2 we have a lowly intern who I will grudgingly allow to contribute a tab to each issue, so here is the first ever:

Intern's Tab, by Bijan: When you wish upon a tab, nothing good happens; or, nothing good usually happens. Or I don't know what I'm talking about because I'm the intern, or there's not really a difference and nothing matters anyway. In today's travels across the wide, hot-garbage-smelling expanse of the 'net, I found salvation in the form of cats and the killer duo (#TabsSeason2) Killer Mike & El-P's project Run The Jewels. RTJ's second album (Run The Jewels 2) is due out on October 28; someone (and, as of this writing, 324 of their best friends) is kickstarting a cat version of the album. Like, re-recording the entire thing with meows. The internet is dead. Long live the internet! Dang.

Not bad Bij, but maybe tighten it up a little next time, ok Mr. Dickens?

Today's Song: Rich White Ladies, "Wimbledon"

Today's Questionable Cover: Miley Cyrus, "Babe I'm Gonna Leave You"

~I can hear tabs callin' me back home...~

Today in Tabs is now brought to you by Fast Company Laboratories, proudly formulating fast companies since 1886, and may be read on the world wide web at that place. You may also subscribe by email if you want to. It wouldn't hurt to do both. It wouldn't hurt me, I mean.

The World's Happiest iOS 8 Keyboard Predicts The Perfect Emoji For You To Use

Apple's new mobile operating system, iOS 8, is available for download today. One of the best features is the inclusion of third-party keyboards to replace Apple's default.

David McKinney, the developer of the Product Hunt iOS app, is one of the many offering up alternatives. What makes his special? It's a keyboard that auto-suggests emoji as you type.

Instead of hunting through the vast library of available images to put in your text, Emoji Type pulls in a list of suggestions for different words. If you write "dog," for example, the top row above the keyboard will show you all the different emoji pertaining to dogs.

"I had to create a custom dictionary of several thousand words that I then mapped to all the different emojis," says McKinney. "At the moment it uses single words, but I have full phrase detection and emoji combos coming soon."
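McKinney's custom dictionary and matching code aren't public, but the core idea he describes, a word-to-emoji lookup consulted on the last word typed, can be sketched in a few lines. The words and emoji below are illustrative stand-ins, not his actual data:

```python
# Minimal sketch of word-to-emoji suggestion, assuming a hand-built
# mapping like the custom dictionary McKinney describes.
EMOJI_MAP = {
    "dog": ["🐶", "🐕", "🐩"],
    "pizza": ["🍕"],
    "happy": ["😀", "😊"],
}

def suggest(text):
    """Return emoji suggestions for the last word typed."""
    if not text.strip():
        return []
    last_word = text.strip().split()[-1].lower()
    return EMOJI_MAP.get(last_word, [])

print(suggest("I love my dog"))  # ['🐶', '🐕', '🐩']
```

The full-phrase detection he mentions would extend the same idea, keying the map on multi-word sequences instead of single tokens.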

The predictive emoji keyboard was written completely in Swift, Apple's new programming language, which presented a few additional development challenges. But the biggest hurdle was working with the new keyboard extensions API.

"Most of the hardest stuff was working around the keyboard extensions API, which is still brand new," McKinney says. "Apple has done a great job with this, but as it is new, everything has been a matter of trial and error to figure out what's possible."

Emoji Type is in the review process and will be released within the next few days as Apple clears iOS 8 apps for release. Even though Android users have had this functionality through keyboards like SwiftKey for a while, this will be the first time it's available on iOS.

In addition to third-party keyboards, iOS 8's default keyboard has also gotten a functionality update with QuickType word suggestions. Whether or not you integrate emoji heavily into your texting flow, Apple's typing experience is about to get a whole lot better.

These Athletic Wearables Aim To Stop Injuries Before They Happen

With football-related concussions in the news--one-third of NFL players suffer from brain trauma--and a nation of runners eager to dodge injuries, a new breed of wearable fitness tools aims to not only track calories burned but also blows to the head and strain on the calves.

The challenge these devices' makers face, according to them and other experts, is providing reliable alerts to help keep athletes healthy without unnecessarily sounding the warning bells when they're playing safely.

"You can't just bring a person into a lab and slam stuff into their head and see what happens," says Benjamin Harvatine, the cofounder and CEO of Jolt, which is in the midst of a Kickstarter campaign to fund a clip-on head impact monitoring device.

The idea for the Jolt sensor came to Harvatine after he sustained a serious concussion himself in college wrestling practice but, attributing his dizziness to hunger or dehydration, continued to wrestle, sustaining more blows to the head throughout the session.

"When I went to stand up, I couldn't really stand up right," he says. The most severe symptoms persisted for about five months. "It was a situation where I felt like I would have really benefitted from something that would really quantify, was I dizzy because of head impact, or not?"

Jolt's sensor, which is designed to be mounted on a helmet, headband, or other athletic headgear, tracks the level of head impact athletes sustain and relays those measurements in real time to a companion smartphone app, so coaches or parents of younger athletes can see what's happening to players' heads.

The app also includes a concussion symptom checklist and cognitive assessment test coaches can give players to see if their thinking is clouded after impact. Jolt will encourage coaches to give the players the tests regularly, even when they haven't taken any serious blows to the head, to help establish a baseline and potentially detect any cumulative effects of smaller injuries.

"Every day after practice, or once a week, they can take this test in the app," Harvatine says. "We can start to watch this data, and see if we pick up on any trends medical folks haven't picked up on yet."
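The baseline-and-trend check Harvatine describes can be sketched simply: compare an athlete's latest score against their own history and flag a large drop. The z-score method and cutoff here are illustrative assumptions, not Jolt's actual criteria:

```python
from statistics import mean, stdev

def flag_decline(history, latest, z_cutoff=2.0):
    """Flag a cognitive-test score that falls well below the athlete's
    own baseline. `history` holds the athlete's prior scores; the
    threshold is illustrative, not Jolt's actual criteria."""
    if len(history) < 3:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest < baseline
    return (baseline - latest) / spread > z_cutoff
```

A score far below an athlete's own stable baseline would be flagged for follow-up, while normal session-to-session variation would not.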

Similarly, a helmet insert from Reebok, called the Checklight, uses green, yellow, and red lights to quickly signal when a player has taken a moderate, heavy, or no impact to the brain. When the light glows yellow or red, Reebok recommends the player get checked out, says Paul Litchfield, head of Reebok Advanced Concepts.

"Go through a screening process that is appropriate to whatever environment you're in," he advises athletes, explaining the device uses a formula based on linear and rotational forces applied to the head and other factors.
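Reebok's formula is proprietary, but a toy version of a green/yellow/red classifier combining linear and rotational forces might look like the following. The weighting and cutoffs are invented for illustration only:

```python
def impact_light(linear_g, rotational_rads2):
    """Map an impact to a Checklight-style green/yellow/red signal.
    Reebok's actual formula is proprietary; the combination weighting
    and thresholds below are purely illustrative."""
    score = linear_g + 0.01 * rotational_rads2  # hypothetical blend
    if score >= 90:
        return "red"     # heavy impact: get checked out
    if score >= 60:
        return "yellow"  # moderate impact: screening recommended
    return "green"       # no significant impact recorded
```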

And other devices can measure potentially harmful forces applied to the rest of the body. GestureLogic's LEO LegBand, which successfully raised more than $140,000 in an Indiegogo effort concluded last month, can warn runners and cyclists to reduce the level of impact on their legs, or to stop and take a drink when they become dehydrated. Since the leg band uploads user data to GestureLogic's cloud, the company says it should grow better at warning of injuries over time.

Shoe inserts from companies such as Boogio and Scribe Labs also promise to be able to measure and offer correcting advice on runners' gaits.

Boogio cofounder Jose Torres says the company's devices, currently available for preorder, should be able to establish a baseline of healthy movements for individual athletes and warn them when they step out of that safe zone.

"You can see this is too much weight, or you're tired, or your posture is incorrect," he says.

And Scribe Labs' runScribe inserts track technique, distance, and speed over time and even help runners figure out which shoes are healthiest for them, says CEO Tim Clark.

"I actually managed to find what shoes I should be in, and they were completely not the shoes that the guys in the running shoe stores were trying to put me in," he says.

Of course, all of these device makers emphasize their products aren't medical-grade equipment; they're not meant to diagnose particular injuries, or substitute for the advice of a doctor, trainer, or physical therapist. With head injuries, in particular, that's inevitable, since there are no hard and fast rules for diagnosing a concussion.

"The medical community does not have the exact definition of what thresholds would cause injury and what thresholds would not cause injury," says Reebok's Litchfield.

Studies have shown athletes avoiding apparent concussions after impacts hundreds of times the force of gravity, while others were concussed after milder impacts, says Thomas Talavage, a Purdue University engineering professor and medical imaging expert who's studied head injuries and techniques for detecting them.

"The truth is that there is no definable threshold beyond which you're certain or even necessarily likely to get a concussion," he says, arguing that sensors would get more accurate results monitoring the full range of impact sustained by an athlete over time.

"You almost certainly require some level of modeling of what have been the most recent exposures for a given athlete," he says.
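Talavage's point, that risk depends on an athlete's recent exposure history rather than any single hit, can be sketched as an exponentially weighted running total in which older impacts count for less. The decay factor is an arbitrary illustration:

```python
def cumulative_exposure(impacts_g, decay=0.9):
    """Exponentially weighted sum of head impacts (in g), oldest first.
    A toy version of the recency-aware modeling Talavage describes;
    the decay constant is illustrative."""
    exposure = 0.0
    for g in impacts_g:
        exposure = decay * exposure + g  # older hits fade, recent dominate
    return exposure
```

Two identical hits spaced far apart would contribute less combined exposure than the same two hits back to back, which is the behavior such modeling is after.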

That's a goal to which most of these device makers aspire, with many of them eager to collect user data to help scientists develop new and better models of what causes injury.

"For us, we see the data we're storing, capturing, and analyzing as something that's very valuable," says Jolt's Harvatine.

Jolt and other sensor makers also plan to update their devices and apps with code and thresholds based on ongoing medical research, he says.

"With any device that has any sort of connectivity like for ours, we can push over-the-air updates," he says. "All of these devices will be continually monitoring the latest medical research in the area."

Why Peter Thiel Thinks Social Entrepreneurship Is Broken

This is an interview with Peter Thiel, the outspoken entrepreneur: cofounder of PayPal and Palantir, early investor in Facebook, venture capitalist, and hedge fund manager. His book Zero To One was released this week.

What's wrong with social entrepreneurship?

Let me focus on the business tension. Great businesses do something that's very unique. And when something is seen as good by society, it has a very conventional feel. When you have the fourth online pet food company or the 10th thin-film solar company, those are often not great businesses, because there are too many people doing similar kinds of things. One space that borders on social entrepreneurship involves all the education-related startups. I find they're often hard to differentiate; they all have a story that what they're doing is really good, but they're often similar to one another.

So as an investor, rather than look for something people already love, how do you identify an innovator?

I prefer to focus on the mission of a company. And the mission has a story that is about more than making money, some transcendent purpose. But I distinguish mission from convention; if it involves an idea that's totally different than what I've seen before, that's what feels very powerful. The creative part of the process is to think really hard: What are the great new things that we can develop today in 2014?

If you ask, is SpaceX a form of social entrepreneurship? Elon would say that it is good for humanity to move onto another planet, that we should become a planet-traveling species and go to Mars. But that's an idiosyncratic view; if you took a poll, that wouldn't be high up on people's list of what they think is good for society.

People have all sorts of reasons to be critical of tech. They will say that it's either too big, has too much hubris, or it's too small; it's people throwing a sheep at one another on the Internet. There are a lot of critics who will not be happy no matter what. We should make technology more ambitious. I'd like to bring the definition of technology back to all these things, rather than what we have today, which is just IT.

Can't events in the world of bits radically change human behavior of people in real life?

All the IT has been extremely important for cultural causes. We have this instantaneous transmission of information; there's a degree of transparency that has changed things a lot. If you look at things like the Arab Spring, Twitter made transparent the corruption inside the governments. Once you know what's going on in the sausage making factory, it doesn't work anymore. There are places where just having knowledge changes things [and] a lot of social institutions that only work when people don't understand them. I suspect every form of injustice involves people not understanding something that's going on. When they come to understand, it forces a different response. It's one of the reasons that Facebook is so powerful, because it goes to people's real identities.

How do you see our concept of personal identity shifting with things like Oculus Rift?

I think it's generally still about general identity. I always think of Facebook versus MySpace. Facebook starts at Harvard; it's about putting your real self online. MySpace starts in Los Angeles, and it's all these actors who pretend to be someone other than who they are on the Internet. It's a simplification, but Facebook was about real identity, and the other ones were about fake identities. And somehow the real ones dominated. PayPal did the same thing on the level of money.

But when we're using our real identities online, don't we feel forced to moderate ourselves?

I agree that is definitely a concern. You don't want ideas to get shot down immediately. I would like people to work on more breakthrough technology companies, and then the question is, why aren't more people doing that? Is it that they just don't have ideas? Or is it just objectively hard to do it? My theory is that there are a lot of good ideas out there, but people sort of get discouraged from them, and there are all these social cues people pick up on that discourage them from pursuing ideas.

There's this very strange phenomenon in Silicon Valley where so many of the great entrepreneurs seem to be suffering from a mild form of Asperger's, or something like that, and I always want to flip that around and say: This is really a critique of our society. What sort of society is it when, if you don't have Asperger's, you're talked out of your best ideas by the people around you before they're even fully formed?

The sociological part of it is very strange. I remember at Stanford Law, the first day of orientation, people had all these cool ideas of what they were going to be doing. All these great causes people were going to work on. And within a year and a half, it was this super homogenized [group] working in large law firms. What actually happened? My rough cut is: Things like that always happen if you have people who have no strong convictions, who are very extroverted, who are looking to what other people are doing.

There's a way in which, yes, competition does always make you better at something. But it comes at a high price. You might lose sight of what's important. It's crazy in a way.

What's the psychological impact of doing something wildly different? Is it more conducive to happiness?

I think it's definitely better, even though in practice people often experience it the other way around, where it's like... this is really weird and uncomfortable! Nobody knows what I'm doing! Even though I think that's the right thing. We think the key to happiness is to win at these conventional things, even though in practice you often get much less out of them than you'd think.


Product Bootcamp Week Two: Less Smelly Code

The Internet has brought online classes to the masses, but completion rates for MOOCs and other online courses are abysmal. I've discovered why: After the honeymoon phase, coding difficulty ramps up. If I weren't in a classroom with dedicated instructors to peek over my shoulder and offer advice, I'd probably have dropped out shortly after the coding got tough. This is my second week in HFC Academy's coding and product bootcamp--and boy, am I earning these lessons.

There's something phenomenal about remembering a useful element from an old project, plucking out a single segment and dropping it into a new project. With a flourish of keys, I hold my breath for the page to reload, and it works. Of course it works--this time. Every quote and semicolon settled in right where the machine logic intended. It's a relief to have hunted through hundreds of lines of code just to see my dumb input boxes shave a few pixels for that elegant, rounded look.

"This is the turning point," says HappyFunCorp engineer and HFC Academy instructor Ricky Reusser of our progress. "Where we were worried about front-end before, now we're trying to make stuff that works." Now we begin to delve into Ruby on Rails--so why not start with the man who might know Rails best?

Michael Hartl wrote the best-selling Rails Tutorial. He stopped by HFC Academy as part of its guest speaker series while touring in anticipation of the release of Rails Tutorial's 3rd edition. Hartl holds a PhD in theoretical physics from Caltech and got into entrepreneurship through two rounds of the famed Y Combinator incubator. Getting into Rails happened along the way--especially when he saw that there was a dearth of Rails tutorials to induct novices into the powerful web framework. Hello, niche.

Part of Hartl's tutorial approach is based on his experience teaching advanced physics at Caltech. Coding isn't rocket science, but there's an overhead associated with learning anything. "There's no calculus involved in coding, but there are a lot of moving parts, and some people are overwhelmed if they don't have a coding background," says Hartl.

The teacher-cum-entrepreneur-cum-coder wanted to make his book the definitive Rails tutorial, so he released it for free. The risk worked: Everyone linked back to his tutorials, and the Rails Tutorial site won its SEO battles to rise to the top of the search rankings--which resulted in tangible sales and business growth. And Hartl did it all while learning Rails along the way.

"Professors do that all the time; they want to learn something, so they teach a course on it," Hartl says.

But sneakily planning a course to explore topics alongside students isn't available to everyone. Hartl says you just have to build stuff, and then hunt around StackOverflow forums for the right segment and massage it into your project. It's how a lot of programmers learn, by seeing how others play with code and then toying around. Monkey see, monkey figure out how the heck they made that animated drop-down menu.

Getting Cozy With Best Practices

You'll notice that, despite my two weeks of intensive training, this page is no fancier. One of the most important lessons I've learned follows the car maintenance rule: If you don't know exactly what it does, don't mess with it while the vehicle's in motion. Smarter folks than I have spent decades engineering solutions, giants whose shoulders I stand on to tinker my way into a wider world.

Over time, programmer culture has developed "best practices" to emphasize beneficial coding etiquette. Sure, the worship of cleaner code can reach an elitist zeal, but it's grounded in pragmatism. Keep your code clean (and as short as possible) not just for readability, but because fewer moving parts mean fewer parts that can fail. Programmers notice and shame patchwork fixes for curing the symptom (not the cause) instead of addressing larger, future problems with your codebase.

Saying something has "code smell" is a coder pejorative, e.g., "that page or code segment has code smell" because it's done inelegantly, like using negative margins instead of finding a CSS solution. Programmers label code with "smell" out of aesthetic discomfort with "hack" solutions: On the surface, the product looks as it should, but staring at the code superstructure unnerves programmers.
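A toy example of the distinction: both functions below produce the same output, but the first patches a formatting symptom with a band-aid check, while the second addresses the cause directly.

```python
# Smelly: patching the symptom at every call site.
def display_price_smelly(cents):
    price = "$" + str(cents / 100)
    if price.endswith(".0"):       # band-aid for float string formatting
        price += "0"
    return price

# Cleaner: address the cause with explicit formatting.
def display_price(cents):
    return f"${cents / 100:.2f}"

print(display_price_smelly(500))  # $5.00
print(display_price(500))         # $5.00
```

Both "work," but the smelly version only looks done: it is the hundred different-sized toothpicks, and the next odd input is the earthquake.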

"Sure, it's done. But are you really done?" says HappyFunCorp cofounder and HFC Academy instructor Will Schenk. Sure, your code works on this machine with that browser, but what happens when you look at it on an obscure Android device? Who knows how different hardware and software will change how your page operates. It's like using high fructose corn syrup instead of agave, or more appropriately, a hundred different-sized toothpicks instead of a framework of wood and brackets. The load borne by your code will hold for now, but who knows what earthquake will come tomorrow?

Best practices also include coding for screenreaders, programs used by the visually impaired to read page content aloud. There's a big emphasis on coding for screenreaders in the Codecademy and HTML Dog tutorials I've run across, so that must mean a lot of folks put in the extra effort to code for it, right?

"Not enough," says Schenk. "Even though it would just take them five seconds more to code."

Clients, Users, And You

"It's amazing the psychological impact a clean user interface has," says Aaron Brocken, HFC engineer and HFC Academy program director. "Nine times out of 10 when you show users a clunky UI with dynamic data and a clean UI with static data, users prefer the latter because the clunky UI 'looks' broken."

Users aren't savvy; they're unconscious aesthetes. They don't care how much time you've dumped into developing a product. They only care that it works. So clients asking you to build websites understandably desire perfection--but that can lead to a myopia that blinds them to the bigger picture.

It's easy for clients to get lost in, "Ooh, I really want this to be perfect." On a lark, Reusser built a simple bean counter program: Enter your hourly salary and it counts up--in other words, a visualization of how much time and money you're spending making sure a feature is perfect. How much value are you actually adding?
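Reusser's bean counter is simple enough to sketch: multiply everyone's hourly rate by the time elapsed. This is a hypothetical reconstruction, not his actual code:

```python
import time

def meeting_cost(hourly_rates, started_at):
    """Dollars burned so far by everyone in the room, given each
    person's hourly rate and the session's start time (epoch seconds).
    A toy reconstruction of Reusser's bean counter."""
    elapsed_hours = (time.time() - started_at) / 3600
    return sum(hourly_rates) * elapsed_hours

# Two people at $100/hour, one hour into polishing a feature:
# roughly $200 spent. How much value was actually added?
```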

Thus HFC finds itself asking clients over and over: What problems are you trying to solve? Are the things you're working on right now actually solving the problems? Also, do you know your competitors?

"You'd be amazed how many people don't know," Brocken says. "Then they say, 'Yeah, but mine's different.' How different? You keep having to answer those questions as you develop the product."

And that means tough questioning and tough love.

"People are working on something that they're hoping will provide them income, so we have to be honest with them," Brocken says. "People use Facebook as a benchmark--what it was at first and what it is now? They say, 'Well I gotta make sure my product can handle millions of users!' You don't have three users. Get those first, and then iterate."

Obviously, that leads to a lot of push-and-pull between you and the client--not just by guiding each iteration toward a more useful product, but by keeping the client on the same page of priorities. Spending your time and their money productively, even if that means disagreeing with the client about what to improve now.

"So we'll say to the client, 'We've been spending lots of time on this, we can move on to XYZ, but we really think sticking with the first thing is solving the problem,'" Brocken says. "There may be tension, but it's more 'exploratory and exciting' tension than 'Oh God We Need To Get This Done'."

Today in Tabs: Puff, Puff, Tabs

I think the entire internet is high today. First, libertarian waxwork Peter Thiel said that Twitter management smokes too much weed, which ok that's fair, but this is coming from the dude who thinks seasteading is a good idea when he's sober. A pro-marijuana group in Denver put up billboards that look like a regretful Maureen Dowd trying to write her way through her legendary hotel room freakout. Washington, D.C. opinion-whore Lanny Davis wrote a defense of NFL Commissioner Roger Goodell that, according to CNN, no one is even paying him for? Dude is definitely baked. It goes without saying that everyone involved in the Occupy Wall St. Twitter account lawsuit is high. If you're too high to even dress yourself, now you can wear The Suitsy, a onesie that looks like a suit. Waka Flocka Flame's job is basically to be high, so hiring a blunt roller for $50,000 is more like bizdev than anything else, but still. Even this cat is high. Everyone just needs to lie down and eat some cookies and chill out for like eight or nine hours.


I will never get this high again

The Mt. San Antonio College Journalism School is shutting down its print newspaper because "no one at the school ever seemed to read it or even care that it existed." Instead they're moving all their work onto Medium so that no one on the whole Internet can read it or care that it exists. If you've got what it takes to have your hard work ignored by billions of internet users, Matter is offering a $10,000 International Reporting Fellowship. Matter is like a special section of Medium where everything isn't total garb.

Semi-Tabbed Life: Semi-Charmed All Star is just the latest proof that Semi-Charmed Life is all songs.

Servicey: How to delete the blood sweat and tears of some Irish guys. How to poop. How to quit a program in Windows. How to make $1.3 million a year on Youtube (when you're 8). How to take a good selfie. How to name things. How to spend money. How to be a #teen. How to wear a sweater. How to make a man behave properly.

In The Times, Butts.

Elon Green wrote about the mystery of an infamous picture of Ronald Reagan for the Awl, and made the process of not really learning much just as interesting to read as any answer could have been.

Inspired by XOXO, occasional guest-tabber Tim Maly wrote about who gets to be a "maker" and it's genuinely troubling but worth a read.

Today's Intern Tab: What's the difference between a Random Act Of Journalism and a Drive-By Thinkpiece? Trick question, they're both gratuitous ploys for your clicks and, presumably, for your attention. Today's Intern Tab™ somehow manages to be worse than both: James Franco is publishing a new book next week, and TIME dot com has an EXCLUSIVE EXCERPT. Spoiler: It's really fucking bad.

But then it changed, once again, when I arrived
Because I waz the electricity that shocked dem
Into place, you see how that happened?
They was hot young things with skillz of sex
That I brought to the fore, and galvanized.

"Skillz of sex," ladies and gentlemen. Skillz of sex.

Ok, thanks Bijan. That was unbearably terrible. You're fired forever for making me read James Franco's alleged "words."

Today's Inspirational Loudgif: Crank this bad boy up and get some!!

Today's Song: Glass Animals, "Black Mambo" (this whole album is great)

~It's a very, very tab world... tab world~

Today in Tabs is in your email and on your Fast Company Labs

NVIDIA Takes On Apollo 11 Moon Landing Deniers--With Technology

In 2002, Buzz Aldrin, the second man to walk on the moon, punched Bart Sibrel in the face. Why? According to a Gallup poll from 1999, some 6% of Americans still believed then that the government faked the Apollo 11 moon landing in 1969--and Sibrel is one of the more vocal among them. If you search YouTube for moon-landing conspiracy videos today, it's apparent there are still people like him who believe Aldrin and Neil Armstrong never left planet Earth.

For all those Coast To Coast AM listeners who think the government staged the moon landing, graphics card maker NVIDIA wants to set the record straight. On Thursday, the Santa Clara, California, company launched two new graphics cards, the GeForce GTX 980 and GTX 970. To show off the cards' ability to render real-time dynamic lighting (a technology NVIDIA calls Voxel Global Illumination), the company used it to re-create a model of that historic landing and debunk three prominent conspiracy theories around it.

"We talked to a lot of experts in the field to re-create what happened on the moon that day," GeForce general manager Scott Herkelman told Fast Company. "We re-created perfectly what they made and how the reflection would look off the suits, duct tape, aluminum foil."

In the above video from NASA, Aldrin descends from Apollo 11 around the 20-second mark. Conspiracy theorists often point out three problems with the footage:

  1. Given the position of the sun behind Apollo 11, why can we see details that otherwise would be obscured by shadow?
  2. Why can't we see any stars in the background?
  3. What's the strange bright light seen between the ladder and the vehicle?

Debunking Theory No. 1

Fine silt on the moon's surface reflects light onto objects that would otherwise be obscured by shadow. Image: NVIDIA

The conspiracy theorists who question why the footage isn't obscured by darkness don't take into account how light from the sun interacts with the moon's surface. The fine lunar dust covering the moon has mirror-like properties, reflecting the sun and illuminating objects on the surface. "The sun is hitting the dust, and it's illuminating the backside of the Apollo module and astronaut," says Herkelman.
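NVIDIA's voxel global illumination computes this bounce lighting across an entire scene in real time, but the principle reduces to a single-bounce approximation: sunlight hits the dust, a fraction given by the surface's reflectivity (albedo) is re-radiated, and some of that reaches the shadowed side. The numbers below are illustrative, not NVIDIA's:

```python
def indirect_illumination(sun_intensity, ground_albedo, form_factor):
    """One-bounce approximation: sunlight strikes the lunar surface,
    the albedo fraction is re-radiated, and `form_factor` is the share
    of that bounced light reaching the shadowed face of an object.
    All values are illustrative."""
    bounced = sun_intensity * ground_albedo
    return bounced * form_factor

# A surface in full shadow still receives bounced light:
print(indirect_illumination(1000.0, 0.12, 0.3))
```

Even with a dim, dark surface, the bounced contribution is nonzero, which is why the back of the module and astronaut remain visible.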

Debunking Theory No. 2

We can see the Earth, but not the stars, because of the camera's aperture. Image: NVIDIA

According to experts, some 84,000 stars would've been visible from the moon that evening, so why can't we see any when looking at historic videos and photos? "This one is easier than the other two," says Herkelman. "When they went to take the picture, the camera they used had a closed aperture." Doing so allowed viewers back home to see the astronauts and spacecraft clearly. In a simulation, Herkelman opened up the aperture, which gave a clear view of the stars but produced a blown-out image of the moon and the objects on its surface.
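The aperture explanation comes down to dynamic range: any exposure short enough to keep the sunlit scene from blowing out maps starlight to zero. A toy exposure model with illustrative luminance values makes the point:

```python
def visible_in_frame(luminances, exposure, full_scale=255):
    """Toy exposure model: scale scene luminance by exposure and clip
    to an 8-bit sensor. The luminance ratio below is illustrative; the
    point is that an exposure keeping the sunlit suits unclipped
    rounds starlight down to zero."""
    return [min(full_scale, round(l * exposure)) for l in luminances]

sunlit_suit, star = 1_000_000.0, 0.001  # relative luminance, illustrative
print(visible_in_frame([sunlit_suit, star], exposure=0.0002))  # [200, 0]
```

Opening the aperture, as Herkelman did in simulation, is equivalent to raising the exposure: the stars appear, and the lunar surface clips to full scale.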

Debunking Theory No. 3

Is that brightness a studio light or Buzz Aldrin? Image: NVIDIA

Is that seemingly out-of-place light source a studio light that some amateur forgot to turn off? "We couldn't quite figure out what's going on, but then we remembered we needed to place Neil Armstrong. The second we put Neil Armstrong there, we figured out the light source," says Herkelman. Furthermore, the astronauts wore ultra-reflective space suits to keep them cool inside. When Herkelman changed the perspective of the model, he was able to confirm Armstrong's position.

The view from another angle. Image: NVIDIA

This new information is unlikely to change the minds of die-hard moon landing deniers, but you've got to hand it to NVIDIA for coming up with a pretty neat way to show off the power of its graphics cards.

Big Data's Next Challenge: Heart Failure

Nearly 6 million Americans suffer from congestive heart failure, a condition in which the heart doesn't pump enough blood and oxygen, causing serious problems in other parts of the body. More than half of the people who develop the condition die within five years of their diagnosis.

But because congestive heart failure (CHF) has certain characteristics that lend themselves to big data analysis, it has become a moneymaker for tech companies--which are also saving lives in the process.

Heart failure is usually caused by pre-existing conditions, and figuring out what those conditions are can save a lot of lives. But by the time patients are in the hospital, complications related to the disease can make it difficult to parse cause from effect--at least for humans. Smart medical devices that record thousands of data points per second, and digitized medical records that build network maps of all sorts of health care factors, are being deployed to unravel the mystery.

Building Smart Hospitals

Atlanta's Emory University Hospital is a busy, high-profile medical center whose intensive care unit sees considerable traffic. The hospital has partnered with IBM and Excel Medical Electronics on an ambitious project: smart medical equipment that records between 1,000 and 2,000 data points per second from each patient, multiplied across 100 patients.

Dr. Tim Buchman, the hospital's director of critical care, is a tech-savvy physician who works with vendors like these to test out potentially lifesaving technologies. Excel Medical's bedside monitors are plugged into an IBM analytics platform which parses the data as it comes in, and--hopefully--finds patterns which predict CHF, sepsis, or pneumonia before they happen.

In an interview with Co.Labs, Buchman compared the experimental analytics system to a GPS for care providers. Although he explained that no analytics system could ever replace "a well-trained critical care nurse," they can help medical professionals make better decisions in high-stress situations, and anticipate changes to the patient's health.

"If you speak to a critical care physician and ask how many decisions you make on a patient in a given day, it could easily be 30," Buchman said. "You multiply that by 20 patients in the ICU and you'll see that we make 600 decisions daily, all of which are based on situational awareness and a great grasp of information. Currently all of that is based on making sure you didn't miss the right data elements and remembering what happened five minutes ago. This is the problem that we're taking on. It is a big data problem, but even more important it's a data in motion problem."

One example of the prototype system's effectiveness can be seen in patients with a common heart disorder called atrial fibrillation. These patients often show no outward symptoms, but the disorder is closely associated with congestive heart failure. Clinicians at Emory have, in several cases, used a research-grade analytics system to view real-time digital visualizations of patients' heart rhythms and spot atrial fibrillation in its earliest stages.
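The kind of early-warning signal clinicians look for can be sketched in a few lines. The heuristic below is illustrative only, assuming that atrial fibrillation shows up as unusually high beat-to-beat (RR interval) variability; the threshold and method are invented for demonstration and are not Emory's or IBM's actual analytics.

```python
# Illustrative sketch: flag possible atrial fibrillation from the
# irregularity of RR intervals (milliseconds between heartbeats).
# The 0.10 threshold is an assumption for demonstration purposes.
from statistics import mean, stdev

def rr_irregularity(rr_intervals_ms):
    """Coefficient of variation of RR intervals (higher = more irregular)."""
    return stdev(rr_intervals_ms) / mean(rr_intervals_ms)

def flag_possible_afib(rr_intervals_ms, threshold=0.10):
    """Flag a window of beats whose variability exceeds the threshold."""
    return rr_irregularity(rr_intervals_ms) > threshold

# A steady rhythm vs. a chaotic one
steady = [800, 810, 805, 795, 800, 808]
irregular = [620, 980, 540, 1100, 700, 910]
print(flag_possible_afib(steady))     # False
print(flag_possible_afib(irregular))  # True
```

A production system would run such checks continuously over streaming monitor data and combine many signals, but the core idea--quantifying rhythm irregularity in real time--is the same.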

Learning More About Diseases

One of the biggest growth areas at the Advisory Board, a health care technology and consulting firm that works with most major American hospitals, is congestive heart failure and other conditions with a variety of causes, such as diabetes and pneumonia. Because insurers, health care providers, and others operate in a deeply fragmented ecosystem where different platforms are frequently incompatible, the Advisory Board can make considerable money helping organizations figure out the best analytics path.

These analytics tools are used for various purposes; one example the company cites is algorithms that rank patients by their likelihood of being readmitted.
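A risk-ranking algorithm of this sort is often a scored model over patient features. Here's a minimal sketch, assuming a logistic model with invented features and weights; nothing here reflects the Advisory Board's actual algorithm.

```python
# Hypothetical readmission-risk ranking. Feature names, weights, and
# the bias are invented for illustration; a real model would be
# trained on historical outcomes data.
import math

WEIGHTS = {"prior_admissions": 0.8, "age_over_65": 0.5, "lives_alone": 0.4}
BIAS = -2.0

def readmission_risk(patient):
    """Logistic score: weighted sum of features squashed to a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def rank_patients(patients):
    """Sort patients from highest to lowest predicted readmission risk."""
    return sorted(patients, key=readmission_risk, reverse=True)

patients = [
    {"id": "A", "prior_admissions": 3, "age_over_65": 1, "lives_alone": 1},
    {"id": "B", "prior_admissions": 0, "age_over_65": 0, "lives_alone": 0},
]
print([p["id"] for p in rank_patients(patients)])  # ['A', 'B']
```

Stacking patients this way lets care coordinators focus follow-up resources on the highest-risk discharges first.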

Earlier this year, Fast Company had the opportunity to speak with David Chao, the Advisory Board's CTO. Chao joined the company through its acquisition of Crimson, a health analytics firm, in 2008. A physics PhD-turned-data scientist, Chao describes his job as helping to "connect the dots and ask questions like what physicians are doing in terms of medication, lab tests, X-rays, or imaging that they shouldn't be doing, or what they should be doing that other physicians don't do."

A big part of that role is helping to build analytics platforms that can monitor congestive heart failure. Chao, using a hypothetical example, said that "given the details of each individual patient I work with--their socioeconomic data, where they live, access to transportation, all of this works into ensuring coverage for them. If you're discharged following congestive heart failure, how do you ensure they get the follow-up care they need? This requires care coordination, which is where big data and predictive analytics comes in. It takes into account what a patient did or didn't do in the past, the details of their zip code, and that can help predict what a patient needs for care."

The health care system today has so many different data silos, and messy or incomplete data is such an omnipresent worry, that the health care analytics solutions that Advisory Board and IBM are working on are still in their infancy. If deployed correctly, analytics could save lives and money, and place health resources where they're most needed. But that's not going to happen overnight.

Three Reasons Apple Pay Is Going To Work


A key feature of Apple's iPhone 6 is the NFC chip which enables a new service called Apple Pay. It's a service that many consumers have hoped for for years: a way to pay for most anything using your phone instead of a credit card, with the Touch ID feature handling online payments. A robust mobile payment solution could also make electronic payments more secure. It's an interactive payment system that can take into account your location, respond to your behavior, and know where you've purchased items in the past. "Rather than having to wait until you receive your credit card bill at the end of the month, to see if you've amassed any reward points, a lot of this can shift to real time," says Brett King, CEO of Moven and a best-selling financial services analyst. "It's about reducing friction around day-to-day interactions with the financial system."

But it's a service that has been tried by others before. Can Apple succeed where Google Wallet and PayPal have not?

We spoke to two of Apple's partners who believe Apple has gotten mobile payments right, to find out what it's like working with the most secretive company in the world to build the digital wallet of tomorrow.

Apple's Timing Is Right

Tangible evidence that Apple had been paying close attention to mobile payments first came in 2013 when the iPhone 5s shipped with a fingerprint authentication sensor built into the Home button. "The mobile payments area in general is one that we've been intrigued with," Tim Cook said during Apple's Q1 earnings call this year. "It was one of the thoughts behind Touch ID."

If done well, mobile payments could become a strong first line of defense against escalating credit card fraud, which now surpasses $15 billion a year globally. Card fraud, which includes debit cards, has been a particular challenge in the United States: though the country handles only 25% of the world's card transactions, it accounts for 51% of the fraud. The reason primarily comes down to the fact that the U.S. has so far failed to adopt the EMV chip-and-PIN technology that the rest of the developed world has been using for a decade.

Apple Pay is even more secure than traditional EMV chip-and-PIN cards. With Apple Pay, a user's card information gets stored in an electronic wallet simply by taking a picture of the card with an iPhone. That picture captures the 16-digit primary account number (PAN) on the front of the card, along with the expiration date.

Apple Pay then applies an algorithm that "tokenizes" the PAN. After you confirm your identity with your fingerprint, the token is passed from your iPhone to a retailer or website, completing the transaction. The retailer or website never sees your PAN, and if the token representing it is intercepted, it is totally useless. When you consider that the majority of credit card thefts occur because a thief steals those 16 digits, it's evident how much a system like Apple Pay could help thwart the $15 billion worth of fraud that happens annually.
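Tokenization can be sketched as a keyed one-way derivation: the merchant receives an opaque token instead of the PAN. Apple Pay's real scheme (EMV payment tokenization, with device-specific account numbers and per-transaction cryptograms) is considerably more involved; the key, derivation, and token format below are assumptions for illustration only.

```python
# Illustrative tokenization sketch: derive an opaque token from a PAN
# with a keyed hash. In a real network, the secret lives in a token
# vault at the issuer/network, never on the merchant's side.
import hashlib
import hmac

ISSUER_SECRET = b"issuer-side-secret-key"  # hypothetical vault key

def tokenize(pan: str) -> str:
    """Return a 16-hex-character token; the merchant never sees the PAN."""
    return hmac.new(ISSUER_SECRET, pan.encode(), hashlib.sha256).hexdigest()[:16]

pan = "4111111111111111"  # standard test card number
token = tokenize(pan)
print(token)
```

An intercepted token reveals nothing about the underlying card digits: only the party holding the vault key can map a token back to a PAN.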

Apple's unique use of biometric identification and PAN tokenization makes Apple Pay "between five and six times" more secure than existing card payment systems, according to Barry McCarthy, president of Financial Services at FirstData. And he should know because not only is FirstData the largest payment processor in the world, it was also among the first companies Apple partnered with for its mobile payment platform.

Apple Dictated The Terms

"Our involvement with Apple started with a conversation explaining the role that we play in the payments ecosystem," McCarthy says, rather humbly considering that whenever anyone anywhere uses a credit or debit card, chances are that FirstData has had a role in that transaction. "Upon understanding what we did, Apple invited us to participate with them under very strict non-disclosure agreements."

Many have speculated that, in order to launch a service like Apple Pay, Apple had to have been working on it for years. Whether that's the case from a project management and software development perspective isn't known, but it appears that Apple began seeking out its partners much more recently. When asked how far back the deal with Apple goes, McCarthy says that FirstData has been an "Apple collaborator for many, many months already."

A spokesperson for JPMorgan Chase bank, one of Apple Pay's most prominent partners, confirmed that the process started even earlier: "We've worked with Apple since the summer of 2013 on this announcement and we have a long-standing history with them." Of the 400-plus employees who worked on the highly secretive project, only about 100 knew the mobile payments partner was Apple.

Apple is famous for dictating strict terms while setting up sales in the iTunes Music Store, and the company behaved in a similar manner while establishing Apple Pay: insisting that all its partners (including major banks like JPMorgan Chase) use code names to mask Apple's involvement. Meetings took place in windowless office rooms.

"Our first codename was Yosemite," McCarthy says. "Later on when we found out that was also the name Apple had selected for its new OS, we changed it to Project Acadia, after another U.S. national park. We weren't allowed to use or even say the name of the technology company we were working with--which was of course Apple."

Nonetheless he was impressed by the company, which he describes as being full of "incredibly bright individuals who understand mobile payments [more] than any other folks from other industries I've met before. What they really understood was what users would want from this, and how they'd like to interact with a payments system. They had a really clear vision of what they wanted to achieve."

Apple Didn't Reinvent The Technology

Google Wallet launched in 2011 to great fanfare, with Google payments VP Osama Bedier calling it "one of the biggest investments" the company had ever made. On paper, Google Wallet had several advantages working in its favor. It was first, for one thing, and it let users pay with any credit card they wanted. Furthermore, Google's data-driven approach to mobile payments opened up new possibilities for personalizing the payment process. But the project faltered nonetheless amidst poor execution and competing mobile payments systems, and it never really got its foot in the door.

Apple, for its part, has a better record of entering new business areas in financially viable ways--although even Apple's record isn't flawless when it comes to new markets. For instance, it has singularly failed to claw the e-book industry back from Amazon--not helped by a lawsuit targeting the company over its pricing model.

So why can it succeed this time?

Apple decided not to reinvent the technological wheel, and is instead focusing on the user experience. "Everyone was waiting to see what Apple was going to do in terms of providing us with an industry standard," says King, CEO of Moven. But that standard isn't a new type of technology. Indeed, Apple Pay works via NFC--a communications technology that is over a decade old. Instead, says King, "The genius of what Apple is doing in this scenario is that they are innovating on the consumer experience of payments. They are not innovating in terms of the core [technology or] of how payments are processed behind the scenes. It's all about how the consumer initiates and experiences the payment process."

"I was in a taxi this morning in New York, and I paid using my iPhone 6," continues FirstData's McCarthy, who was sent one of Apple's next generation handsets following Tim Cook's announcement, with the express aim of testing Apple Pay. "It's so much easier than reaching into my pocket to get my wallet, and then fumbling around to get the cash out. I was already using my phone to check my emails. I simply tapped the NFC reader in the back of the taxi and paid like that."

Then there's another major point working in Apple's favor: an industry-mandated transition to EMV "chip cards" in 2015, which are set to finally begin replacing the magnetic stripe cards that have been used in the U.S. for decades. Although magnetic stripe cards are far from secure (they've been all but replaced in most other developed countries), until now the argument against replacing them came down to the idea that it's cheaper to absorb the fraud committed each year than to replace every point-of-sale terminal in the U.S.

From next October, however, a policy shift will mean that liability for a fraudulent purchase lies with the merchant, rather than the bank, if the merchant doesn't have an EMV terminal. That will be a major incentive for merchants to upgrade their point-of-sale terminals. Companies like FirstData are additionally making it easy to add NFC on top of EMV--driving merchant adoption in a way that has never been done before.

"The time is right for this change to take place," says McCarthy.

If Apple Pay is the mobile payments solution we've all been waiting for, could this mean the end of plastic credit and debit cards? Not according to King--at least not anytime soon.

"While I can definitely imagine a world that is completely cardless, I think in the short term what Apple is doing is accelerating the number of payments that will be made electronically versus those made with physical payment forms--cash or checks. The number of electronic transactions is going to greatly increase."
