
LinkedIn's Data Guru's Second Act At Salesforce


Last month, Salesforce did something they're known for: They bought a company. But RelateIQ, which fetched the tidy sum of $390 million, is no normal acquisition.

RelateIQ's core product uses machine learning techniques to automatically scrape and extrapolate information from customers' calendars, address books, phone activity, and inboxes in real time. By purchasing RelateIQ, Salesforce didn't just gain a powerful commerce tool: They also acquired a potential future rival.

Last year, RelateIQ hired DJ Patil, one of the world's best known data scientists, as their vice president of product. In his prior life at LinkedIn, he served as the company's chief data scientist, head of analytics, and even chief security officer. Patil played a prominent role at high-profile failed startup Color Labs, and his resume includes stints as a strategic advisor for the Defense Department and the Department of Energy. Along with Cloudera's Jeff Hammerbacher, Patil coined the popular job title "data scientist." RelateIQ CEO Steve Loughlin is working with Patil on one of data science's holy commercial grails: Turning the anarchy of the email inbox and the telephone call log into a revenue-generating product for enterprise customers.

Fast Company spoke with Patil earlier this year, prior to the Salesforce acquisition. Patil told us that "We're focused on the whole idea of how you manage relationships as being fundamentally broken. How do you manage a relationship when someone emails you, how long has it been since they last emailed you? We're building a relationship management tool, with the whole idea of building intelligence around relationships and, of course, the data science that goes into it." In real-life terms, this means RelateIQ is trying to drastically cut data entry time by automatically populating forms and records--a service for which clients pay a steep fee.

RelateIQ, for its part, seems to be on track to change how large, messy data sets like inboxes are used inside of traditional companies. VentureBeat reports that Salesforce is believed to be building a new research department around RelateIQ that would fulfill a role similar to that of Google X within Google. The company's current core product is similar to an expanded version of LinkedIn and Google's social analysis tools; among other things, they identify "warm connections" within a user's extended social network to cold call/email, automatically ping users concerning unanswered emails from important contacts, and automatically transcribe every interaction with a customer.
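Identifying a "warm connection" is, at its core, a second-degree walk over a contact graph. Here is a minimal sketch in Python; the contact-graph structure and function name are invented for illustration, since the actual implementation is not public:

```python
from collections import defaultdict

def warm_connections(contacts, me):
    """Find second-degree 'warm' connections: people reachable through
    someone you already correspond with, but not in your own contacts.

    `contacts` maps each person to the set of people they email.
    Returns {warm_contact: [mutual contacts who could introduce you]}.
    """
    direct = contacts.get(me, set())
    warm = defaultdict(list)
    for friend in direct:
        for candidate in contacts.get(friend, set()):
            if candidate != me and candidate not in direct:
                warm[candidate].append(friend)
    return dict(warm)

# Example: "me" knows ann; ann knows bo; bo is a warm connection via ann.
graph = {"me": {"ann"}, "ann": {"me", "bo"}, "bo": {"ann"}}
```

Calling `warm_connections(graph, "me")` on the toy graph above returns `{"bo": ["ann"]}`: the person you could be introduced to, paired with who can make the introduction.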

Patil feels a big part of the current demand for data scientists is because more organizations realize their value and how they can transform businesses. "People became interested because of the 2012 Obama campaign, with Nate Silver taking on Karl Rove and even the rise of fantasy sports leagues. Then, tertiary, is the quantified-self movement. People are less intimidated by working with numbers or algorithms when it's involved with their daily life or sports. Look at Mint.com, people thought math was a horrible thing but now they see data on how they're spending or saving. Just look at LinkedIn's profiles or Klout--they're all analytics."

Although the RelateIQ acquisition isn't one of the largest, money-wise, in Salesforce's history, the purchase augments Salesforce's data science arsenal and means competitors like Oracle will have to bulk up their own staff of data scientists working on relationship-management projects. If RelateIQ fulfills its promises of stopping team members from accidentally emailing the same queries to clients or customers, or allows salespeople to decipher the corporate hierarchies of their potential contacts through algorithms, there could be some lasting changes to the world of sales.


EBay Is Running Its Own Sociology Experiments


For e-commerce giants like Amazon and eBay, personalization is the name of the game. We live in an age where Internet pages are increasingly customized to individual users, all in the name of maximizing potential advertising or product revenue. EBay has been one of the companies at the forefront of this practice; back in late 2012, eBay launched a major redesign centered around Pinterest-like feeds. These feeds, which push content based on eBay search histories and browsing habits, now dominate eBay's homepage.

Behind eBay's customized homepage, app content, and landing pages lies the larger tale of a company transitioning from a traditional auction site to a middle person for brick-and-mortar companies in the digital world. This requires a staff of researchers--primarily data scientists and machine learning researchers, but also from the social science sphere--who can wed quantitative and qualitative research traditionally found in academia to the world of e-commerce. Elizabeth Churchill, eBay's director of human-computer interaction and a veteran of Yahoo and Xerox PARC, has a unique mandate: Getting data scientists inside the heads of different kinds of eBay customers.

Churchill, whose academic background is in experimental psychology and knowledge-based systems, supervises a staff of three researchers and six interns. The interns come from PhD programs in STEM disciplines, ethnography, and communications. A large part of her team's role is understanding different varieties of customers who use the service--and wedding eBay's internal data to sociological work to figure out how to tweak the service's appearance and behavior for different users.

"One of things we have is different forms in data," Churchill told Co.Labs. "Not just behavior data, but transaction data, a lot of data from interviews, surveys, and ethnographic work. We really do a lot of 'experience mining' to look at what the data doesn't tell us, so we can find the questions we want answered. We drive ethnographic process by looking at data that exists in scale to sample the right people to talk to to find people to speak about what they do off eBay in their general life experiences, as well as what's on eBay."

In real-life terms, that means Churchill and her team research specific subgroups of site users--ranging from new eBay users to purchasers of vintage clothing to purchasers of low-cost bulk items to different kinds of resellers--to find trends in the items they purchase or the way they navigate the site. That information is combined with ethnographic research to help eBay's team tweak the site experience users have.

By email, Churchill added that "We use data science techniques to classify activity types, use ethnographic research to dig deeper into the motivations behind these behaviors and to classify user types beyond the classic marketing categories, develop behavioral 'traits' that correspond to different shopping orientations and activities, and use our eBay data in the small and large to more deeply investigate onsite activities and develop predictive models."

This means more than just the items that show up on the homepage or what auctions are most prominently featured in the mobile app. Demographic and site use data about eBay users is used, Churchill says, not for homepage design but for notifications. The emails users receive from eBay are shaped considerably by demographic information. "Demographic data is used most effectively for notifications and marketing campaigns, rather than algorithmic recommendations," she added. A big part of this is using data about a user to figure out the sweet spot that will get them to visit eBay more often without annoying them.

For eBay, which faces competition from Etsy, Amazon, Target, and a host of smaller competitors, combining data science with the social sciences makes sense. Analytics help them understand user behavior, and anthropological and sociological fieldwork allows them to leverage behavior their competitors might not perceive.

It's an unorthodox job: In our conversation, Churchill explained that eBay is able to parse data points relating to purchasing of vintage band T-shirts by Japanese consumers (which is apparently a very valuable market segment) and identify blips like Sears' flying jackets that are popular with small market segments. If a particular band's T-shirts sell more among consumers in a specific geographic area, that data is then leveraged for the future.

Churchill added that, for her team, empathy--being able to place themselves in the shoes of users who use eBay in different ways--is the most important aspect. "I build multidisciplinary groups because understanding users' emotional journeys means a mix of computer scientists, front end developers, game designers who look at gamification elements, and social scientists for ethnographic fieldwork." In the world of commerce, data science needs all the data points it can garner to be useful. For researchers, this means embracing the social sciences as well.

Inside NYC's Bold Plan To Turn Payphones Into Wi-Fi Hotspots


The contracts for New York City payphones expire on October 15, with no plans to replace them. So what can a big city do with thousands of payphone terminals?

The answer might surprise you. Wi-Fi hotspots. Solar panel charging stations. Augmented reality kiosks. This stuff isn't fantasy, but building out a new network of "smart" terminals is an enormous risk. New Yorkers haven't cared about payphone boxes for at least a decade, and building one system for a diverse population is a recipe for mediocrity.

Can New York pioneer the smart payphone kiosk, or is this just a misguided tragedy of the commons waiting to happen?

We're about to find out. After a period of open submissions, NYC has received a number of proposals from companies vying for the chance to re-create the iconic payphone network. Control Group, the folks who built the interactive subway maps (and our neighbors here in downtown Manhattan), have set the bar high for public interactivity. But speculation that tech giants like Google and Samsung may be in the running will push expectations even higher.

Cell Phones Versus Hurricane Sandy

With the bulk of New Yorkers owning and using smartphones, payphones may seem a bit antiquated, but they do still serve a purpose in a pinch.

Councilmember Ben Kallos, who represents Manhattan's Upper East Side and has a background in software development, says that the first priority is "making sure that phone booths remain," rather than uprooting them entirely as might be tempting in an era of ubiquitous cell phones.

During Hurricane Sandy, which devastated low-lying coastal areas of New York City, payphones became a lifeline for residents in need of help. With cell phone networks out of commission, payphones, with their old-fashioned copper wire infrastructure, were often the only way residents in distress could call for help or communicate with loved ones.

"We have these phone booths that have become under-utilized," says Kallos. "If you walk around my district, you'll see that many of these booths don't even have phones in them. And when you're talking about a brave new world with Sandy, we need to know that everyone has copper to the home and copper to the street corner."

NYC's Department of Information Technology and Telecommunications (DoITT) is the agency tasked with finding a way to both maintain this vital city infrastructure and update it into a more 21st-century form.

"You can take these structures that already exist--and there is still a use case for them in certain dire circumstances; we saw that in the aftermath of Hurricane Sandy, for instance--but rather than just that, let's have them be something that people want to use all the time, because they look beautiful and they provide a service that people want anyway," says Nicholas Sbordone, a spokesman for DoITT.

But getting New Yorkers excited about using payphones, which most see as over-glorified advertising or a vehicle for graffiti, is not an easy task. For DoITT, it meant adopting an ideation model--adaptive reuse--that is often more associated with architecture and design than software and information technology.

"We talk about adaptive reuse internally," says Sbordone. "I love hearing about stories like that. Personally, the High Line is one of my favorite things in the world. It was able to be reinvented and reused and now everyone loves it. I'd love for something similar to happen with payphones in this city."

Designing The Payphone Of The Future

Adapting and reinventing a piece of city infrastructure that is at once essential and dated is not easy. The process of reimagining the payphones actually began a couple years ago, while Michael Bloomberg was still mayor.

DoITT's first step, Sbordone tells Co.Labs, was to simply get information from people in the know. In city government parlance, that meant issuing a request for information, or RFI.

"The idea was to open this up to folks who are not necessarily from city government," says Sbordone. "We might not be the ones with the best ideas and we're certainly not the only ones with good ideas."

The RFI was followed by the Reinvent Payphones Design Challenge in early 2013 that led to a number of futuristic renditions. Sbordone says that DoITT received over 125 submissions from around the country. (One such entry, from a current old-fashioned payphone operator, Titan, was profiled in Co.Design.)

"Having opened it up to students and designers from across the world, they came up with some really fascinating concepts," says Sbordone.

DoITT also began a pilot program for free Wi-Fi from payphone booths, which ran at 27 locations in all five boroughs. According to Sbordone, it was one of DoITT's most successful programs and constantly received requests for expansion. NYC's open data set on the pilot program seems to confirm that.

"The idea was to get as broad-based feedback as possible--from the RFI, the Wi-Fi pilot, the design challenge--and bake it, if you will, into the best request for proposals that we could develop," says Sbordone.

Payphones for the Future That Plan for the Future

Beyond basic payphone functionality, prototypes submitted in the design challenge included: augmented reality interfaces, voice and gesture controlled kiosks, movement sensors, and solar panels to make the booths both environmentally friendly and natural disaster resistant.

Kallos, familiar with the ins and outs of the pace of technological improvement, tells Co.Labs that he is paying particular attention to how easily these new information hubs can be upgraded once installed. He is wary of the city signing onto a solution that ends up being a one-and-done arrangement, leaving New York with another outdated system years down the line.

"I'm hoping that whoever we choose provides a living wage to employees, has a knowledge transfer provision so that the city uses open hardware and open software, is committed to serving all of our city's diverse communities, and will make sure that whatever we build here is always state-of-the-art," says Kallos.

Fitting in the City Landscape

Beyond the baseline needs for a disaster such as Hurricane Sandy, city politicians hope that the new communication hubs will enmesh themselves into the city's fabric. Councilmember James Vacca, Chair of the Technology Committee, and Councilmember Kallos both expressed to me their desires for these new installations to equally serve tourists and city residents.

"I'm excited to bring Wi-Fi Internet access to various parts of our city, perhaps with sightseeing information and calendars of public events," says Vacca.

The city is effectively offering a company the opportunity to build its own franchises in exchange for the advertising rights on the installations in commercial areas. Presently, payphones in NYC are franchised out to a handful of different companies. But DoITT intends to award this new franchise exclusively to one company or consortium.

So, while the project will not cost the city any money--and will in fact guarantee the city $17.5 million in income or 50% of advertising revenue, whichever is more--there are concerns within city government about effectively creating a monopoly on these new communication hubs.

"My main concern is that boroughs outside of Manhattan be served equally," says Vacca, who represents one of the Bronx's districts. "DoITT believes one bidder city-wide can be more effectively leveraged to provide coverage to the outer-boroughs. I understand that reasoning, but also have concerns about creating a monopoly."

The request for proposals that DoITT put out does in fact specifically encourage bidders to state how many new installations they would be willing to build in the outer boroughs and how many non-advertising installations, which would predominantly be outside of Manhattan, they would be willing to operate. Sbordone says that both of those numbers will factor heavily in DoITT's ultimate decision.

Thorny political discussion aside, the potential to revamp city infrastructure with something new and possibly even exciting is not being lost on city government officials.

"New York City has an opportunity to be a global leader on this," says Kallos. "I'd like to make sure that what we build here can be replicated."

These Delivery Startups Are Racing To Build The Best ETA Engine On The Block


For services that deliver products and people across town, efficiency is everything. But the existing GPS data sources are far from perfect. To stay competitive, companies like Lyft, SpoonRocket, and Factual are creating their own algorithms to figure out how to reach you faster. These normally tight-lipped companies gave us a peek into the geo data stacks they'll be unveiling in the coming months. Here's how they work.

Road Recognition

When ride-sharing company Lyft first started, employees tried to figure out how far the driver was from the passenger by using Google's API for Google Maps. Now, with tens of thousands of users, they've had to rack their brains for a better solution. "After a while, it's like, we could build a better algorithm than Google, based upon our own data," says Lyft's VP of Data Science Chris Pouliot.

Using Python, Lyft's team of 15 data scientists is working on a new algorithm that uses geo data for two functions: dispatching nearest drivers and calculating estimated times of arrival (ETAs).

From the minute passengers open the app, Lyft collects a niche set of data points multiple times a minute. The method tries to best ETA predictors like Google Maps or Waze, Pouliot says.

"The problem with that is people only have Waze open when they don't know where they're going," he says. "I'm traveling from San Francisco to Palo Alto, I already know where I'm going and I wouldn't have Waze open."

He says while drivers take passengers from San Francisco to Palo Alto, Lyft is collecting data to better understand similar trips and better predict an ETA. "We're going to know the speed that they're traveling, at different times of the day, different times of the week. From that, we think we can build a better model than Google's ETA estimates."

Instead of relying on road distance alone, Lyft time-stamps GPS coordinates to capture the variables that determine the quickest route between passenger and driver. "From that, we can figure out the speed that they're traveling, where they're going, and where they came from," he says.

He uses the analogy of Google producing translations by analyzing statistical co-occurrences of words across languages rather than hand-coded linguistic rules.

"The Google engine would say, 'Over 90% of the time when it says 'Hello', it says 'Hola' in Spanish. "We're not trying to beforehand figure out 'Well, this GPS coordinate is kind of close to being on 101' but we're taking the opposite approach, saying "Hey, it's 50 miles an hour, therefore it must be 101. That sort of thing."

Translating that into ride-sharing, Pouliot says Lyft's data scientists measure speeds at any given point on a map to differentiate road types, such as a highway versus a side road. "If you overlay those speeds on a map, it would translate to something such as you're going 60 miles an hour, you're probably on a highway," he says. "If you were on a side road, you wouldn't be going that fast."
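The speed-from-GPS inference Pouliot describes can be sketched in a few lines of Python. The fix format, function names, and speed thresholds below are illustrative assumptions, not Lyft's actual values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_road(fix_a, fix_b):
    """Guess road type from the speed implied by two time-stamped fixes.

    Each fix is (timestamp_seconds, lat, lon). Thresholds are illustrative.
    """
    t1, lat1, lon1 = fix_a
    t2, lat2, lon2 = fix_b
    hours = (t2 - t1) / 3600.0
    mph = haversine_km(lat1, lon1, lat2, lon2) / 1.609 / hours
    if mph >= 50:
        return "highway"
    if mph >= 25:
        return "arterial"
    return "side road"
```

Two fixes a mile apart and sixty seconds apart imply roughly 60 mph, so `classify_road` would label that stretch a highway; a fix pair implying 8 mph lands in the side-road bucket.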

The wide range of variables in GPS data is what attracted Pouliot, the former Netflix director of Algorithms and Analytics, to go work at Lyft. "There's the geo-spatial element to the data--it's a really giant economics problem, trying to balance supply and demand. All problems I never experienced at Netflix."

Taking a different approach to creating its own ETA engine, rival ride-sharing company Uber took the map-matching route, building real-time mapping into the logistics framework it uses to deliver everything from flowers to mariachi bands.

Using Python and JavaScript, Uber's data science team structured its prediction framework to determine the best place and time for drivers to be on the road near passengers, yielding its most accurate ETAs.

Late last year, the company also included a feature called "Share my ETA" for users to let their friends know where and when they'll arrive to their destination.

Fast Food

In the food industry, where weekly returning customers are the holy grail, SpoonRocket's geo data model is bringing customers back more than once a week.

Though it currently operates in only two cities, SpoonRocket is building its own geo data model, programmed in Python and R, to improve on its average ETA of eight minutes.

"What we do right now would have been a logistics nightmare 10 years ago," SpoonRocket CTO Anson Tsui says. "In terms of being able to dynamically route drivers all the time, there's a lot of very advanced routing stuff in the background."

The small startup has recently brought on a full-time data scientist to figure out localized ETA down to minutes for any given meal in San Francisco and Berkeley.

"Having our own in-house, our model is so specific to our needs that it just makes more sense to build our own," Tsui says. "I think that's worth waiting."

Including past data in its prediction models, Tsui says he uses the random forest algorithm as the most accurate and helpful tool for predicting food delivery ETAs.

"It computes these data subsets called 'Trees' and then out of all the 'Trees' that get calculated, we pick the one we want," he says. "It takes a little bit longer to compute, but it's the best out there for this."

Because of SpoonRocket's already quick turnaround times, users come back and order more than once a week--beating the average food service benchmark.

Foundation For Map Apps

But in order for businesses big and small to get the right data, they need the right framework for the right price--which is what big data company Factual hopes to enable with GeoPulse Geotag.

This August, Factual will be rolling out GeoPulse Geotag--built with OpenStreetMap data--which gives app developers access to a normalized interface to worldwide geography.

"It's basically a reverse geocoder, but it's more of an entity look-up that allows you to label all the digital assets created on your mobile phone," VP of product Tyler Bell says. "And then once you have that data, you can run that through your machine learning, so it is really a large-scale geographic annotation engine."

With a data stack baked down to 17 million U.S. places, Factual's vast geo data has been a center point for back-end map functionality for companies like Microsoft and Yelp.

Enabling open-source mapping like OpenStreetMap, Factual gives a full, core data set on which to hang attributes, so businesses don't have to create their own databases.
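At its simplest, a reverse geocoder maps a raw coordinate to the nearest known entity. Here is a toy sketch of that lookup, with an invented three-row place table standing in for Factual's tens of millions of entries:

```python
import math

# Toy place table: (name, latitude, longitude).
PLACES = [
    ("Ferry Building", 37.7955, -122.3937),
    ("Golden Gate Park", 37.7694, -122.4862),
    ("Oakland Coliseum", 37.7516, -122.2005),
]

def reverse_geocode(lat, lon):
    """Return the nearest known place to a raw GPS coordinate."""
    def dist_sq(place):
        _, plat, plon = place
        # Equirectangular approximation: fine at city scale.
        dx = (plon - lon) * math.cos(math.radians(lat))
        dy = plat - lat
        return dx * dx + dy * dy
    return min(PLACES, key=dist_sq)[0]
```

A production system would replace the linear scan with a spatial index (an R-tree or geohash buckets) so the lookup stays fast at dataset scale, but the labeling step is the same.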

With a database of about 70 million small businesses and landmarks in 50 countries, Factual uses its 1.3 million location data points for contextual information--a big people and places dataset.

Bell, a former Yahoo geo data expert, says that previously you had to have a fat wallet to purchase any type of map data. "Over the last 10 years, there's been a growth in the creation of open source or largely open geographic datasets that don't cost an arm and a leg to create maps."

Just a year ago, the thought of in-house mapping was overly ambitious, but in the next year I'd expect to see more full-time, specialized analysts, cheaper map data, less big-name proprietors, and more accurate ETAs--all quicker than you can say Moore's Law.

Bieree, The Smartphone-Controlled Beer Brewer


The key to perfecting a good home brew is monitoring and recording the temperature of the beer brewing process, according to 20-year beer-brewing veteran, Leo Estevez. But taking down and monitoring all the data is time consuming and requires you to keep a constant watch over the brewing equipment. So Estevez created Bieree, a programmable smartphone-controlled beer-brewing gadget, to do the job for him.

Estevez and his design colleague Sam Dalong revamped Bieree, which is in its final run on Kickstarter, into what it is today. Bieree's mobile app records and logs the temperature of bubbling liquids throughout the entire beer brewing process, with the kit's thermocouple. The app's program controls Bieree's pumps according to the temperature. All the data is right there on the screen, so you could monitor your beer-making from another room.

Hot water moves to and from the hot pot to the mash pot, all controlled by Bieree app.

But because Bieree's app and circuit board are programmable, you have the flexibility to add in other sensors to track. You could, for example, hook up an entire refrigerator or boiler to Bieree to track their temperature profiles at different stages of the brewing process. Or you could create an entirely new custom program that lets you manually control the pumps when you want and leave the automation to only certain parts of the process.

In a basic beer brewing setup, there are two pots. Hot water cycles back and forth between these two pots during most of the brewing. So it's key to control the pumping mechanism to regulate the near constant cycling of fluid. Bieree lets you do that with as much or as little automation as you want.
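The temperature-driven pump control described above amounts to a thermostat with hysteresis: turn the pump on when the mash drifts too cool, off when it overshoots, and leave it alone inside a dead band so it doesn't chatter. A sketch in Python; the 67°C mash target and one-degree band are illustrative assumptions, not Bieree's actual firmware values:

```python
def pump_command(temp_c, target_c=67.0, band_c=1.0, pump_on=False):
    """Decide whether the recirculation pump should run.

    Hysteresis: inside the dead band around the target, keep the
    pump's current state so it doesn't rapidly toggle on and off.
    """
    if temp_c < target_c - band_c:
        return True   # mash too cool: move hot water from the hot pot
    if temp_c > target_c + band_c:
        return False  # overshooting: stop cycling hot water
    return pump_on    # inside the dead band: hold current state
```

Reading the thermocouple every few seconds and feeding the result through `pump_command` is all the automation a basic two-pot setup needs; the app's per-recipe programs would just vary the target over the stages of the brew.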

You can use any Bluetooth programming app, like nBlueTerm, to put in the commands for Bieree. For non-programmers, Bieree's app, which runs on both iOS and Android, will come with standard controls for automatically brewing established styles: lagers, ales, and stouts.

Bieree app, still in development.

If a smartphone isn't really your thing, the team devised a handheld push button called the EZ Button that lets you manually control when the pumps move the hot water to and from each brewing pot.

The Bieree system comes with all of the electrical and mechanical components to control the setup, as well as a collapsible steamer. You provide the grains, hops, yeast, water, burner, and pots, according to the size of the beer batch that you want.

A more involved package comes with kegs and pressurization valves for brewers that want to pressurize their beer with their own CO2 pressurizers. And for experienced brewers that have their own gear, a stripped-down version comes with just the electronic components that you need to connect to the Bieree app.

Estevez and Dalong started out creating small, two-to-three-gallon batches on a kitchen stove but soon increased their output by moving from five-volt to 12-volt pumps. So far, they have been able to brew up to ten-gallon batches of beer, but by the time Bieree is ready to ship this fall, they think they can handle a higher volume of suds.

"We are redesigning the enclosure for larger pumps, which will enable anywhere from two to 20 gallons to be brewed with our device," Estevez told us.

The three-pot setup, for larger batches. The third pot is connected to the rest of Bieree with a gravity siphon.

With the electronics finalized, Estevez and Dalong will be putting the final touches on the app and mechanical components so that Bieree can adapt to whatever is in a brewer's technical arsenal. Soon, these brewers can add programming to their toolboxes.

Five Reasons Amazon's Fire Phone Is A Comical Failure


Amazon's new Fire phone got us pretty psyched when it was first announced. But now that we've spent some time with it, the Fire phone is a nightmare in the hand. Not every facet is a disaster, but there are enough basic niceties missing that I have a hard time believing for one second that CEO Jeff Bezos uses this phone as his only device. Unless this is his first smartphone.

It does, however, manage to get one thing right. We'll save the plaudits for the end; first, here are the complaints.

"Dynamic Perspective"

Dynamic Perspective is Amazon's version of 3-D. A feature so important the company decided it was worth four front-facing cameras dedicated to tracking your face. The cameras aren't subtle and define the look of the phone without much, if any, benefit. Seeing the lock screen move with the device's rotation garners about enough enthusiasm to make you say "hmm," but little beyond that.

In fact, Dynamic Perspective isn't the only thing that feels superficial. Almost everything on the phone gets a drop shadow and jiggles around, including some text, but I couldn't see any real value in all these embellishments. It's one thing to have depth in design, but Dynamic Perspective makes everything faux 3-D for no payoff. It's not just skeuomorphic overload. It's actually sort of maddening. (This feature isn't the same as auto-scroll, which I'll address later.)

Carousel

The Carousel might make sense on a Kindle Fire tablet, but it doesn't work on a phone. For one, it's confusing. It's hard to get an overview of your phone. The layout takes up too much space. On any main view of the phone, you can only see four dock icons and one large icon with its recent activity.

This brings up the second issue, privacy. The Carousel is a privacy nightmare. By default, each item on the Carousel displays its recent activity, or recommendations from Amazon that still look like your history. Whatever app you used last becomes the first Carousel item. It puts things like the last 12 photos you've taken, your most visited websites, and the dates and times when you played games on display for anyone who handles the phone to see.

You can turn off Amazon recommendations from showing up, but your photos and other items will always be on display. Good luck making your phone SFW with this feature front-and-center.

Peek, Flick, And Auto-Scroll

Gestures can be great, but they need to be extremely easy to find and use. Instead of putting app settings and extra information behind a software button, Amazon chose a flick of the entire phone to trigger special functions.

iOS has a shake gesture for "undo," which (while annoying) makes sense, because shaking the phone gets out a little of the user's frustration. The Fire phone's flick gesture is similar, but required to navigate your way through the Fire phone. Flicking feels like smacking an old CRT TV, or blowing in a Nintendo cartridge to get them to work. It's an act of frustration. Amazon totally missed this.

Funnier yet: Sometimes the motion doesn't register the first time, so even if you mean "flick," this gesture often turns out "frustrated."

Auto-scroll is another feature that uses the accelerometer to make things "easier" for the user. Tilting the device makes web pages scroll based on how sharply the phone is tilted. It's on by default and is by far the worst feature on any phone I've used. Pages just start moving as you begin to read, since you're not normally forced to keep a phone at one specific orientation to read.

Reacting to auto-scroll is when things get crazy, though. Here's an example: You click on a link, begin reading an article, and the page starts moving. That causes you to overcorrect, sending the page too far forward, then too far back. Doing this can also accidentally register the flick gesture, making miscellaneous settings pop onto the screen.

It's at this point you'll either turn off all the gestures or throw the phone at the wall.

Customization

The iPhone's limited customization looks like "anything goes" compared to the Fire phone. For example, you can't change the gray, bumpy-textured main wallpaper of the phone. You also can't change the color or anything else associated with how the software looks.

You can change the lock screen image, but that's it.

The reason a lack of customization is such a big deal is that the current graphics and visual design are just eye-bleeding bad. I'm not a designer, and I couldn't make something better. But the visual design feels so dated that it's hard to believe Amazon couldn't.

Side note: There's also no emoji. Let that sink in.

Device

The Fire phone isn't attractive, but I don't think it's particularly ugly. If anything, it's wildly plain and industrial-looking. If you can get over the looks, including the five cameras staring at your face, it's still a hard device to use.

The phone isn't comfortable to hold. The corners are sharp where the back and sides meet. The back glass is also slippery, but worst of all it gets really hot. I played a slot machine game for about eight minutes and Sonic The Hedgehog for about 12 minutes. By the end, I was constantly shifting my grip to keep my fingers on the back from sweating and getting too hot.

The One Good Thing

The good thing, if there is one, is GIFs.

At this point, the Fire phone should just be advertised as the GIF phone because nothing else garners a positive reaction. Tucked away in the settings of the camera is a feature that will let you shoot lenticular photos--a series of 11 pictures put together--which are native GIFs.

In testing it out, I was able to snap a series of pictures, hit share, and send the moving image in a text message. The file comes out in a .len format, but it's part of the GIF family, and it uploaded to Imgur. It also worked natively on iOS and in Google Hangouts.

Can Hackers Help Save North Korea?


North Korea has been called the world's most inhospitable place for media freedom (PDF), a place where an authoritarian government sharply restricts communications out of the country and obsessively monitors phone call activity. At a hackathon in San Francisco this past weekend, participants teamed with North Korean dissidents on a novel project: Developing tech to break North Korea's communications barricade.

The two-day event, called Hack North Korea, was organized by the New York-based Human Rights Foundation (HRF). Thor Halvorssen, the head of the Human Rights Foundation, told Co.Labs that the idea originated with Alex Lloyd of venture capital firm Accelerator Ventures. Lloyd (who had collaborated with the HRF in launching radios into North Korea before) added that the idea behind the hackathon was to build connections between the tech community of Silicon Valley and North Korean dissidents currently living in the United States and South Korea.

Approximately 35 participants broke up into eight teams over the weekend to brainstorm projects to smuggle information into North Korea. Four North Korean expatriates, including North Korean "Enemy Zero" Park Sang Hak, went from team to team with translators to offer assistance. Participants came from a wide range of backgrounds, including college students, information assurance experts, Silicon Valley startup types, and journalists who previously covered North Korea.

Courtesy of the Human Rights Foundation

The winning team, which will receive a trip to Seoul, South Korea, to work with North Korean dissidents there, came up with an idea to create Raspberry Pi-based micro-radios that can pick up South Korean broadcasts and also contain pre-loaded SD cards. They also came up with the theoretical idea of iPad-sized satellite receivers, easily smuggled over the China-North Korea border, which could connect North Korean televisions via USB or coaxial cable to South Korean satellite provider Skylife. This team consisted of a former Google engineer using the pseudonym Matthew Lee, and two homeschooled Korean-American 17-year-olds from Virginia named Madison and Justice Suh.

Other projects discussed in the hackathon included methods of encrypting USB drives so information contraband could not be found by North Korean authorities, using satellite modems to discreetly send short text messages, and ways of hiding radio broadcasting equipment along the China-North Korean border.

It's important to note that the event appeared, by all measures, to be more about publicizing technology's role in piercing North Korea's information wall than about serving as an R&D lab. The hackathon had been previously promoted in venues such as The Guardian and Hacker News, representatives from the U.S. State Department appeared to be at the event as guests, and press releases were issued with information about the hackathon winners. The corollary to this, of course, is that some very interesting technology for smuggling information into North Korea is likely being created right now behind closed doors.

A lot of this information smuggling takes place using balloons. Park Sang-Hak uses balloons and GPS guides to smuggle DVDs containing Korean-language Wikipedias, propaganda booklets, and American currency over the South Korean-North Korean border. Park said at the event via a translator that "There's unceasing interest by the North Korean regime in harassing and threatening defectors and others interested in this work. Right now, it's a lonely task--people who should be interested are disinterested. But someday, 20 million North Korean citizens will realize they can have the freedom and liberty defectors in South Korea have."

Lloyd added by email that "Every defector group I met, in the simplest and most eloquent terms, explained to us that the game-changer in North Korea would be access to information. Together, we brainstormed about what types of technology might help. Some dissidents favored radio transmissions. Others favored USB sticks smuggled across the border from China. And some wanted to launch balloons guided by a simple GPS device to drop payloads of leaflets and thumb drives when they had reached a certain location inside North Korea. Why stop there? Peer to peer mesh networks, perfectly encrypted messaging services like Wickr, drones, cheap Wikireaders, and Kindles--the possibilities are limitless."

For North Korea's population of 24.76 million there are approximately 3.5 million computers and 1.5 million tablets, along with 3 million mobile phones. Most computers in the country use either older versions of Windows or a homegrown Unix variant (skinned to look remarkably like Mac OS X) called Red Star OS. For more on North Korean technology, Martyn Williams' North Korea tech blog and Chad O'Carroll's NK News offer excellent primers.

This Heads-Up Display For Your Car Has One Big Flaw


Back when the iOS App Store was in its infancy, developers were experimenting with all sorts of wacky apps. One of those was meant to turn your phone into a heads-up display (HUD) by placing it on the dash and reflecting speed and direction on the windshield.

Years later, with a nascent hardware economy happening on crowdsource platforms, Navdy is trying to bring back the heads-up-display concept. The Navdy is a dedicated piece of hardware that sits over the steering console and displays information about your drive or route onto a translucent piece of plastic--your very own version of the fighter plane cockpits that inspired the concept.

Navdy just went up for pre-order at $300 and will retail for $500. That's a lot of money for what's being called Google Glass for the car, but it wasn't so long ago that freestanding GPS units occupied the same real estate (and the same price point). The difference between old GPS units, Google Glass, or even a smart watch and Navdy, you could argue, is that the more readable interface might actually save your life. Still, even at the discounted price of $300, we're not sure how many people are going to shell out this kind of cash in the name of "safety."

The device will work with iOS 7 and Android 4.3 and above and connects to your smartphone over Bluetooth and an ad hoc Wi-Fi network. The connection to the phone allows Navdy to get turn-by-turn navigation from Google Maps, among other data.

"All notifications can be sent from iOS and Android, but the user decides which ones and when they are displayed--so it's not all or nothing," says Navdy founder Doug Simpson.

Navdy also supports messaging and direct calling. Interaction with the device is done with either voice commands or gestures detected by a front-facing camera.

BMW and a few other car manufacturers have been testing native HUDs built into the vehicles for a while, but car manufacturers' UI and UX are usually horrific compared to third-party devices. It certainly makes for a cleaner look not to have a brick sitting on the dash with cables streaming down, though.

Navdy is compatible with a lot of the apps people are already using in the car, like iTunes and Pandora, and more functionality will be added with future software updates. Even if the HUD only gets bug fixes, it'll likely be a better experience than anything offered by car manufacturers.

Simpson also says that there will be a developer platform coming next year for the HUD. Though with so many other app stores and devices developers can program for, it'll be interesting to see if dedicated car hardware garners enough interest to be a viable ecosystem.


The Race To Be The Ultimate Siri Killer


If you watch TV, you've probably seen the Microsoft commercials that take potshots at Siri. Ditto from Google, pushing Google Now. But it's not just the big names of tech who are running in the digital assistant race: Plenty of smaller companies are also competing for a piece of that sweet predictive analytics pie, and one of them looks especially formidable.

Expect Labs is a small, San Francisco-based startup whose flagship product is an API called MindMeld. A combination of voice recognition and machine learning technology, MindMeld is a big bet on the Internet being navigated through automobiles and cable boxes rather than touch screens or desktops. Expect Labs sends crawlers through clients' websites to build custom knowledge graphs, provides voice recognition client libraries for developers, and then uses MindMeld's API for natural language processing of user queries. In not so many words, it adds Siri-like capabilities to any website or app.

The API launched in early July; cable provider Liberty Global is on record as one of their early clients. CEO Tim Tuttle told Co.Labs that some of the company's other backers include Google Ventures, Samsung, Intel, and Spanish communications giant Telefonica.

Shortly after we spoke, Expect Labs received funding from an unexpected source: American intelligence agencies. In-Q-Tel, an investment arm of the CIA and the United States' other three-letter security agencies, entered into a strategic investment and software development agreement with the company. The terms of the In-Q-Tel investment are not known, but according to the press release, Expect will help "voice-enable a wide range of potential applications for use across U.S. Government agencies."

Prior to the In-Q-Tel funding round, Expect Labs is believed to have raised approximately $2.4 million. "Voice is a very useful input for new sorts of devices like set-top boxes or Apple TV," Tuttle added. "We're also looking at a lot of connected car applications. There's a huge investment in connected voice applications inside cars, and voice-driven intelligence systems for cars that accomplish tasks while driving." Other areas of interest, he added, include helping customers navigate large video libraries or product catalogs through tablets for their clients. Expect Labs feels that adding voice queries to video-on-demand and watch-through-tablet services makes finding content easier than wading through multiple menus.

Other companies are also looking to add personal assistant components to third-party products. Another Bay Area company, Speaktoit, is funded by Intel and Motorola and is positioning their API as a tool for automakers and in-car app makers. According to a Speaktoit fact sheet, the company can build "vehicle-specific dialog interfaces," and already has a dozen OEM partners in the automotive world. It's important to note that both Expect Labs and Speaktoit offer their products in multiple languages; Spanish, Korean, and German are just as relevant to the personal assistant world as English is.

Who Really Controls The Wearable Tech Market?


Sensor technology, more than anything, determines what your $100 activity tracker can do. Today, it's counting steps. But the smart wearables of tomorrow? They depend disproportionately on companies like Raleigh-based Valencell, which develops and patents the miniature sensors that big-name brands are now using to play in the wearable biometrics space. It has 14 patents on its biometric sensor and 40 more pending.

Valencell, which received $7 million in series C financing this past June, specializes in blood-flow measurement and licenses its miniature optical sensors to several global brands, among which are Intel and LG. This year marks the first time that the company's sensors have shown up in consumer products, mainly for heart-rate measurement. LG put out the Heart Rate Monitor Earphone this year, making it one of the first commercial products on the market to use Valencell's patented sensors. Intel announced its partnership with Valencell at the International Consumer Electronics Show, solidifying its plans to come out with its own version of the heart-rate monitoring earbuds.

In total, eight third-party products will be using its sensors by Christmas, Valencell CEO Michael Dering says.

The LG Heart Rate Monitor Earphone

Smaller companies, like Iriver and the European audio brand Blaupunkt, are also launching headsets this year with Valencell's biometric sensors. The company Scosche will use Valencell's tech in new armbands. Steven LeBoeuf, president and cofounder of Valencell, says more than 50 companies are either in discussions with Valencell, evaluating its technology, experimenting with it, or integrating it into their devices.

"One of the reasons we've gotten so many new customers in the pipeline now is because a lot of these large companies--we can't name names about which ones these would be--had products that simply did not work," says LeBoeuf. "They had to look around and find technology that would make them work, and they came to Valencell."

The Competition For Sensors

In terms of technology, the first wave of activity trackers only required a simple accelerometer to record and generate a user's activity, so getting to market was not a terribly grueling endeavor.

"Who knows how many fitness and activity wearables are out there? If you look at all of these companies worldwide, there's probably 20 of these things," says Jason Krikorian, general partner at the investment firm DCM. These trackers all do pretty much the same thing--measuring steps with an accelerometer.

Now that device makers are asking for heart rate monitoring and other metrics, sensor R&D has kicked into high gear. Since Valencell isn't bogged down by manufacturing its own consumer products, it can go a step further. Its clients handle integrating Valencell's optical sensors into their devices, which has turned out creative form factors, like earbuds.

How Heart Rate Sensors Work

Specifically, Valencell's sensors use optomechanical technology to measure a patient's blood flow. From the blood flow data, you can extrapolate the heart rate, respiration rate, and the blood-oxygen level of the user, in addition to a few more metrics. In this way, Valencell's tech can span both the heart rate and activity monitoring domains.

Through its research and development efforts, Valencell was able to correct issues that pulse oximetry couldn't get around before. Its technology can obtain good readings from people with different skin tones and correct those signals when sunlight shines on the sensors. Part of Valencell's secret sauce involves shining multiple wavelengths from the sensors' emitters and then using a simple averaging and subtraction algorithm on the measured signals.
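The article only says Valencell uses "averaging and subtraction" across multiple wavelengths; the patented details aren't public. Still, the general idea — subtract the ambient (emitter-off) light level from each wavelength channel, then average the channels to suppress per-channel noise — can be sketched as a toy illustration (the data layout below is assumed, not Valencell's):

```python
def clean_pulse_signal(frames):
    """Toy ambient-subtraction + cross-wavelength averaging.

    Each frame is a dict with an "ambient" reading (emitters off) and a
    list of "wavelengths" readings (one per emitter wavelength).
    Returns one cleaned sample per frame.
    """
    cleaned = []
    for frame in frames:
        ambient = frame["ambient"]
        # Remove the ambient light component (e.g., sunlight) from
        # each wavelength channel, then average across channels.
        corrected = [reading - ambient for reading in frame["wavelengths"]]
        cleaned.append(sum(corrected) / len(corrected))
    return cleaned
```

Averaging across channels also helps with varying skin tones, since a channel that absorbs poorly for one user still contributes partial signal rather than dominating the reading.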

The most important hurdle for these sensors is the "heavy user" test--in this case gym rats. Before Valencell's technical improvements, traditional pulse oximetry sensors only worked well on patients who lay still in a hospital bed. Many heart rate monitors on the market, like Intel's Basis Watch or the Mio line of products, had trouble picking up measurements from people moving faster than three miles per hour or lifting weights.

How To Hack A Sonos Sound System To Terrify Your Friends


In the dead of night, you awake to the sound of a creaking door. Upon finally falling back asleep, a dish smashes to the floor downstairs. Over the next few nights, it gets worse: First footsteps, then a wailing child in the distance. As rational as you consider yourself, it feels inescapable: This house is haunted. Before you freak out, you might want to check your Sonos app.

That's because you've probably fallen victim to Ghosty, an inventive Sonos hack created by a developer named Aaron Gotwalt. Using an unofficial Sonos API, some spooky audio files, and a Raspberry Pi, Gotwalt built a system that allows you to subtly take control of a Sonos system and freak people out with sounds that are straight out of a haunted mansion.

"Sonos and many other devices like it--things that live in our homes, connected to our network connections--have no effective security inside the network they're on," says Gotwalt. "As a result, it's quite easy to manipulate these devices for home automation purposes."

Some home automation products already put wireless speaker systems to work in useful and interesting ways. But if you ever wanted to tap into one's Sonos speakers for the sole purpose of gradually driving people out of their minds for your own entertainment, you were out of luck. Until now.

Sonos doesn't have an officially documented API, so Gotwalt--who contributes to the Sonos Ruby gem--helped craft an unofficial one.

"By observing and reverse-engineering the way that their official client software interacts with the devices we've managed to reconstruct a significant feature set," he says.

For Gotwalt, Ghosty is a proof of concept. With it, he hopes to demonstrate that the same unofficial APIs used for more useful purposes can be used for "unexpected things," as he puts it. In this case, horrifying anyone with a Sonos system.

Gotwalt's code runs off of a tiny Raspberry Pi, which can be surreptitiously connected to the same network as the Sonos speakers. Once it's running on the network, the creepy little app will randomly select speakers--which are typically situated in different rooms--lower the volume, and play haunted house sound effects at frustratingly random (and thus unpredictable) intervals. The only way to detect what's happening, Gotwalt explains, is by looking at the Sonos Controller app while the sounds are playing. Otherwise, the whole thing is pretty much invisible to the victim.
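Ghosty's source isn't reproduced here, but the behavior described above — pick a random speaker, drop the volume, play a random spooky clip, wait an unpredictable interval, repeat — is easy to sketch. In this hypothetical Python toy, the `Speaker` class stands in for real network calls, which Ghosty makes through an unofficial Sonos API (Gotwalt contributes to the Sonos Ruby gem; Python tinkerers often reach for the community SoCo library):

```python
import random

# Hypothetical clip names standing in for Ghosty's haunted-house audio files.
SOUNDS = ["creaking_door.mp3", "smashing_dish.mp3",
          "footsteps.mp3", "wailing_child.mp3"]

class Speaker:
    """Stand-in for one networked Sonos zone (no real network calls)."""
    def __init__(self, name):
        self.name = name
        self.volume = 40
        self.played = []

    def play(self, clip):
        self.played.append(clip)

def haunt_once(speakers, rng):
    # Pick a random room, drop the volume so the clip sounds faint
    # and distant, then queue a random spooky sound.
    speaker = rng.choice(speakers)
    speaker.volume = max(5, speaker.volume - 25)
    clip = rng.choice(SOUNDS)
    speaker.play(clip)
    return speaker.name, clip

def haunt(speakers, events=3, seed=None):
    rng = random.Random(seed)
    log = [haunt_once(speakers, rng) for _ in range(events)]
    # A real prank would sleep a random interval between events
    # (e.g., one to ten minutes) so the haunting stays unpredictable.
    return log
```

Because nothing here touches the Sonos Controller app's queue display directly, the victim sees evidence only if they happen to open the app mid-sound, which matches Gotwalt's description.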

"This was originally built to prank some coworkers who have tons of Sonos products," says Gotwalt. "I think it also asks interesting questions about the security of network-connected devices in our homes while being a benign little hack."

Despite the lack of an official API, this isn't the first time somebody has hacked Sonos. This Twilio-Sonos integration is just one example. A GitHub search for "Sonos" turns up many more. As tempting as it may be to try, Gotwalt admits, Ghosty is not exactly plug-and-play. As of now, it's probably best suited for engineering types. More adventurous tinkerers may want to read the documentation closely and consider reaching out for extra help.

Either way, the project offers an inspirational hint at what's possible in the era of connected, hackable devices.

How To Use Data Science In Your Publicity Campaign


Most advertising is all about statistics. But when it comes to getting press, most companies still use traditional "PR," personal relationships with journalists, cold calls, and email press releases.

The founders of Spokepoint set out to move beyond that, building a PR firm and public relations platform that brings the same kind of numbers-based insight that companies cultivate about their contacts with customers to their contacts with the media. Spokepoint's focused on catering to other startups, even offering packages for crowdfunding campaigns where Spokepoint takes a percentage of funds raised instead of a flat fee.

"We've done that with a bunch of customers, and it's always worked out super well," says Spokepoint cofounder Dan Siegel. "It aligns incentives really well."

Siegel and cofounder Paul Lam got the idea after successfully publicizing a previous invention of their own: the popular "Super Pac App," which applied Shazam-style sound recognition to political ads during the 2012 U.S. presidential election. The app let TV viewers turn to their smartphones to find out who was sponsoring commercials backing and bashing different candidates.

"The app hit number one in the App Store," says Siegel, and friends and acquaintances began to call for advice on publicizing their own companies and Kickstarter campaigns.

"We sat down with people, and we laid down the process, and it was stupidly simple," he says. "Track everything, was really the answer."

Many company founders who might have no qualms about poring over programming language specs or negotiating hardware specs with suppliers weren't sure how to go about reaching out to the press, he says.

"They get right up to the point of actually sending messages out, and then they get a little too scared to do it," he says. "The thing I say there is don't fear the journalist; fear the silence."

That is, startup founders should be more worried about their inventions not being publicized at all than they should fear a snarky interviewer, he says.

"It's very empowering to realize it's well within your power to vocalize and get press," Siegel says.

Spokepoint started out providing standard PR services using their own tools behind the scenes to find journalists who'd be likely to take an interest in their clients and A/B test the effectiveness of different pitches, and, as of this month, launched a platform to let clients do as much or as little of the writing and pitching as they wish.

Siegel says he advises clients to think about their concrete desires for a media campaign, just as a financial advisor would advise clients to think about earning goals and risk aversion. Some clients might prefer to pitch a few big-circulation outlets, and others might do better pitching to a wide circle of smaller publications, he says.

The most important metric to measure about a pitch, the company's found, is simply whether an initial pitch gets a positive reply, Siegel says. Knowing that, clients can target journalists interested in particular topics and try different variations on a pitch to see what gets the most interest, he says.

"You can really have an impact on the positive reply rate based on the message you're sending," he says.
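Spokepoint hasn't published its methodology, but the standard way to judge whether one pitch variant genuinely outperforms another on positive replies is a two-proportion comparison. This sketch (hypothetical helper names, not Spokepoint's code) computes a z-score; values above roughly 1.96 suggest the difference isn't just noise:

```python
from math import sqrt

def positive_reply_rate(replies, sent):
    """Fraction of sent pitches that got a positive reply."""
    return replies / sent if sent else 0.0

def compare_pitches(a_replies, a_sent, b_replies, b_sent):
    """Two-proportion z-score: did pitch variant B beat variant A?"""
    p_a = positive_reply_rate(a_replies, a_sent)
    p_b = positive_reply_rate(b_replies, b_sent)
    # Pool the two samples to estimate the standard error under the
    # null hypothesis that both pitches perform identically.
    pooled = (a_replies + b_replies) / (a_sent + b_sent)
    se = sqrt(pooled * (1 - pooled) * (1 / a_sent + 1 / b_sent))
    return (p_b - p_a) / se if se else 0.0
```

For example, 5 positive replies out of 100 sends versus 20 out of 100 yields a z-score well above 1.96, so a client could confidently retire the weaker pitch.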

From tracking results from previous clients and crawling the web for journalists' new stories and up-to-date contact information, the data that provides Spokepoint's predictive power is always getting more complete, and the company's working on pulling in other information like reporters' social media posts and LinkedIn profiles, says Siegel.

Some reporters are initially skeptical, he says, but most ultimately warm to the idea of a tool that means they get more story ideas that are actually relevant to them and their audience.

"Okay, if this works, it means I'm actually going to get more targeted pitches," is how he says reporters react.

"Our north star is conversion," he added. "It's helping people get a positive reply from a journalist, and the way you get that is by sending journalists something they actually care about, rather than by spamming them."

Your iPhone Camera Can Work As A Microscope


A small startup in the Bay Area is working on a development which could revolutionize scientific research: Cheap, smartphone-based microscope systems which leverage the phone's camera to magnify up to 340 times.

The Catalyst Frame project, currently on Kickstarter, may not be the first attempt at making a smartphone microscope, but it's definitely among the most ambitious. Creator Jing Luo says the microscope is intended not just for hobbyists, but for field scientists--a major selling point is that it prevents contamination because the Catalyst Frame is designed not to touch samples.

According to the product's Kickstarter, the Catalyst Frame microscope is a "simple, full-featured portable microscope that works with your smartphone/tablet with powerful 30/50/170 or 30/170/340 magnification." The small add-on accessory is about the size of two cigarette lighters and requires two AA batteries to operate. By itself, the Catalyst Frame can magnify up to 170 times; using an iPhone or Android camera, it can magnify up to 340 times.

Catalyst Frame is part of a larger microindustry (sorry) of companies producing microscopes that leverage smartphones' tech backends. Other products on the market or soon to be on the market include Micro Phone Lens 150x, MicrobeScope, and MicroMax Plus. Most of these max out at 150x magnification and are aimed primarily at hobbyists and photographers; one exception is the SkyLight, a microscope adapter aimed primarily at health care providers offering telemedicine services.

Luo got the idea to build the device from a friend who is a wildlife immunologist and complained about the poor quality of field microscopes. Familiar with the common use of machine learning algorithms in modern microscopes from his background studying molecular cell biology and bioinformatics at the University of California Berkeley and UC-Davis, he then turned to Kickstarter to build his project.

As of press time, Luo has raised over $20,000 on Kickstarter. "With this tool, one could follow a rare toxic species of bacteria as it zips across the slide, identify an insect under higher magnification, then classify rock samples. My goal is to build a platform," he added. The biggest challenge for Catalyst Frame right now is design: Luo, who is self-taught like many other Kickstarter inventors, is trying to figure out ways to integrate feedback from Kickstarter backers who want a lay-on table design. Prototypes are currently being built for the microscope, which will then enter production.

Because smartphones are getting more and more powerful, there are increasing opportunities for enterprising inventors to build powerful accessories which leverage the computer technology at the heart of iPhone and Android devices. If this means field scientists can benefit from inexpensive, high-quality microscopes, that's all the better.

How Recording From The Road Is Changing The Music Industry


Algorithmic mastering tools like Landr are revolutionizing the way we make music, but there's another technological phenomenon happening in the recording studio: portability.

"Working remotely is probably a good half of my business," says recording engineer and songwriter Warren Huart. "I play keyboards on most of my stuff, but I also have a programmer I use who lives elsewhere. We just email backwards and forwards all day. He works in Logic; I work in Pro Tools."

While Huart--who worked on Aerosmith's last album Music From Another Dimension as well as the first two albums from The Fray--may just use email most of the time to trade tracks, part of the remote working boom is coming from tools that can help bridge the gap between people.

Musistic, for example, is one of those tools. The plug-in allows you to share raw tracks regardless of the recording software being used. The transfer is uncompressed and happens quickly, meaning collaboration can happen in different locations at the same time.

"The time it took to ship hard drives or import files from Dropbox just sucked the life out of any creativity we had," says Musistic CMO Joel Halpern. "Problems with the non-compatibility of DAW software, even different versions of the same software (Pro Tools in particular), added to this downtime."

This type of audio plug-in isn't something you'd notice walking into a million-dollar recording studio, but it is a big part of the evolution happening. It also isn't exactly new: it's been happening and getting better ever since Waves introduced the first audio plug-in in 1992.

One of the people who helped spark the algorithm revolution at Waves was Shachar Gilad.

"I do think these algorithms, presets, and plug-ins really do help in two ways," says Gilad. "First, they get musicians closer to being independent. Getting you closer--even if not all the way--is important. And second, they are great learning tools. They offer insight and you can play with them in your home for as long as you want, A/Bing, choosing different presets and trying to understand what sounds good and why."

Gilad has since started SoundBetter, a services marketplace for finding audio professionals. Although the new wave of software tools is great for beginners and home enthusiasts, that's not where the recording industry stops.

Music recording is still, and always will be, a highly trained skill. Even though the next evolution for the recording studio is more access to anyone interested, the tools don't guarantee quality. That's why the professionals shouldn't be worried about an algorithm taking their job.

Access shouldn't be discounted; it allows those with a passion to gain the experience and training. But access also shouldn't be confused with skill.

"Will the algorithm help the guy with the average to not very good mix to make it sound really even, and loud, and crunchy, and all things that he wants to get? Yeah, of course it will," says recording engineer Warren Huart. "I think all the mastering engineers that are upset, all the people that are upset, have to remember that it's not about them. It's fulfilling the home recordist."

Sharing Economy

While the recording studio is being revolutionized by Internet-based tools, its story is also part of the larger sharing economy trend. Technology is connecting people to idle equipment, similar to how Airbnb is leveraging people's apartments, and that's extending into music.

FreshSessions, for example, is like Airbnb for recording artists, voice-over talent, and anyone else interested in finding available studio space to rent on a temporary basis. It helps studios of all sizes cover downtime and gives individuals more opportunities to use spaces that may have been cost-prohibitive before.

"Companies operating in the sharing economy space are most valuable to me when their products revolve around sharing skills," says FreshSessions founder and CEO Dan Miller. "Although the barriers are increasingly being lowered to acquire knowledge and new skills, many people will not fully invest the time to do so, which is fine, because that presents an opportunity for platforms that can easily connect the two parties."

The music sharing economy is also bigger than any new app or service; these startups are finding creative ways to turn the power of individuals into cold, hard cash. The Internet's promise, and why it destroyed music sales, was always in leveraging connectedness.

SparkPlug.it is tapping into that potential as well by letting people rent out their unused instruments. From traveling to wanting a specific model of guitar to record an album with, there are a million different reasons why someone might need to borrow or rent some gear for a day or two. Before SparkPlug.it, there wasn't a good option for it.

"This new economy allows musicians to make some money from their resources when they aren't using them. They can offer their apartment when they're on tour, rent out their instruments and equipment when they're between albums, or carve out the times when they aren't using their rehearsal space to supplement the rent," says SparkPlug.it cofounder Jennifer Newman Sharpe.

How Open Source Helped Segment.io Grow A Healthier Company


Segment.io builds tools to help companies more easily connect their websites to analytics and advertising platforms like Google Analytics, Omniture, and Chartbeat. And after making much of their work open source, the company says those tools have improved faster than they otherwise would--thanks to talent and customers who come across the company through venues like GitHub.

"It makes a lot of our tools more productive," says cofounder Calvin French-Owen. "It encourages us to make Readmes for each one, and then just test the individual functionality of that module."

The company's founders started focusing seriously on open-sourcing their product a few years ago after noticing that an early version they released was racking up stars on GitHub. Stars signal that users are bookmarking the code and are interested in working with it, says cofounder Ian Storm Taylor.

"It started to gain some stars and we thought, this is a potentially cool idea, and maybe we should investigate and see if there's something there," he says.

Since then, the company's continued to make as much of the product as possible open source, and often receives code contributions from users and analytics providers who want to see the product better integrated with their favorite analytics tools.

"It's probably split between the partners and the people who are just interested in using the services," says French-Owen.

The Segment.io platform captures data from web and mobile users and sends it to any of more than 100 analytics services, making it possible to turn on and off different analytics services from a web dashboard without having to deploy new web code or wait for a new mobile app version to publish.

To manage all of those different product integrations, Segment.io uses a traditional, Unix-style development philosophy, chaining together short, simple, open-source modules that do one thing well, say the company's founders.
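As a toy illustration of that philosophy, the routing described above can be sketched as a handful of tiny integration modules fanned out behind a single tracking call, with a settings dictionary standing in for the web dashboard's on/off switches. The module names and payloads below are invented for illustration, not Segment.io's actual code:

```python
# Hypothetical sketch of the fan-out pattern: each integration is a small
# module that does one thing, and a settings dict (standing in for the
# web dashboard) toggles integrations on or off without touching the
# tracking code itself.

def google_analytics(event):
    return f"GA: {event['name']}"

def chartbeat(event):
    return f"Chartbeat: {event['name']}"

INTEGRATIONS = {"google_analytics": google_analytics, "chartbeat": chartbeat}

def track(event, settings):
    """Send one event to every integration enabled in settings."""
    return [INTEGRATIONS[name](event)
            for name, enabled in settings.items()
            if enabled and name in INTEGRATIONS]

# Flipping a flag in `settings` is all it takes to disable a service.
print(track({"name": "Signed Up"}, {"google_analytics": True, "chartbeat": False}))
```

Adding a new provider then means writing one more small module, not rewriting the call sites that emit events.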

Providers generally want to be integrated into Segment.io so that it's easier for existing Segment.io users to add their services, French-Owen says.

"In general, companies like us because it sort of speeds up the sales cycle," he says. "They can say, oh, you're already using Segment.io? Just turn us on and try us out."

And the company's added more than 100 repositories to GitHub, ranging from software error-logging tools to web stylesheet processors.

"They range in size from a random tiny utility that we needed one day to projects that we've been working on internally and decided to release to a bigger public," says Taylor.

Even the company's press kit, giving reporters basic info about Segment.io, is stored as a GitHub repository, and the founders say GitHub's used internally to manage wikis, trouble tickets, and other internal data.

Contributing to the open source community has also helped bring Segment.io new customers, says marketing director Diana Smith. One project, called Metalsmith, which makes it easier for non-developers to build and edit simple websites, is among the top 10 sources of referrals leading to new customer signups, she says.

And, it turns out, being involved in open source helps bring in the kinds of employees Segment.io wants to hire, says French-Owen.

"The interesting thing: When we started, we just started open-sourcing things because we wanted to, more than because we wanted to attract a certain type of developer," he says.

But since then, he's seen that developers who fit well with Segment.io's coding style and sharing culture are likely to be open source contributors and users themselves.

And the traditional open source ideal of building small, interoperable modules proved ideal for melding multiple analytics tools, like combining Optimizely's A/B testing platform with other metrics, he says.


This Is Going To Change How You Listen To Music Forever


Dr. Lior Shamir began his experiment with a curious question: Could an algorithm "understand" the music of popular artists like the Beatles?

"The results showed that the computer clearly identified that the music of the Beatles has a continuous progression, shifting from one album to the next," Shamir says. Using nothing but patterns inherent in the music, the computer could accurately tell which album the band had written first; then second; then third. "The sorting was based on audio data alone, without any additional information about the albums," says Shamir, who is an associate professor at Lawrence Technological University near Detroit, MI.

Identifying the patterns that make songs alike is one of the Holy Grails of machine intelligence, which means Shamir's algorithm could become one of the most valuable pieces of code in the media industry. Recommender systems are the heart and soul of every music service, and media distributors will pay anything (remember the Netflix Prize?) for technology that can use their large databases to predict what else users are going to like. But when "machine listening" is this freaky accurate, what does that mean for the future of music and the people who live and breathe its creation and curation?

From "Please Please Me" To "Abbey Road"

Shamir's project began, oddly enough, when he developed an algorithm for use in classifying whale song. "By mining the data we were able to determine that, just like people, whales in different geographic locations have different accents or dialects," he says. As the project evolved, Shamir had an idea: If the algorithm worked so well for analyzing whale song, how would it fare when it came to analyzing pop music?

"Naturally, when you're looking at music, you start with the Beatles," he says. "So that's exactly what we did." He populated a database with samples of the Beatles' music, taken from each of their 13 albums. Using a set of 2,883 numerical content descriptors, he was then able to break the music down into a variety of numerical values, ranging from pitch and tempo to other patterns we don't usually associate with music.

Once this had been done, Shamir used a variation on the weighted k-Nearest Neighbor algorithm to measure the similarity between two different songs.
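To make the idea concrete, here is a minimal sketch of weighted k-Nearest Neighbor similarity scoring. It uses two made-up descriptors per song instead of Shamir's 2,883, and invented labels, so it illustrates the mechanism rather than reproducing his code:

```python
import math

# Toy weighted k-NN: each song is a vector of numerical content
# descriptors, and similarity is scored by inverse distance, so closer
# neighbors count for more. Descriptor values and labels are invented.

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def weighted_knn_score(query, labeled_songs, k=3):
    """Return {label: weight} accumulated from the k nearest labeled songs."""
    nearest = sorted(labeled_songs, key=lambda s: distance(query, s[0]))[:k]
    scores = {}
    for vec, label in nearest:
        scores[label] = scores.get(label, 0.0) + 1.0 / (distance(query, vec) + 1e-9)
    return scores

albums = [((0.1, 0.2), "early"), ((0.15, 0.25), "early"), ((0.9, 0.8), "late")]
scores = weighted_knn_score((0.12, 0.22), albums)
print(max(scores, key=scores.get))  # the query lands closest to the "early" albums
```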

"All of these experiments were done in an unsupervised fashion, which means that the computer was not guided by human intervention, but was just asked to provide the best network of similarities between the albums as the computer understood them," he continues.

The sorting of the albums into chronological order was something Shamir was not expecting. Nonetheless it happened--meaning that the algorithm was able to pick 1963's Please Please Me as the Beatles' first album, followed by With the Beatles, Beatles for Sale, A Hard Day's Night, Help!, and Rubber Soul. Following this the algorithm moved on to the group's psychedelic rock albums Revolver, Sergeant Pepper's Lonely Hearts Club Band, Magical Mystery Tour, and Yellow Submarine, before finishing with The White Album, Let It Be, and Abbey Road.

Revolutionizing Discovery

After the Beatles' music had been analyzed, Shamir turned his attention to other popular music bands--such as ABBA, U2, and Queen. In each case his algorithm was again able to sort the albums into the order in which they were recorded, despite being given no information other than the music.

It made a few interesting observations along the way. For instance, during the categorization process, it appeared to have made a mistake when it listed the Beatles' twelfth and final album Let It Be before its eleventh one, Abbey Road. However as any Beatles fan knows, the majority of the songs on Let It Be were actually recorded at the start of 1969, before Abbey Road--thereby making the algorithm correct.

The algorithm also proved accurate when it came to determining which songs by another band (Tears for Fears) were played by the original band members, and which were played by their replacements.

"It has the makings of a great music discovery tool," Shamir says. "It could really help to make different artists' work more accessible to everyone. There could well be some great musicians out there that you would really like, but whose work you've never been exposed to before."

This is the business model behind a company like BookLamp, which combs through books and has proven capable of dividing them up into different genres or other subcomponents based on repeated keywords. (BookLamp, not coincidentally, was acquired by Apple for upwards of $10 million--most likely to serve as a competitor to Amazon's algorithmic X-Ray service, which allows readers to see at which points characters or terms appear in a book.)

By turning these tasks of categorization over to an algorithm, a lot of the flaws of human-based review systems like Yelp could be avoided.

The Computer Science Of Hit Songs

All of this skirts around a bigger issue we are just seeing the early stages of: algorithms being used in the creative process. While an increasing number of jobs--from legal work to medical diagnosis--are now routinely carried out by algorithms, creativity is an area that is often thought to be "safe" from automation. Not according to everyone, though.

Several years ago Jason Brown, professor of Mathematics and Statistics and Faculty of Computer Science at Dalhousie University, extracted several mathematical patterns from the music of (once again) the Beatles, and used this to record some soundalike songs by applying the information he had learned.

More recently, the Iamus project of computer scientist Francisco Vico composed more than 1 billion songs in a wide range of genres by using the power of algorithms. Architect Celestino Soddu similarly uses genetic algorithms to create endless unique and unrepeatable designs by simply entering the "rules" that define a certain type of building or style.

By looking at the patterns in songs--ranging from tempo and time signature to harmonic simplicity--algorithms could be used to help decide whether to sign new pop acts. They could even cut out the middleman completely by taking onboard the song components of a hit and generating music themselves. For example, Lior Shamir feels that over time it should be possible for his algorithm to create new musical tracks that sound like offcuts from, say, the Beatles' Revolver album.

"Theoretically it's certainly possible," he says. "The problem right now is the amount of computing power you would need to let the computer do that kind of composition." That doesn't mean he's content to wait for Moore's Law to catch up, though.

"One way we're trying to get around this is to create a vector representation of the music in MIDI form," he continues. "The structure of a MIDI is much easier for a computer to work with, and reduces the complexity by several orders of magnitude."

This subject often provokes strong reactions, because it's perceived as taking away a fundamentally human part of the creative process. After all, if music labels could continue generating new songs in the style of popular artists for relatively little money, why risk hiring unproven (human) acts?

Why This Doesn't Mean The End (For Humans)

But fortunately not everyone sees it this way. In fact, there are plenty of examples of algorithms being used as a part of the creative process that can help people--whether this means getting unlikely movies funded in Hollywood, or pushing artists to create bold new works.

Epagogix is an example of the former. Epagogix works with some of the biggest studios in Hollywood, using a neural network to forecast box office numbers.

"We produce a number very, very early in the development process, and this number can then be taken into account when a studio is budgeting a particular film," says CEO Nick Meaney. "Based on our forecast, it might be that a certain actor is hired or not hired, for instance, because we've predicted that while the movie might be able to make a profit, it will always have a certain ceiling." In this way, studios can start off by working out how much money a film will make, and then reverse-engineer it to ensure it finds its optimal audience.

Alexis Kirke is another believer in the way humans and algorithms can sit side by side in the creative process. As permanent research fellow in music at the Interdisciplinary Centre for Computer Music Research at the U.K.'s Plymouth University, Kirke thinks pattern-spotting can only be a good thing for broadening the scope of musical exploration.

"The algorithm is not replacing us, it is enhancing us," he says. "It is not cheating to use the algorithm, because it won't take long for the most skilled writers and composers to push the algorithm beyond its original intended use, leaving the average 'cheater' creatively behind."

Inside The Bizarre Phenomenon Known As "Glitch Art"


International artists who tinker heavily with computers to create their work are called "glitch artists." They produce a type of new media art that lays out defects--glitches--in a given computer system on a visual canvas, whether it's print, 3-D installation, or computer screen.

A new exhibit at the Ukrainian Institute of Modern Art in Chicago is celebrating their work, but why? Historically, humans have been indifferent to non-human art; none of Koko the Gorilla's drawings appear in the Louvre. So will art fans flock to glitch art? Or are these digital artifacts only a mother(board) could love?

Glitch Art's Chicago Roots

As advances in technology and the sheer presence of the Internet have driven progress in the digital arts, glitch art has evolved along with them. Entire communities have sprouted up on the web to nurture the discipline from the bottom up, like the Chicago-based 0p3nr3p0 glitch art repository. UIMA's "glitChicago" exhibit, which opened on August 1, pays homage to Chicago's tradition of honing this subculture of electronic artists.

Chicago has been a hub for the glitch art movement for years, even before glitch art became a term. Electronic and noise music, the punk rock scene, as well as improv jazz circles, all helped influence the artistic subgenre. The spirit of sharing digital media and the network of DIY art galleries in Chicago also played a part.

Since the 1970s, the Electronic Visualization Lab at the University of Illinois at Chicago has encouraged artists and software and computer engineers to follow the same coursework and collaborate. The art and technology program at the School of the Art Institute of Chicago has been around for decades, and the school's brief generative systems program influenced a generation of digital artists in Chicago.

In the '90s, Chicago hosted important conferences in the digital media space, like the International Symposium of Electronic Art and SIGGRAPH. Glitch art really took off in the past decade, however, as all these trends primed an audience for it.

Image: Flickr user Rosa Menkman

"It wasn't until recently that glitch artists started referring to themselves as glitch artists," says Paul Hertz, curator of glitChicago. Different techniques of manipulating digital media began to find traction within Internet communities, and artists in Chicago picked up on them.

Between 2010 and 2012, the Chicago-based glitch artist Nick Briz, along with three other glitch artists, organized the first-ever GLI.TC/H festivals. The first one took place in Chicago, but the organizers realized they needed to bring together the greater international glitch community. The 2011 festival took place simultaneously in Chicago, Amsterdam, and Birmingham, U.K.

Influential glitch artists have emerged from Chicago and onto the international scene. One of them, Jon Cates, coined the term Chicago Dirty New Media, a catch-all term that describes how digital tech can elevate an experience. Even if a glitch artist doesn't physically hail from the Windy City, she might attribute her style to Chicago's Dirty New Media.

Software and Art

Glitch artists largely rely on a technique called databending to produce their work. Using a basic hex editor, an artist can modify the most rudimentary information in a digital file, like a JPEG image or MPEG video. The artist can open these files in the hex editor and manipulate the binary information behind the files--the ones and zeros--typically with the "Find and Replace" function.
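The find-and-replace operation itself is simple enough to sketch in a few lines. This is an illustrative databend, not any particular artist's tool; the skip parameter reflects the common precaution of leaving the file header alone so the result still opens as an image:

```python
# Minimal databending sketch: load a file's raw bytes and run the same
# kind of find-and-replace a hex editor would. The first `skip` bytes
# are protected so the file header isn't destroyed outright.
# The input here is a stand-in byte string, not a real JPEG.

def databend(data: bytes, find: bytes, replace: bytes, skip: int = 512) -> bytes:
    """Replace byte patterns everywhere except the protected header."""
    head, body = data[:skip], data[skip:]
    return head + body.replace(find, replace)

raw = bytes(range(256)) * 8           # stand-in for an image file's bytes
bent = databend(raw, b"\x41\x42", b"\x42\x41")
print(len(bent) == len(raw))          # same size, scrambled body
```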

Transcoding, another trick behind this new media, involves opening a media file in a program that is not expressly designed for it. It's what would happen if you opened up an image file in an audio editor, applied reverb, and then saved it as an image again. Yet another technique, datamoshing, deals with modifying video compression rates.

"One of the characteristics of glitch, aside from its origins in errors and system overload, feedback and so on, is that when people try to do it concretely, they often end up working directly with the material of the file itself, below the level of its representation, as an image, or as text, or as audio," says Hertz.

In general, glitch art is the process of exploiting misbehavior, however spontaneously or intentionally the defect occurs. The process can either take place in the electronic media or in the encoding behind it.

"The process is more important than the result," says Briz. "There's all kinds of things that can happen when you use technology the wrong way." It's one thing to make something that looks cool, but the aesthetic isn't the interesting part of glitch art; it's the process. Sometimes even the use of technology itself takes a back seat.

"We tend to put the onus on our machine, but really something has happened in the system because it hasn't responded as per our designed or expected use," says Briz. The gist of the artwork doesn't have to center on how machines misbehave. It can represent any system's misbehavior, even taking a political tone.

Two French glitch artists helped create a concept piece called Corrupt.desktop. The experience starts when someone downloads the artists' Mac OS X-compatible program onto a display computer at an Apple store or retailer. Once installed, the app's icon appears in the dock, looking like Safari. An unassuming Apple customer could inadvertently open it, visually distorting the desktop. The cacophony of desktops breaking apart as Apple employees slowly catch on makes a political statement. The glitch becomes part of the whole store's ecosystem.

Corrupt.desktop has a spot on GitHub, in keeping with the spirit of sharing tools and artwork in the glitch art community. And the project is built on openFrameworks, a C++-based toolkit created by artists, for artists.

As far back as the '80s, digital artists were creating software especially for producing glitches in systems and machines. Hertz wrote a program called GlitchSort for students at the School of the Art Institute of Chicago, which took off in the online community. Monglot, by Rosa Menkman, and Kim Asendorf's pixel-sorting program were also influential.

Social Media's Power

Apart from software, general advances in technology have helped glitch art find a foothold in the artistic landscape. A small subset of artists experimented with transcoding in the '90s. Before that, artists manipulated analog audio signals and transmuted them into images.

"In the late sixties, it was hard to show computational art in a gallery," says Hertz, even if this type of art started showing up around then. You see it more now because technology has just gotten a lot cheaper. "A VR installation that cost a quarter of a million dollars to put up over at UIC, you can probably do for five thousand now," he says.

Computers are more powerful, too, and the Internet has liberated artists from galleries to circulate their work. A Yahoo group called "Databenders" surfaced around the time when Yahoo groups were just taking shape, and a Flickr group called "Glitch Art" brought glitch artists in contact with one another, including Briz. Facebook groups and Reddit threads have cropped up in the last five years.

After the third GLI.TC/H festival in 2012, Briz created 0p3nr3p0 with fellow glitch artist Joe Chiocchi to give anyone the opportunity to submit and share his or her work, as long as it had a URL. All artwork from 0p3nr3p0 will appear on a display during glitChicago's run at UIMA.

"This term glitch art was literally this tag in the community," says Briz. People began to use the term glitch art on the Internet, and it unified the community. "Once there was that keyword, there was a way to bring people together."

For as new as glitch art is, it is influencing other modes of new media. Some of the pieces showing at glitChicago are derived from glitch art but would not be considered as such. Critics wonder if glitch art has passed its heyday and will go down in art history as another tool in the artist's toolbox. For now, though, it has captivated a tech-savvy generation of art patrons.

"The Ukrainian was looking for a younger audience," says Hertz. "There are people who have been coming in, saying that they have been living in the neighborhood but haven't ever come in."

How This Algorithm Detected The Ebola Outbreak Before Humans Could


When an infectious disease starts spreading, it seldom takes its time. And when that infection is called Ebola, any delay in halting its spread can take a very real toll in human lives. The trouble, of course, is that it takes time for people to even figure out that an outbreak has occurred. Thankfully, machines are getting smarter.

Nine days before the World Health Organization announced the African Ebola outbreak now making headlines, an algorithm had already spotted it. HealthMap, a data-driven mapping tool developed out of Boston Children's Hospital, detected a "mystery hemorrhagic fever" after mining thousands of web-based data sources for clues.

"We've been operating HealthMap for over eight years now," says cofounder Clark Freifeld. "One of the main things that has allowed it to flourish is the availability of large amounts of public event data being accessible on the Internet."

How This Disease-Seeking Algorithm Works

Those data sources include news reports, social media, international health organizations, government websites, and even the personal blogs of health care workers operating in affected areas. The team's custom-built web crawler traverses RSS feeds and APIs, analyzing the text from these content sources for disease-related terminology and clues about geography.

As anyone who's ever looked at the Internet knows, any bulk consumption of web content is bound to scoop up tons of noise, especially when sources like Twitter and blogs are involved. To cope with this, HealthMap applies a machine learning algorithm to filter out irrelevant information like posts about "Bieber fever" or uses of terms like "infection" and "outbreak" that don't pertain to actual public health events.

"The algorithm actually looks at hundreds of thousands of example articles that have been labeled by our analysts and uses the examples to pick up on key words and phrases that tend to be associated with actual outbreak reports," explains Freifeld. "The algorithm is continually improving, learning from our analysts through a feedback loop."

Disease Moves Fast, But Data Moves Faster

The latest string of Ebola infections became public knowledge on March 23 when the World Health Organization issued its first report on it. Since then, the outbreak--which appears to have started with a 2-year-old boy in Guinea--has spread to other countries in Africa and killed over 1,000 people.

By that point, HealthMap had already picked up on the spread of the virus, even if it hadn't been identified as Ebola yet. In this case, the automated detection of the disease didn't help stem the outbreak, but the promise of such machine intelligence is hard to deny.

In addition to the breadth of the content available online, Freifeld credits the "availability of inexpensive Internet hosting and computation resources" with allowing HealthMap to crunch and store so much data. Clearly, such a thing would not have been possible even five years ago. And with the trends of big data and machine intelligence being as young as they are, one can only imagine where technology like this is headed.

In the short term, the team behind HealthMap is busily working on improving its filtering algorithms and adding new sources of data, one of which is decidedly old-school.

"We allow anyone, anywhere in the world to submit a direct report of an outbreak event, and as more people become connected," Freifeld says, "it opens even greater possibilities."

Are You Fluent In Emoji?


If someone adds a happy face to the end of a text message, you know what it connotes. But what about messages without any text, only little yellow faces? Can people really communicate exclusively in emoji?

We'll find out soon. Both Emojli and Emojicate are emoji-only chat apps that are trying to turn the little smiley faces into a dedicated language.

Emojli, still in development, takes things to the extreme by having users register user names in emoji as well. Emojicate, on the other hand, positions itself as a Twitter competitor of sorts by allowing status updates, but it also includes a private chat feature. We took them for a test-drive to see what life is like when lived through tiny emotive icons.

The cruel reality, whether you think emoji-only communication is great or not, is that it's still incredibly hard. With 100+ character choices, it can be difficult to pick the perfect symbols in a timely manner.

Communicating with emoji is kind of like texting on a traditional dial pad used to be. Punching out letters using the numbers 1-9 was excruciating until the predictive input system T9 came along, and even that wasn't much better.

This shouldn't be any surprise, though. Snapchat has been pushing a generation to communicate more in pictures. GIFs are more popular than ever, despite being more than 25 years old. Part of the social media trend has been to push people toward sharing information that's glanceable. If it can't be instantly consumed, there's no chance it goes viral. Even though Snapchat doesn't deal in emoji, the trend away from communicating in words makes its turning down a $3 billion acquisition look less foolish.

For those who do need help putting their feelings into pictures, Emojimo is here to help. The iOS app translates regular text sentences into hip, emoji-filled slang.
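Under the hood, a basic text-to-emoji translator can be as simple as a word-for-symbol substitution table. The mapping below is invented for illustration, not Emojimo's actual dictionary:

```python
# Toy sketch of a text-to-emoji translator: swap known words for emoji
# and leave the rest of the sentence alone. The mapping is made up.

EMOJI = {"car": "🚗", "broken": "💔", "work": "💼", "home": "🏠", "pizza": "🍕"}

def emojify(sentence):
    """Replace every word that has an emoji equivalent."""
    words = sentence.lower().strip("?.!").split()
    return " ".join(EMOJI.get(w, w) for w in words)

print(emojify("My car is broken"))  # → "my 🚗 is 💔"
```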

If you want some more emoji-related apps, Product Hunt has a good collection of new emoji products right now. Some of my favorites include Imoji, a way to make emoji-like stickers of yourself, and of course the Seinfeld emoji app.

But the big question is: Can you speak emoji? Test out your skills with a few sentences below.

Answers:

  1. "Hey, my car's broken, can you please take me to work and then bring me home?"
  2. "What time is school? Want to get breakfast?"
  3. "The newest album from Arcade Fire is hitting the spot."
  4. "We should make an emoji app, it'll be huge."

Visualize Your Favorite Music... On Your Shirt


If you're into wearables, techno, and raves, then you're going to want to check out the new Sync shirt created by Crated, a design consultancy and R&D lab based in New York City. The company describes Sync as "an audio-responsive VJ shirt" that visually connects its wearer to the background music in a club. This visual connection comes from an LED-laden patch inserted into the front of the Sync shirt that pulses at varying degrees of intensity based on what music is playing.
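Crated hasn't published the patch's firmware, but the pulse-with-the-music behavior can be sketched as mapping the loudness (RMS) of each audio frame onto an 8-bit LED brightness level; everything below is a back-of-the-envelope illustration, not Crated's code:

```python
import math

# Sketch of audio-reactive LED behavior: compute the RMS loudness of a
# frame of audio samples and scale it to a brightness level 0-255.

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def brightness(samples, loudest=1.0):
    """Scale one frame's RMS loudness to an 8-bit LED level."""
    level = min(rms(samples) / loudest, 1.0)
    return int(level * 255)

quiet = [0.01 * math.sin(i / 5) for i in range(100)]  # synthetic quiet frame
loud = [0.9 * math.sin(i / 5) for i in range(100)]    # synthetic loud frame
print(brightness(quiet), brightness(loud))            # loud frame glows brighter
```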

Early sketches of the Sync VJ shirt.

Crated says the Sync VJ shirt was inspired by the emergence of visual DJs that use light as much as sound in their performances at some of the most progressive clubs in New York, London, and Europe. What the Sync shirt does is allow clubbers to be active participants in the light shows instead of just passive watchers.

While light-up garments are nothing new, the Sync is on the cutting edge of visual wearable tech because of the underlying technology used to create it. Most fashion wearables with visual elements such as lights have wires running throughout the fabric of the garments. But Crated has done away with the wires in Sync thanks to its collaboration with Botfactory, a startup that is currently Kickstarting Squink--the technology that allowed Crated to print super-thin and flexible circuit boards right on the patch that powers Sync. Most impressive of all, thanks to Squink, Crated says it prototyped Sync in just 24 hours.

"Sync was a collaboration inspired by Botfactory's Squink," says Madison Maxey, CTO at Crated. "We had met the team about a month earlier and were so impressed by the implications for Squink, especially after we had run into some frustrating PCB troubles with an earlier project. We were excited to see Botfactory crowdfunding and decided to propose a collaboration using Squink boards in wearable technology, as they're exceptionally flexible and really beautiful if properly designed."

As you can see below, Squink looks like your typical 3-D printer, but it has a dozen nozzles, which allow it to print circuits rapidly.

The Squink at work.

"The real magic is in the conductive cartridge that makes the traces," Maxey says. "Essentially, you upload an image or Gerber file of your circuit and Squink can print it in about five minutes. There's a pick and place element as well, allowing anyone to quickly prototype circuits. Mari [Kussman, CEO and cofounder at Crated] and I are massive fans of the team. They're total hackers at heart."

According to Squink's Kickstarter page, Botfactory came up with the circuit board printer because electronics fabrication is an expensive and time-consuming process--especially if you're a small startup. "Large manufacturers are expensive and have a long turnaround unless you want to create hundreds of boards, and there is no easy solution when you want to prototype quickly from home," the company says.

So the creators of Squink wanted to give the same speed and flexibility to hardware makers that software developers enjoy. And according to Maxey, the team behind Squink has succeeded wildly in their goal.

"Hardware developers can save time and money with Squink, as you don't have to play the whole PCB waiting game or buy chemicals to etch your own," says Maxey. "Our previous projects took weeks of waiting and testing PCBs to have functional hardware, but we were able to make Sync in 24 hours since we could rapidly iterate on boards."

Squink printing on Mylar.

"For the wearables space, Squink is great as it prints boards on paper, so projects can be ultra thin," says Maxey. "The patch that responds to music on Sync has a battery and microcontroller onboard, meaning the patch itself could be removed, and attached to another garment without any additional wiring. Often wearable tech prototypes have wires running everywhere. We were quite pleased that Squink allowed us to make something that looks clean from phase one."

Currently the Sync VJ shirt is a proof of concept but Crated says it's exploring a consumer-ready version of the shirt that can be worn to concerts and festivals. Squink is currently seeking $100,000 in funding on Kickstarter.
