
Things To Consider Before Jumping Into Motion Control Development


With Apple's acquisition of PrimeSense, there's been a lot of excitement that motion interfaces are ready to move into prime time. But there are a lot of ways to build this kind of interaction--here's a primer for the developers and designers who will have to learn this new design language soon enough.

Ask Yourself If Motion Control Is Really Necessary

Recently I was talking to a friend of mine in London who owns a product design firm. As a well-known product designer for hire, this friend (who asked to remain anonymous out of respect for his clients) has a range of clientele from the largest multinationals to small-time entrepreneurs who want him to bring their product ideas to market.

A few weeks ago a wealthy client--and self-proclaimed "Apple Geek" who likes throwing money at personal technology products he may or may not ever bring to market--walked into my friend's office and said, "Apple just bought PrimeSense. 3-D motion is the future and my product can no longer just have an app remote. People need to be able to interact with it by waving their hands in the air or it won't be cool."

The problem was this client's product was an Arduino device that allows users to control the lighting in their house via an app that wirelessly talks to the module that gets plugged into their lamps. Plugging in dozens of Arduino modules to lamps and controlling them with a smartphone is relatively easy for the user, as all modules can be viewed from one app, but redesigning the product to enable the user to interact with it remotely by gestures would make the product needlessly complicated (not to mention necessitate a complete reengineering of the product and drastically increase its cost)--all for the sake of giving it a "cool factor."

Luckily, my friend was able to talk the client out of his decision in about 20 minutes. And while most professional developers probably wouldn't completely revamp an existing product solely because Apple might be doing something with motion control in the future, this story does bring up a good point: If you are thinking of getting into motion control, ask yourself if it's really necessary--is it something that would truly make your hardware product or app better? Or does it offer nothing more than a brief "wow" factor?

And be honest with yourself because even the experts in motion control know the new human-computer input method of the future isn't for everything.

"We strongly believe in using the right tool for the job," Jon Altschuler, director of Creative Technology for Leap Motion, one of the most well-known leaders in 3-D motion products, tells me. "In some cases, typing or speech recognition is best, while in others it's the mouse or touch interaction. In other cases, it will be motion control, and we're continuing to explore new possibilities in this field. Ultimately, a combination of these tools may be the best approach--just like how the mouse joined the keyboard rather than replacing it."

Chuck Gritton, CTO of Hillcrest Labs, one of the pioneers in the sector whose Freespace Motion Control Technology can be found in everything from Roku remotes to the Kopin Golden-i head-mounted displays used by firemen, agrees. "Motion interfaces make sense when they make interactions more efficient or more effective. For example, cameras [that sense motion] are useful for an immersive experience like action games. However, they are not practical for everyday use for navigation and control on TVs or PCs. Computers and smartphones have used pointing interfaces for years because they are the most efficient way to navigate a GUI, and this is not going to change."

In other words, just because Tom Cruise looked so cool using motion gestures to control his computer in Minority Report, and just because something similar is now possible in your product, don't jump in for the coolness factor alone. Stop, breathe, and ask yourself whether adding motion control really helps your users by giving them an input method that lets them control the product in easier or more innovative ways. If motion control accomplishes neither of those things, don't add it.

However, if motion control would benefit the user, the next step is to understand how your users move.

Get To Know The Four Types Of Human Motion

As most software developers come from a computer science background and usually haven't had any anatomy or kinesiology training, the jump into designing for motion interfaces requires developers to familiarize themselves with the biological side of things for a change, starting with an understanding of the basic types of motion.

"Today, people are conflating terms such as 'motion,' 'gestures,' '3-D,' and 'pointing'--when in fact, each of those terms means something very specific from a design and UI perspective," Gritton says. "It is very important to realize that the terms '3-D motion' and 'gesture' are too limiting. Among human computer interaction designers the full suite of human motions are often lumped together with the term 'gesture.' But, the term gesture should not be used as a general term."

Gritton says there are actually four main types of 3-D motions to consider:

  1. Natural motion. "Humans invoke natural motions continuously to perform our day-to-day activities. These motions are based on the structure of the human body. Eating, running, or hitting a ball all involve natural motions; to be useful, the system must transmit those motions to the application as closely as possible in their exact representation. Otherwise, when you are playing video golf, one person's hook will look like another's slice."
  2. Pointing. "Pointing is a specific motion we learn to identify at an early age to communicate preference or to make a selection. To implement an accurate pointing solution the design must eliminate impairments like human tremor or the motion that occurs when a button is pressed."
  3. Movement around an axis. "A third type of motion input is the tracking of movement around the X, Y, and Z (roll, pitch, and yaw) axes that we call Virtual Controls. For more than 100 years electronic devices of all kinds used knobs and buttons for control. Given the design of our arm and wrist, these input technologies were excellent for making both coarse and fine adjustments. In today's touch and motion controlled products, rotations of devices around the roll, pitch, and yaw axes can replicate the mechanical devices of old to efficiently provide the same level of precision as the mechanical variety. It's the emulation of these manual controls that leads us to use the term 'Virtual Controls.'"
  4. Unique gestures. "Finally, we are back to gestures, which is the fourth type. While the nuances of natural motions must not be lost from one user to the next, gestures are generally quite different. Each of us might make the same gesture quite differently. For example, each person may have a slightly different wave, but the system must interpret each one the same way--as with the command, 'hello.' Gestures must be interpreted to be useful."

Only when developers understand the four types of human motion can they choose the most appropriate type of motion for users to perform when interacting with their product. As Gritton says, "If 3-D motion has an advantage for a specific use case, the developer needs to understand how to distinguish that use case from the rest, and then be clear on what sorts of 3-D motion are useful."
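To make the "Virtual Controls" idea from Gritton's list concrete, here's a minimal sketch in Python of how a device's roll angle might be mapped onto a bounded virtual knob. The function and parameter names are invented for illustration; this is not Hillcrest's Freespace API.

```python
import math

def virtual_knob(roll_radians, min_value=0.0, max_value=100.0):
    """Map a device's roll angle to a bounded 'virtual knob' value.

    Rolling the device clockwise or counterclockwise stands in for
    turning a physical knob. Names here are invented for illustration.
    """
    limit = math.pi / 2  # allow a quarter-turn in either direction
    roll = max(-limit, min(limit, roll_radians))
    # Normalize to 0..1, then scale to the knob's range.
    t = (roll + limit) / (2 * limit)
    return min_value + t * (max_value - min_value)

# Example: a 45-degree clockwise roll sets the knob to 75% of its range.
print(virtual_knob(math.pi / 4))  # -> 75.0
```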

Which leads to the next step…

Know Your Motion Control Interface Options

When most people think of motion control, they think of Microsoft's Kinect, the first version of which was made by PrimeSense, the company Apple just bought for unknown reasons. The Kinect uses cameras to capture images of a user's motion and then converts the placement of that user's body parts (like arms and legs) into software commands so they can control a game character on screen.

But camera-based motion control is only one type of motion input available on the market. Leap's type of motion control is entirely different. Their technology uses infrared LEDs and digital sensors to track a user's hand and finger positions above the device. Leap's technology is more refined on a micro level, tracking both hands of a user and all 10 fingers in real time.

Another popular type of motion control is the one pioneered by Hillcrest. Their technology relies on sensors embedded within a device like a remote or a head-mounted display--accelerometers, gyroscopes, magnetometers, and others--and reads the orientation of the device in 3-D space to produce movement on screen.
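Hillcrest's actual sensor fusion is proprietary, but the general technique of blending those sensors is well documented. Below is a minimal sketch of one common approach, a complementary filter that merges a gyroscope's fast but drifting estimate with an accelerometer's noisy but drift-free one; the sensor values are assumed, not read from real hardware.

```python
def fuse_pitch(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One complementary-filter step: blend the gyroscope's fast but
    drifting integration with the accelerometer's noisy but drift-free
    absolute reading. Angles in radians, dt in seconds."""
    gyro_pitch = pitch_prev + gyro_rate * dt  # integrate angular velocity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Example: the gyro insists we're rotating, the accelerometer says level.
pitch = 0.0
for _ in range(100):  # one second of samples at 100 Hz
    pitch = fuse_pitch(pitch, gyro_rate=0.1, accel_pitch=0.0, dt=0.01)
print(pitch)  # the accelerometer keeps the gyro's drift in check
```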

As you can see, the three motion input technologies available are radically different from one another, and the right one to use depends on your particular product. Leap's technology might be best where individual finger movement is key, such as an app that allows a doctor to manipulate a 3-D MRI scan. However, if you're making a product such as a set-top box, a motion control remote based on Hillcrest's tech may be ideal.

Making the right choice will not only depend on your understanding of the benefits and limitations of each type of motion control interface, but also how each one would facilitate the most natural movement of the user.

Keep Your Motions In Check And Be Prepared To Think Differently

For developers used to working in a 2-D world, Gritton says the leap into the 3-D sphere actually poses very few problems. Instead, the real challenge will be one of design restraint and a new conceptual approach to problem-solving.

"Just because the toolbox includes the ability to detect 300 different 3-D gestures does not mean that the average Joe wants to memorize that new language just to watch a football game on his living room TV," Gritton says. "Capability does not equal desirability."

Gritton urges developers to use the motion capabilities in a way that is natural and simple for the user and enhances the overall user experience, not complicates it, noting that if something works extremely well--like inertial pointing for a TV screen--you keep it and build added functionality elsewhere.

Leap's Altschuler agrees with Gritton that coding for motion interfaces, on average, isn't any more difficult than coding for the 2-D world. What he says developers need to be open to is changing the way they look at solving a problem via software.

"The challenge is less technical and more conceptual," he says. "Motion control unlocks a great deal of power that forces developers to rethink the best way to interact with technology. It requires a lot of creativity to push beyond old ways of thinking to build something new, but we've seen lots of developers who are up to the challenge."

And for developers, overcoming those challenges will be worth it. In the coming years there will be billions of motion control-enabled devices on the planet, used across a diverse array of industries, from consumer products to medicine, the arts, defense, sports, and education. And though motion interfaces won't displace the touchscreen, just as the mouse did not replace the keyboard, they will mark an important point in the history of human-computer interaction, allowing us to get more from our technology and interact with it in ways that would have seemed like science fiction just five years ago.

As both Gritton and Altschuler told me, separately, "This is just the beginning."


This Autonomous Drone Thinks With Your Smartphone


Many drones, including the best-selling Parrot AR Drone 2.0, let you control the aircraft with your smartphone. Now a new drone has emerged that actually thinks with your smartphone--as in, you strap your phone into it and the drone becomes autonomous. While this seems like a great idea given the ubiquity of smartphones, it assumes that you've got a powerful enough phone--if you have one at all. Is this a feasible model?

The project, named SmartCopter, is the product of a Vienna University of Technology effort to build a cheaper autonomous aerial vehicle that could help survey disaster zones. Instead of GPS, which is inaccurate indoors, SmartCopter uses the smartphone's accelerometer, gyroscope, and magnetometer to determine the copter's position, which is fed to an Arduino microcontroller that moderates the rotors. In the video below, the team put a downward-facing camera on the copter and let it "read" different paper squares to demonstrate the kind of recognition that could be used for search-and-rescue surveillance or location mapping.
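The team's actual phone-to-Arduino protocol isn't spelled out here, so the port, framing, and gains below are invented, but a sketch like this shows the division of labor: the phone does the sensing and thinking, and the Arduino just sets rotor speeds. It assumes the third-party pyserial package.

```python
import struct
import serial  # third-party pyserial package

# Hypothetical link to the rotor-controlling Arduino; the real
# SmartCopter port, baud rate, and message format are assumptions.
link = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.02)

def send_rotor_speeds(m1, m2, m3, m4):
    """Frame four rotor speeds (0-1000) as start byte + payload + checksum."""
    payload = struct.pack("<4H", m1, m2, m3, m4)
    checksum = sum(payload) & 0xFF
    link.write(b"\xAA" + payload + bytes([checksum]))

def hover_step(base_speed, pitch_error, gain=40):
    """Crude hover correction: if the phone's fused sensors say the nose
    is pitched down, speed up the front pair of rotors and slow the rear."""
    delta = int(gain * pitch_error)
    send_rotor_speeds(base_speed + delta, base_speed + delta,
                      base_speed - delta, base_speed - delta)
```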

Even without the cost of the phone, however, the SmartCopter cost about €300/$412 to build, putting it well above the $299 price tag of the Parrot AR Drone 2.0, which also comes with two cameras and easily synced smartphone software. Obviously it's unfair to compare an experimental model with a production line that's sold an estimated 250,000 units, but unless future SmartCopter users happen to have a sufficiently powerful phone, they'll have to pony up--the three-year-old Samsung Galaxy S II the team used still retails for $250 online. Even in bulk, it costs about $155 per unit, though bulk rates can differ.

While mass production would definitely drive down the price, it wouldn't cancel the cost of buying a new phone for users who don't already have a sufficiently powerful one. Modular phones like PhoneBloks could offset that expense with incremental upgrades or different configurations. No matter the model, you still risk losing the phone if your drone goes down. Even a bad tumble could crack the screen and let environmental crap in.

Whether it's financially feasible to slave drones to personal phones will depend on how low the price will go--but the advantages of such a system are worth mentioning. Software upgrade? Just download it to your phone. With a great API, programmers could cobble together specific software packages for drone functions. For example, if you're going camping, you could download a Map & Scout subroutine for your drone to get the lay of the land.

The team from Vienna University of Technology is pioneering in the right direction, and its publication on the SmartCopter (available for free online) won Best Paper at the 11th International Conference on Advances in Mobile Computing and Multimedia (MoMM 2013). Autonomous drones are barely on the public's radar, and development that builds on tech the public already owns will shorten that gap. Unless it's affordable, however, this tech might stay out of the public's hands--especially if it endangers their precious, precious digital lifelines.

Mind-Controlled Computing Goes Open Source


Brain-computer interfaces (BCIs) are systems that allow direct communication between the brain and an external device. Existing BCI models are being worked on every day, but almost exclusively by researchers and engineers with access to extensive resources. Joel Murphy and Conor Russomanno are two engineers who want to lower that barrier. What mind-controlled computing needs, they believe, is some makers.

OpenBCI is their solution: an open source, Arduino-powered BCI that allows anyone with a laptop to access their brainwaves. Murphy and Russomanno hope that putting a programmable device that pipes neurofeedback into whatever program a coder can cook up within everyone's reach will give BCI development a shot in the arm. So they've launched a Kickstarter, where they explain why they believe the field needs an interdisciplinary approach:

"We feel that the biggest challenges in understanding what makes us who we are cannot be solved by a company, an institution or even an entire field of science. Rather, we believe these discoveries will be made through an open forum of shared knowledge and concerted effort by people from many different disciplines."

As exciting as the prospect of getting BCI tech into the hands of the DIY crowd is, it would be erroneous to believe that the field has been stalling. In the past year alone, we've seen wired implants that allow paralyzed patients to control robotic limbs, a Brown University team develop the first wireless BCI implant, and most recently, a non-invasive BCI that allows for direct brain-to-brain control.

Progress, it would seem, is happening fast. But Murphy and Russomanno think it could be even faster, and further outside-the-box. If their project reaches its funding goal, they just might be proven right.

Nobody Gets A Quadcopter To The Face Thanks To This Algorithm


In his 60 Minutes profile, Amazon CEO Jeff Bezos tried his best to assuage our fears about delivery drones in our neighborhoods. "Look, this thing can't land on somebody's head while they're walking around their neighborhood," he assured Charlie Rose.

Sure they can't.

Quadcopters are mechanically simple and built for durability. As aircraft go, they're also extremely agile--the flip side of their instability. That combination makes them potentially very dangerous.

A group of researchers from the Institute for Dynamic Systems and Control (IDSC) at ETH Zurich think they've come up with a solution: an algorithm that detects propeller faults and allows a quadcopter to be brought in for a safe, controlled landing--even in situations in which it has lost 75% of its engine power. So how exactly does it work?

An Algorithm For Propeller Failure

Today, the software controlling most quadcopters is designed on the assumption that all four propellers are functional at the same time. As ETH Zurich doctoral student (and key researcher on this algorithm) Mark Müller explains, drone software doesn't do a good job of accounting for emergency scenarios.

"During normal flight, a quadcopter can produce three independent torques to control its attitude: 'roll,' 'pitch,' and 'yaw,'" says Müller. "If a propeller fails, this is no longer possible--the strategy for our algorithm is to give up the yaw torque, and let the machine spin uncontrolled about this axis. We then use the remaining propellers to tilt this axis of rotation, allowing the machine to move around."

If it's difficult to imagine what this might look like, here's a video of the algorithm in action:

"The hardest part of the work was the initial mathematics," says Müller. "How [do you] describe the system in a way that captures the relevant dynamics, but is still simple enough for us to analyze and manipulate? We started with Euler's law--a set of three differential equations that describe the rotation of a body as a function of the torques applied to that body. These equations are a gold mine of unexpected and surprising results, and trying to wrap our heads around this was probably the biggest challenge."

If the initial mathematics proved a steep learning curve, however, what surprised Müller about the eventual algorithm was its conceptual simplicity. "The derivation is quite complex, and required a lot of time," he continues. "The implementation on a quadcopter was relatively simple. The control law that we use (the set of equations that calculate the required motor forces) ends up being very concise: to calculate the motor forces only requires a handful of multiplications and additions."
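The team's actual equations are pending publication, but Müller's description--hover thrust plus a few small linear corrections, with the yaw spin simply tolerated--can be caricatured in a few lines. The structure and gains below are illustrative only, not the ETH Zurich control law.

```python
def motor_forces(tilt_error_x, tilt_error_y, spin_rate, hover_thrust,
                 k_tilt=0.9, k_damp=0.12):
    """Toy stand-in for a 'concise' control law on three working motors.

    This is NOT the (patent-pending) ETH Zurich equations -- just an
    illustration of how cheap such a law can be: each motor gets the
    hover share plus small linear corrections, while the yaw spin is
    tolerated rather than fought.
    """
    f1 = hover_thrust + k_tilt * tilt_error_x - k_damp * spin_rate
    f2 = hover_thrust + k_tilt * tilt_error_y - k_damp * spin_rate
    f3 = hover_thrust - k_tilt * tilt_error_x - k_damp * spin_rate
    return (f1, f2, f3)  # the fourth motor is the failed one

# A handful of multiplications and additions per update, nothing more.
print(motor_forces(0.05, -0.02, 12.0, 2.5))
```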

This Is A Safety Announcement

The idea of an algorithm for quadcopter safety didn't immediately jump out as the most exciting potential project, but it was one the team at ETH Zurich realized early on needed solving.

"We enjoy doing public demonstrations of our work, either inviting people to visit the Flying Machine Arena in Zurich, or taking the arena on tour, [as we did] at TEDGlobal 2013," Müller says. The two main questions they worried about were 'What happens if there is a power failure?' and 'What happens if there is interference on the radios used to control them?'"

What started as a safety precaution, however, quickly developed into an intriguing computer science question in its own right as the team realized the complexity of the problem they were dealing with.

"The idea of allowing the machine to spin at high speed, and still being able to control its position, had the perfect flavor for us," Müller says. "It sounded surprising and hard to do, but our intuition said that it should be possible. We first looked at the problem of flying the machine with only two propellers, and only later discovered a solution that uses three propellers. After that, we could also show that it is possible to fly the machines with only one propeller."

Improve Your Flying? There's An App For That

What makes the ETH Zurich algorithm different from previous attempted solutions is that it is entirely software-based, requiring no added hardware whatsoever. Previous solutions mainly centered on physical additions to the quadcopter concept--often proposing hexa- and even octocopters equipped with six or eight motors and propellers.

While these may have improved safety, they would also have done away with many of the upsides of quadcopters, since the additions would make the machines heavier, more complex, less maneuverable, and more expensive to manufacture.

By creating an entirely software-based solution, the ETH Zurich team has not only found a way past these issues but has also come up with a fix that could easily be applied to a large batch of existing quadcopters.

"[In this way] our work is not focused on quadcopters as such, but rather on algorithms and mathematics that allow us to fully explore and exploit the capabilities of dynamic machines," Müller says. "As such we do a lot of work on mathematical modeling and abstraction, allowing us to control complicated systems, and getting them to do interesting things."

Currently the ETH Zurich team has a patent pending on the algorithm, and a paper detailing the invention will be published in 2014.

A World Of Intelligent Machines

So could this be the key to making quadcopters the mass market machines Jeff Bezos (and others) clearly believe they can be?

"We certainly hope so--if flying machines are to become an accepted part of daily life, they will have to be able to deal with failures and unexpected events, and they will have to be provably safe," Müller says. "This algorithm might be a part of a larger safety suite, which helps to reduce the probability of a machine falling from the sky and thus promote the use of these machines in everyday life."

All signs indicate that quadcopters are an immensely valuable tool we're about to see a whole lot more of. Whether it's a consumer delivery service like Amazon Prime Air, serving food in restaurants, or flying defibrillators to people requiring medical assistance, the potential applications are vast--and growing daily.

Thanks to ETH Zurich, we're now one step closer to seeing these solutions realized.

"Machines that significantly interact with their environment are fundamentally different than mobile phones, tablets, et cetera," says ETH Zurich group head and co-founder of Kiva Systems, Raffaello D'Andrea. "You can't 'reboot' a flying machine if something goes wrong, for example. Safety, reliability, and predictability are going to be the most important attributes of intelligent machines."

How Computer Viruses Can Go Airborne


One day Dragos Ruiu noticed that his MacBook Air was behaving as if it had a virus. Over the next few months, things got stranger: one of Ruiu's other computers, running OpenBSD, started to modify its own settings and delete data without any explanation--even though Ruiu, a security consultant, had removed all of the networking cards from the machines as a preventive measure.

How were his computers infecting each other without being connected?

The Impossible Infection

Ruiu assumed his machines were protected because of the "airgap" between them, a slang term which refers to a computer that is not connected to the Internet or any other computers by anything except... air. And yet somehow Ruiu's virus was able to go airborne--meaning that it could infect computers it wasn't connected to.

Ruiu called the anomaly "BadBIOS" and has spent much of the past few years investigating it--although until recently it hadn't received much in the way of academic scrutiny. Now two German computer scientists, Michael Hanspach and Michael Goetz, claim to have cracked the mystery (or a variation of it) by creating proof-of-concept code which shows that the technology does indeed exist to allow malware to jump between two non-connected systems.

Why this is an important discovery is clear: conventional wisdom states that removing a computer from the Internet makes it all but impenetrable. Just last month, while speaking at a Defense One conference, retired Capt. Mark Hagerott theorized that malware able to jump the airgap could "disrupt the world balance of power." It is for this reason that some people have referred to the mysterious BadBIOS as the "God of Malware," due to its apparent invulnerability and potentially major consequences.

So how exactly does Hanspach and Goetz's power-disrupting malware work?

The Sound And The Fury

As it turns out, the answer has everything to do with sound. In their paper--published last month in the Journal of Communications--the two computer scientists describe the way a computer's built-in sound card and microphone can be used to transmit information through the air from one end (the client application installed on the target system) to the other (the server).

"Our research was carried out half a year ago, so it wasn't motivated by [the BadBIOS] findings, but it's certainly a similar type of attack pattern," Hanspach says.

"We found that even there was no existing network interface, they could still communicate using the internal speakers and microphones," Hanspach says. "This is done with a high frequency audio signal which also makes it inaudible." According to Hanspach and Goetz this transmission can take place over a considerable distance: up to 64 feet if both systems are infected.

The technology Hanspach and Goetz used to create their prototype malware was based on software originally designed for underwater communication, which uses ultrasonic frequency ranges to transmit messages.
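Hanspach and Goetz's modem, adapted from that underwater communication software, is more sophisticated than this, but a toy frequency-shift-keying sketch (assuming the numpy package) shows the basic trick: hide each bit in a tone pitched just above the range most adults can hear.

```python
import numpy as np

SAMPLE_RATE = 48000
FREQ_0, FREQ_1 = 18000, 19000  # near-ultrasonic: inaudible to most adults
BIT_SECONDS = 1 / 20           # i.e., 20 bits per second

def modulate(bits):
    """Binary frequency-shift keying: emit one tone burst per bit."""
    t = np.arange(int(SAMPLE_RATE * BIT_SECONDS)) / SAMPLE_RATE
    tone = {0: np.sin(2 * np.pi * FREQ_0 * t),
            1: np.sin(2 * np.pi * FREQ_1 * t)}
    return np.concatenate([tone[b] for b in bits])

# "hi" in 8-bit ASCII is 16 bits, so this clip lasts 0.8 seconds.
bits = [int(b) for char in "hi" for b in format(ord(char), "08b")]
waveform = modulate(bits)  # write to a WAV file or play through speakers
```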

While the distances involved are impressive, however, Hanspach points out that transmitting data this way is slow going. In their study the researchers measured a transmission speed of just 20 bits per second, which works out to about two keyed-in characters per second. To put that in perspective, an article like the one you're reading right now would take roughly 30 minutes to send by such acoustical methods: hardly the stuff of the high-speed espionage you see in spy movies, where whole hard drives are transferred in minutes or even seconds.

Reaching this "optimal" speed is also dependent on having access to a clear and reliable data feed. This is easily established when working under lab conditions, but is likely to prove more difficult to replicate win the real world, where interference like cell phone signals, television, and other electronic emissions may all have a detrimental effect.

This last point was made by Malware Intelligence Team lead and Malware Unpacked scribe Adam Kujawa, who also speculated that infection (in order to allow the signals to be sent and received) could not be carried out entirely without contact:

"While I personally don't think remote infection would be possible using this method, if an attacker were to use something like an infected USB that was plugged into an airgapped system, it could automatically install the malware and begin sending or receiving data."

"None the less, it's pretty interesting," Kujawa writes.

What Can You Do About It?

All of this suggests that, as far as attack methods go, there are far more efficient ways to infect or hack another machine. It's for this reason that Hanspach says your "average" computer user doesn't necessarily need to worry about the "airgap" problem--even if he or she has personally sensitive material, like private files or banking details, saved on a computer. A bit like worrying about being hit by lightning while crossing a busy road, there are far more imminent threats around.

"As an average user, you're likely to be connected to the Internet and other networks on a regular basis," Hanspach says. "You're more vulnerable to these other attacks--most likely over the Internet--than you are to the kind of acoustical attacks that we're working on. It's more of an issue for those working in high security."

It is in this area that Hanspach and Goetz focus their research--and where they have had the most feedback regarding their discovery. "Prior to our work I think it would be fair to say that there was not much awareness of the possibility of these attacks," Hanspach says.

"People working in security are understandably on the lookout for problems like this which could result in theft--since they represent remaining threats in high security systems," Hanspach says. "People need to be aware of these attack patterns because if, for example, you have a laptop that contains highly valuable information, you should not assume that the data cannot be stolen just because the computer is not connected to any existing networks. It's all about creating a more secure system for high security applications."

As to what concerned parties can do about the problem, Hanspach and Goetz have a few suggestions. "The most obvious solution would be to deactivate your audio devices, although you don't have to go that far," Hanspach says. "Instead you could carry out audio filtering--filtering out the parts of the signal that do not register to the human ear--so that this sort of communication is no longer possible."

"These are the solutions we reported in our paper, although I am sure that far more countermeasures will be presented in the future."

Why Good Programming Projects Go Bad


In 1964, Fred Brooks found himself in charge of one of the biggest software projects ever attempted--the operating system for the IBM 360 series, a new family of general-purpose computers that became known as mainframes. Brooks quickly realized that managing a software project was very different from producing hardware. He documented his observations in the software development classic The Mythical Man Month, in which he famously declared that "adding manpower to a late software project makes it later."

Although it was published almost 40 years ago, The Mythical Man Month is still astonishingly relevant today. Brooks writes elegantly, astutely, and often entertainingly about the joys and woes of the craft of software development and the psychology of those who practice it (one of his famous maxims: "All programmers are optimists").

Brooks went on to found the Computer Science Department at the University of North Carolina and to receive the Turing Award in 1999. In 2010, he published his latest book, The Design of Design: Essays from a Computer Scientist. Now 82, he talked to Co.Labs about project management, designers, and why managers still make the same mistakes.

How did you first become interested in computers?

I grew up in a small farm market town in Eastern North Carolina--Greenville, North Carolina. I was reading, I think it was Time, in the public library and the cover had a drawing of the Harvard Mark I. I had been interested in manual methods of business data processing since I'd been about eight or nine (I started with a card file on my map collection), edge-notched cards, those kind of tools. So when I read about this I was fascinated, and I decided that's what I wanted to do.

The Mythical Man Month was based on your experience of managing the OS/360 project at IBM. How did you end up heading up that project?

I went to Harvard in Computer Science. (It was called Applied Mathematics in those days.) I did my dissertation under Howard Aiken, who had built the Harvard Mark I back in 1944. Then I joined IBM working on the Stretch supercomputer for four years. After being in charge of architecture for a new product line which did not fly, I was chosen to manage the System 360 computer family hardware. That was 1961. In 1964 it became evident that the hardware was on track and was being released into the factories on time. We were getting good cost estimates and all that, but the software was in big trouble. So my boss and I decided that I should go run the operating system project and see what I could do about bailing it out.

How big a project was the OS/360 for IBM?

I don't know the total cost, but I've heard numbers anywhere from 300 to 500 million 1960s dollars. Those dollars would be today about 10 times more valuable, somewhere in the neighborhood of $3-5 billion. At the peak the project had a thousand people, but the average was much lower than that--we built up. There were laboratories all over the world--Britain, Germany, France, Sweden, California, and New York State.

You have said that "Everybody quotes it (The Mythical Man Month), some people read it, and a few people go by it." Why is that?

It's all about disciplined decisions that are hard from a manager's point of view. Just look at the software mess over the rollout of the health law. They made almost every mistake in the book. The book has more than 500,000 copies in print and is used in most software engineering curricula, so many people have learned from those mistakes of the past. But it's evident that if one picks people who are not software engineers to run major software engineering projects, you wouldn't expect them to have studied the subject. Therefore they make the same mistakes again. The biggest mistake with the health law rollout was that there was no one in charge. That's the biggest of all mistakes. That project seems to have had neither architect nor project manager. How bad can you get?

Why is it so important to have both a project manager and an architect on a software project?

I think it's important even with a small team to distinguish the function of the project manager from the function of the architect. When I was teaching software engineering, which I did 22 times here (the Department of Computer Science at the University of North Carolina, which Brooks founded), I always made even teams of four people choose a project manager separate from the architect, who was responsible for the technical content. The project manager is responsible for schedule, resources, and such. I notice the same division of function occurs in the film industry. A film has a producer who is in charge, but the person whose name is last on the credits is the director. The producer is responsible for making it happen, and the director works for the producer, but the director is responsible for the artistic content. I think the same separation of function that evolved in that industry applies as well in ours.

The project manager first has to be tough, and second has to be flexible. A motto I consider important is "Never uncertain; always open." I saw that in Latin (Numquam incertus; semper apertus) on the ceiling of a German fraternity in Heidelberg. It's important to always have a direction and be going there. You can't steer a ship that's not underway. But it's also important to be open to changes in circumstance and direction, and not to be completely bull-headed. A project manager also has to be a people person. Project management is a people function, and most of the problems are people problems.

Almost 40 years after the publication of The Mythical Man Month, why is it still so difficult to estimate how long it will take to make a large piece of software?

The problem is that we are working with ever-new technology on ever-new development. Product development always contains many uncertainties. In building a house we are working with known technologies and pretty well known specifications. Building a whole new thing like the first nuclear submarine or the first space shuttle, you don't know what you're going to run into. Any piece of software is a whole new thing, unless you are just copying somebody else's.

In The Design of Design you say "I believe 'a science of design' to be an impossible and indeed misleading goal." Why?

The difference between science and engineering is not so much what scientists and engineers do as why they do it. The scientist may spend a lot of effort building apparatus for an experiment, but he builds in order to learn. The engineer may spend a lot of time studying materials or previous designs or users, but he learns in order to build. I think that distinction is crucial. So I believe the National Science Foundation's program hunting for a science of design, or indeed Simon's original Sciences of the Artificial book, was chasing a thing which isn't--a chimera--because building is different from learning the laws of nature. One can systematize physics; one doesn't necessarily systematize the process by which physicists work. There's not necessarily a science of science, in other words, so to expect a science of design, which is really a quite different undertaking, I think is misguided.

You also say that "standard corporate standard design processes do indeed work against great and innovative design" but that process can also improve average performance. How does the designer know when to apply process?

That's the judgment that distinguishes good designers from poor ones, I think. There's no way to tell a person how to do that. It's possible to train people to do it by putting them through many projects, mentored and advised, so that they acquire the feel, the taste, and the judgment. There's an old saying: "Good judgment comes from experience, and experience comes from bad judgment." I think that applies to the training of designers.

How should designers be trained?

Supervised, mentored design experience in a variety of projects. A rotation between different aspects of the design and construction process. The designer stands between the user and the builder. He needs to understand the user and that means spending time with users doing what users do. He needs to understand what building's about and that means spending time building stuff.

Most big companies have a pathway for training managers with job rotations, maybe with formal academic training (Brooks dispatched one of his own reports, Edgar Codd, who later invented the relational database, back to grad school mid-career), and then working up multiple levels. In IBM the most promising of the young managers were made aides-de-camp to the chief executives. The Chief Financial Officer, the CEO, and the Chief Operating Officer had lieutenants who were considered very promising and might one day inherit those jobs. This mentoring process, I think, is just as important for designers as it is for managers. The managers are the ones who decide who gets trained. They are aware of the need to train new managers, but they don't have any reason to understand or believe intuitively in the need for training the designers.

What did IBM's CEO teach you about making things?

When I was leaving IBM to come to Carolina, Mr. Watson (IBM's CEO Thomas Watson Jr.) asked me down to the executive dining room for a one-on-one lunch. You know what that's about--that's an arm-twisting session. He proposed that I stay at IBM. I didn't want to do that. I said, "I really like to make things, and I want to get back closer to the technology," because I was a third-level manager by this point. He said something very enlightening and very memorable. He said, "I like to make things too. Have you looked at Poughkeepsie (IBM's biggest computer manufacturing complex) recently?" And then I realized that this 10,000-person plant and laboratory complex was his creation. He had persuaded his daddy to get the company into the computer business. I never would have thought of that as making things. He raised my vision a whole lot in one sentence.

All The Best Pictures From The Chinese Moon Landing


On December 14, China's Chang'e 3 spacecraft landed on the moon and sent back the first new images from the lunar surface since the 1970s. We've collected 21 of the best images from the voyage from around the web.

The rocket pictured is China's Long March 3B model, carrying a moon rover named Yutu, or Jade Rabbit in English. The mission launched from Xichang Satellite Launch Center on December 2, and took almost two weeks to reach the moon's surface.

Artist rendering of Yutu.

The 300-pound six-wheeled rover is similar to NASA's own Mars rover Curiosity, and will explore a flat region of the moon called the Bay of Rainbows, or Sinus Iridum. The lander and rover each have separate scientific missions they'll pursue while on the lunar surface. The rover will look at the moon's geological structure and try to find natural resources, moving at about 200 yards per hour. The lander will perform experiments where it sits.

Apple Takes A Step Backwards From Flat Design


Back when Apple announced it would switch to a flat design language, we asked seven prominent iOS developers what they thought of the change. "Even though our app puts content first and has a minimal design, switching to Helvetica and putting our icons on a diet did little to make it feel like a good iOS 7 citizen," one developer told us anonymously (to preserve his developer NDA with Apple). "One of the biggest challenges we encountered was to follow the new design gestalt without becoming too generic and sterile."

As it turns out, generic and sterile were minor issues compared to usability. Without button shapes, users familiar with iOS 6 may have trouble identifying which areas of the screen are tappable controls--an especially salient issue for the visually impaired. Now that Apple has seeded iOS 7.1 to developers, it appears that its designers have added an option for users to turn on "button shapes": rounded-rectangular shadows that appear behind the text of text-only buttons. Here's what button shapes look like in the Calendar app:

Calendar list view, via Cult of Mac

An iOS developer named Steve Aquino says that while a lot of design snobs will criticize Apple for this move, it's more important that Apple stay true to its roots of building handicap-accessible devices, rather than adopting a certain aesthetic just because Jony Ive likes it. Writes Aquino on his blog:

Most of the commentary I've read on this change has been from designers who are upset that the borders are ugly, and they question why Apple chose to add them. From a pure design perspective, aesthetically speaking, it's perfectly reasonable to criticize the new shapes. They are indeed ugly, but the overall importance of this new addition trumps the way in which they're presented. That is to say, regardless of how the buttons look, the sheer fact that they add a level of desperately needed contrast makes the buttons a huge usability win, and likely--rightfully--will garner much praise from the visually impaired segment of the accessibility community.

Prior to his death, Steve Jobs was said to have put his team through hell just to get the right wood-grain or leather pattern, much to the ire of people inside and outside Apple. But Aquino points out that Jobs's slavish dedication to real-world surfaces was at least more user-friendly (if visually overwrought) than iOS 7's new flat design.

From an AX perspective, what the new Button Shapes do is restore a sense of explicitness to iOS 7's interface. These types of visual cues are so important to many visually impaired users, myself included. Whereas previously I struggled in identifying whether a label was an actionable control or simply a label, iOS 7.1's Button Shapes hearken back to the iOS 6-style, This is a button. Tap me!, level of usability. And therein is the point: usability. As I stated, it's perfectly valid to wince at and decry the visual design of the new buttons, but make no mistake, the addition of this feature is a tremendous improvement for visually handicapped users such as myself. These buttons will make iOS 7 infinitely more usable than it is today, and Apple absolutely should be applauded for addressing a serious issue--not only for me, but even for the normal-sighted as well.


Three New Gadgets That Shoot Full 360-Degree Photos


Eye Mirror is trying to tackle affordability and compatibility for its lenses with a new Kickstarter campaign. The most appealing aspect is the ability to attach an Eye Mirror lens to a new GoPro and shoot 360-degree video. The lenses can also attach to compact point-and-shoot cameras as well as DSLRs. The 360-degree lenses start at $195 at the early-bird Kickstarter price and go up slightly beyond that.

In addition to the hardware, Eye Mirror offers advances on the software side: the 360-degree video can be easily edited with any industry-standard program, and the point of view can be converted using drag-and-drop software. The company also has what it calls a "YouTube-like" site to which you can upload your 360-degree video and allow anyone to interact with it.

Another product in the 360-degree space is Panono's panoramic ball camera, which is currently in the middle of its own crowdfunding campaign. The camera ball was originally shown off as a concept in 2011, but is now on the cusp of becoming a reality. It takes a completely different approach from Eye Mirror's lenses, but still manages to bring the effect to the consumer level. Throwing the camera ball in the air produces an amazing, interactive photograph of your surroundings. The ball is available during its campaign for $499 and afterwards for $599.

The Oculus Rift--originally designed for virtual reality gaming--has been an integral part of the 360-degree movement, and both Eye Mirror and Panono are compatible with the head-mounted display.

Simplifying the concept even further, BubblePix successfully funded its BubblePod, an add-on designed to bring the 360-degree experience to phones. Steadier and more complete than a panorama image, the BubblePod spins a phone around and, with the company's app, produces an interactive image. At about $40, the BubblePod is aimed at a slightly different audience than either the Panono or the Eye Mirror, but it expands on the promise of interactive 360-degree images.

Apple Could Beat Bitcoin Out Of The Retail Market


Bitcoin is winning over some retailers with its cheaper transaction fees and digital access, but it hasn't permeated in-store purchases on a grand scale, and it might run into an unlikely contender: Apple. With its in-shop iBeacon platform, Apple could easily out-convenience Bitcoin and leverage its massive infrastructure to lure retailers away from the cryptocurrency market.

Apple's anti-Bitcoin strategy is taking shape: Apple recently strong-armed the messaging app Gliph into eliminating Bitcoin sending from its iOS app, following its earlier moves to oust the digital wallet apps Coinbase (back in November) and Blockchain (in 2012) from the App Store, citing vague violations of its terms of service. Apple doesn't want Bitcoin purchases happening on its servers.

Which is where Coinpunk comes in: an open source Bitcoin wallet that allows you to keep your wallet on your own server. Current wallet services like Coinbase host wallets on their own servers, a dependency that Coinpunk creator Kyle Drake says makes them more akin to bank accounts. Using Coinpunk's mobile web interface, even iPhone users can access their wallet and use a QR code scanner, the preferred way to make Bitcoin transactions. With Coinpunk, Drake is hoping that privacy and simplicity will win people over from today's fee-heavy online market and into the privacy-friendly world of Bitcoin purchasing.
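On the merchant's side, the QR flow is simple: encode a payment request as a URI and display it for the buyer's wallet to scan. Here's a minimal sketch assuming the third-party qrcode package; the address and amount are placeholders, not a real payment request.

```python
import qrcode  # third-party package: pip install qrcode

# A BIP 21 payment URI; the address and amount are placeholders.
uri = "bitcoin:1ExamplePlaceholderAddressXXXXXXXX?amount=0.005&label=Coffee"

# Render the QR code a customer would scan with a wallet interface
# like Coinpunk's to authorize the payment.
qrcode.make(uri).save("payment.png")
```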

But you'll have to pry them away from their iPhones. Apple is betting big on another technology to facilitate the retail experience: iBeacons, which Apple has been inadvertently preparing for by equipping iPhones with Bluetooth Low Energy (BLE) since the 4S--amounting to about 200 million devices worldwide. BLE allows phones to talk to iBeacons (and vice versa) quickly and with little battery cost, facilitating those fancy in-store apps promised for Macy's and the Apple Stores that point out deals and, in the future, will likely allow on-the-spot purchases. If Apple wanted to keep Bitcoin out of these stores, it could mimic its App Store policy and require iBeacon users to similarly exclude Bitcoin purchases.

Combined with the iTunes media marketplace and its Passbook app, local purchases via iBeacons are setting iOS devices up to be central purchasing hubs with all your credit card and financial data keyed in. Tim Cook said last July that there are over 575 million active iTunes accounts, which dwarfs PayPal's 110 million active accounts. If (or when) Apple decides to open its iTunes store to other markets, it already has the brand trust and device saturation to mainline purchases within its network.

Bitcoin's decentralized freedom comes at a price: trust. Banks can sometimes recall fraudulent purchases or withdrawals, but Bitcoin transactions are intentionally irreversible. SpendBitcoin's list of places that directly accept Bitcoin has a front-and-center disclaimer stating: "All of these sites are listed by the site owners. We have not vetted them. Please search and ensure they have a good reputation before sending bitcoins to them."

Bitcoin's first Black Friday had just over 400 retailers participating in special deals. As TechCrunch points out, there's no reason for big-name outlets like Google, Amazon, and eBay to participate since they have their own digital wallet services (Google Wallet, PayPal, and Amazon One-Click). While Google Wallet is streamlined through Android devices, Apple's still got the in-person retail hook with iBeacon. If it can keep Bitcoin out of its markets, Apple's got the best shot of keeping American retail safely within the iOS family.

As Music Downloads Decline, Expect More Anti-Spotify Anxiety


It's official: We're buying less digital music. Just like vinyl, cassettes, and CDs before it, the digital download may have reached its peak, with total sales dropping 4% from last year. The culprit? It's complicated, but expect the already raging debate over Spotify, streaming, and the future of music distribution to heat up.

Here's a breakdown. In the first half of this year, U.S. music fans paid for 25-30 million digital tracks per week, according to Billboard. In October and November, that number dipped below 20 million. Billboard blames "a web of interrelated stories that show new technologies affecting consumer behavior" for the decline, with the most obvious culprit being that little green and black icon on your home screen.

Spotify, the frontrunner in the growing all-you-can-stream music subscription space, is already the source of ongoing controversy within the music industry. With the news that paid downloads are starting to erode, the anti-streaming rhetoric will likely heat up even further. That's because of longstanding anxieties over the economic viability of the streaming model and whether or not artists can get a fair payout from the arrangement.

The Back-And-Forth Over Streaming Royalties

The anti-streaming argument isn't new, but it was voiced most prominently (and thus loudly) this summer when Radiohead's Thom Yorke and his producer Nigel Godrich pulled their side project Atoms For Peace from Spotify's library, citing unfair payment to new artists. As we argued at the time, Yorke and Godrich had a point: The financial mechanics of the streaming model might work for major label executives, but that doesn't mean the money always trickles down to artists in sufficient sums.

More to their original point, streaming simply does not pay out as much to new artists (as opposed to established bands streaming their back catalogs) as do the price tags affixed to physical records and digital downloads. In short, selling one's music pays more than renting it out. And now, barely a year after they outpaced physical album sales for the first time, digital music sales are starting to wane, at least according to the latest numbers.

Feeling the heat from this type of negative publicity, Spotify fired back a few weeks ago when it launched a dedicated site explaining its royalty payout regime, as well as several new features geared toward placating nervous artists. In the near future, artists will have direct access to analytics about their streams and the ability to sell concert tickets and merchandise from their Spotify profiles.

Among the trove of statistics and charts put out by Spotify in early December was a comparison of how much money the company pays out to rights holders alongside other digital sources like video sites (read: YouTube) and Internet radio services (read: Pandora). This is, of course, not an apples-to-apples comparison, but the numbers certainly shine a flattering light upon Spotify and its subscription-based competition.

Missing entirely from this breakdown, unsurprisingly, is a revenue comparison between streaming and paid downloads. For Spotify, that angle is far less flattering.

How Many Hours Per Week Should Freelancers Make Themselves Work?


The traditional workday is not kind to those who do creative work, and it doesn't seem to be getting better. If you work in a field that falls under the wide umbrella of "knowledge work"--anything from writing to programming to complex mathematical problem solving and beyond--you may find yourself in a work environment where forty hours isn't a ceiling but a starting point.

But the 40-hour work week assumes that forty hours is an ideal window in which to extract creative output from knowledge workers. The trouble is that it's not. Over the weekend, a story in The Atlantic posited that the standard 9-to-5 workday is deleterious to creative workers' output. The story cites an article on Salon that examines the origins of the 40-hour workweek--popularized by Henry Ford--and the mistaken assumption that knowledge workers can maintain the same eight-hour output without a drop in productivity.

It would seem then, that the traditional work week is not set up in a way that's optimal for creative work. But you can work around it, and maybe even use it to your advantage.

Doing creative work often involves setting aside longer, uninterrupted stretches of time during which progress can be made towards a long-term project. However, our workdays aren't just big blocks of time where all we have to do is "be creative." There are meetings to have, correspondences to maintain, and other tasks to complete.

It's to that end that Cal Newport, writing for 99u, advocates a system he calls Getting Creative Things Done. It boils down to scheduling time for creative work, but in a very particular way: if we focus on process instead of goals, Newport believes, we allow time for valuable mental detours and release ourselves from the anxiety of having to hit an arbitrary goal post.

Newport's method may not work for everyone, but it comes with a valuable insight: studies show that scheduling is invaluable for doing creative work, but how you schedule is far more important. Pay attention to yourself throughout the workday, and if you find you're better suited to certain tasks at particular times, have your schedule accommodate that. Need some help? Keep in mind that most people can only focus for 90-120 minutes before needing a break, and block out your time accordingly. Consider tracking your progress with a tool like RescueTime.

By learning to manage your energy instead of your time, you can game the eight-hour workday to work for you, and possibly get rid of it altogether.

These Campus Celebs Are Huge On Vine And Instagram


Marketing on social networks is hard, and as we've all witnessed, bad marketing can be worse than no marketing at all. Maybe unsurprisingly, there is one demographic that seems to have mastered the art of social self-promotion far better than corporations who spend millions: college kids.

Kids like Logan Paul. "I would lose sleep about thinking how I need to get Vine famous," Paul, a 19-year-old freshman at Ohio University, told me over the phone. "I was jealous of these kids who were like 16 years old and 200,000 people know them... I was like, I want that, I need that and I can have it if I just put some effort into it!" Four months ago, Paul had 900 Vine followers. Today, he has 1.7 million.

A company called Sumpto is trying to capitalize on kids like Paul. Sumpto connects brands with top college influencers, giving the brands exposure to their target audience and giving a few kids a little extra cash. The company has found 33,000 of these campus social network celebs nationwide, 75 of whom have followings over 100,000--averaging 343,000 followers each. Now the company is doing promotions for companies like JackThreads, whose brand engagement on networks like Instagram is a fraction of the engagement on content created by Sumpto's influencers.

Is this the promised land of social media marketing? Are these kids the real deal? I spoke to three of them to find out.

Logan Paul: The Practice-Makes-Perfect Marketer

Paul and his brother had been making YouTube videos for a while when they came across Vine. He says Vine's recent addition of the "reVine" tool was key.

"I had a sense of what people find funny because of the YouTube stuff that my brother and I were doing, but with Vine it's a little different because you have to pack everything into six seconds. So I'd look on the popular page--I'd look at what the top people were doing, and I'd see, 'Oh this is what people like, this is what they find funny on Vine.' So I would... take that and then twist it, mold it, mend it to make it fit my own unique style," he says.

With a little practice, he got good. "I made some funny videos, not bad, good for starting. But the one I made when I had 900 followers, it got reVined by another big Viner who had 400,000 followers. And that was huge."

Most marketing today isn't done on Vine, but that may be changing. The network appears to have survived the introduction of Instagram video, and it has been adding features. I personally veered away from Vine after the reVine feature was introduced, because it clogged my feed with videos my friends liked but I had no interest in. From a marketing perspective, though, that's an ideal feature.

Paul says he uses Vine as the so-called "top of the funnel" to draw people in with pure personality and lead them to his other social profiles. "Vine has been the biggest catalyst because it's a glimpse of my personality. People are like, wow, okay, this kid looks interesting, I'll go follow his Twitter too," says Paul. "If you create good content, then people are going to start to notice that."

He's onto something crucial that big companies aren't doing yet: differentiating social feeds with content meant for different purposes. Come for the entertaining Vines, stay for the tweets. With all the social platforms out there now, lots of companies figure that redundancy is easier. But a better strategy may be to avoid cross-posting everything and instead cultivate a unique personality on each account.

Paul also pointed out to me that it's important to show interest in the community you're trying to be a part of. "Be interested, see what other people do, see if you can make friends and just stay up to date with the things that are going on," he says.

Kyle Herbert: The Natural

Kyle Herbert, 22, isn't like Logan Paul. He didn't study social media fame, and he hasn't really expanded outside his immediate context. But as a senior at Arizona State University, he has completely mastered one thing: keeping it casual.

"Sometimes, I think marketing companies cater to the individual because they need this, they want that," says Herbert. "Whereas if you throw a little humor into it, put it more on the casual side, people are more comfortable with it. I feel like it's the comfort game almost, where in advertising you know you're being sold something, whereas in social media, I'm subconsciously telling you what you should be wearing and what you should be using, but you're only thinking that because I'm like, 'Yeah, go check it out. Look what I'm doing.' 'Oh, that's cool. I follow you. I like you. I want to do this, too.'" Kyle has around 5,000 followers on both Instagram and Twitter, almost all of them within his school network.

But despite his seemingly natural grasp of the social media game, Herbert came to it in an understandable attempt to make his big campus feel smaller. "For the first semester, I hated life at ASU because I would walk to class and then walk back to my dorm room and just not know anyone and never see anyone twice. It was really disheartening but I don't know. I figured out a way through social media and just through being friendly and congenial, I gained the hearts of all my peers, I guess. So now when I graduate from ASU and I go out to the real world, it's almost that same situation where I was once the fish, but now I'm being tossed into the even bigger ocean. I'm just going to keep going and keep working at it, keep doing everything that I can to gain new followers and keep those followers."

Social media is, at its core, a way for people to connect. So, as Kyle points out, the best thing you can do is be relatable: People want to be able to tell that there's an actual person behind your handle.

Andy Rexford: AKA Anonymous Andy

Andy Rexford is different. His personal Twitter doesn't have tens of thousands of followers, and not many people on his Saginaw Valley State campus know who he is. But he's the brains behind @CollegeStudent, an anonymous Twitter account with nearly 350,000 followers.

Being anonymous changes the game. There is no personal glory to be had, and the personality you craft must be separate from yourself and from the presence you portray on your own social media accounts. It's also the closest analogue to what companies and brands have to do every day. So how do you do it well?

"The shorter the better," Rexford, 21, told me. "Actually, ever since Twitter's new update, I get more engaged. For CollegeStudent, I like to create a lot of what I'm going through and what my friends are going through and about the hard time we're having and stuff--just things people can relate to."

Although the account is anonymous, that anonymity arguably gives you more freedom to craft an identity that appeals to a huge group of people. It would be harder to present yourself as a niche product or account that people should follow, but, Andy says, anonymous accounts like @CollegeStudent can be designed for exactly that purpose: "Anything that will engage your followers."

On these huge accounts with no visible person behind them, personal touches are crucial, Rexford says.

"It's good to use hashtags to get better hits. But visitors have to know who [their] customers are and sort out the stuff that [the customers] want."

A Google Hangouts Plug-in That Makes You A Nicer Person

Now that cameras come standard on every laptop, tablet, and smartphone, video chat is everywhere. But we're still the same needy, anxious, erratic humans--and it shows when we try to communicate digitally face-to-face. A new Google Hangouts plug-in is harnessing facial recognition and algorithmic speech-pattern analysis to make us nicer, more attentive chatters. Can this conversational wingman improve how we talk to people?

Think of it like a play-by-play: The plug-in, Us+, monitors your conversation, tracking bar graphs of Positivity, Self-Absorption, Femininity, Aggression, and Honesty, and occasionally spits out pop-up advice like "Stop talking about yourself so much." The only drastic action the app seems to take is an automute function for the worst sin of all, talking too much.

To stick with the sportscaster theme: Us+ would probably reinforce good conversation habits like honesty and positive body language, but its on-the-fly social etiquette correction is a slam dunk for professional calls--like remote job interviews. Forget the nightmare of body-language-less phone interviews: An Us+ conversation prods you into appearing more positive, receptive, and smiley--all great subconscious signals of competence.

Us+ is an open source joint project between artist-programmers Lauren McCarthy and Kyle McDonald, who previously worked together on FriendFlop, a Chrome extension that scrambles the names on your Twitter and Facebook timelines, "dissolving your biases and reminding you that everyone is saying the same shit anyway." In a similar vein, Us+ asks, is all this technological mediation between myself and everyone else a good thing?

Us+ has tools to measure our conversational habits. Perhaps, McCarthy told Fast.Co Labs in an email, there is a real potential here to use social analysis and feedback to bring us closer together and understand each other more:

"While we all might agree we would like to sound less self-absorbed, what does it mean to sound more feminine or aggressive, and what amount of this do we aim for? What subtle power and control is embedded in the technologies we interact with?" McCarthy said. "When creating or dealing with technologies that augment and change us, it's important that we keep asking ourselves how and why we are determining the end goals."

The plug-in is available for free (you can video chat using Us+ straightaway by clicking here) and its API is up on GitHub. Its linguistic analysis is based on Linguistic Inquiry and Word Count (LIWC), developed by James W. Pennebaker at UT Austin, and inspired by work done with Sosolimited.
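Under the hood, LIWC-style analysis is conceptually simple: count how many of a speaker's words fall into hand-curated categories. Here's a minimal Python sketch of the idea. Note that the word lists below are invented stand-ins--the real LIWC dictionary is licensed and far larger--and Us+'s actual implementation may differ:

    # A minimal sketch of LIWC-style scoring: each category's share of
    # total words spoken. The word lists are invented for illustration.
    import re
    from collections import Counter

    CATEGORIES = {
        "positivity": {"great", "happy", "love", "nice", "good"},
        "self_absorption": {"i", "me", "my", "mine", "myself"},
        "aggression": {"hate", "stupid", "angry", "fight", "kill"},
    }

    def category_scores(transcript: str) -> dict:
        """Return each category's proportion of all words in the transcript."""
        words = re.findall(r"[a-z']+", transcript.lower())
        total = len(words) or 1  # avoid dividing by zero on empty input
        counts = Counter(words)
        return {name: sum(counts[w] for w in lexicon) / total
                for name, lexicon in CATEGORIES.items()}

    print(category_scores("I love my work, but I hate meetings about me."))
    # -> {'positivity': 0.1, 'self_absorption': 0.4, 'aggression': 0.1}

A transcript where 40% of the words point back at the speaker is exactly the kind of thing that would trigger the "Stop talking about yourself so much" pop-up.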

What Does Artificial Intelligence Really Mean, Anyway?

The great promise--and great fear--of Artificial Intelligence has always been that some day, computers would be able to mimic the way our brains work. However, after years of progress, AI isn't just a long way from HAL 9000; it has gone in an entirely different direction. Some of the biggest tech companies in the world are beginning to implement AI in some form, and it looks nothing like we thought it would.

In a piece for the BBC's website, writer Tom Chatfield examines the recent AI initiatives from companies like Facebook--which announced last week that it is partnering with NYU to build an artificial intelligence team aiming to develop a computer that can draw insights from enormous data sets--and argues that such developments are completely contrary to the classic definition of AI as a field.

Chatfield's argument is centered on a feature in The Atlantic on cognitive scientist Douglas Hofstadter, who believes that what Facebook is doing, along with other recent advances like IBM's Watson, doesn't qualify as "intelligence." Writes Chatfield:

"For Hoftstadter, the label "intelligence" is simply inappropriate for describing insights drawn by brute computing power from massive data sets - because, from his perspective, the fact that results appear smart is irrelevant if the process underlying them bears no resemblance to intelligent thought. As he put it to interviewer James Somers, 'I don't want to be involved in passing off some fancy program's behaviour for intelligence when I know that it has nothing to do with intelligence. And I don't know why more people aren't that way.'"

To that end, Chatfield argues that we've created something entirely different. Instead of machines that think like humans, we now have machines that think in an entirely different, perhaps even alien, way. Continuing to shoehorn them into replicating our natural thought processes could be limiting.

Some are inclined to agree. Writing for the MIT Technology Review, Tom Simonite reiterates just how bad computers are at tasks that are easy for brains, like image recognition. Simonite attributes this to the way we've been building computer chips. Namely, that it's going to be impossible for computers to imitate non-linear thought processes if we continue to use hardware that's designed to execute linear sequences of instructions--the CPU-RAM design called the Von Neumann architecture. Instead, an answer may lie with neuromorphic chips like IBM's Synapse, which are specifically designed to work the way our brains do.

The problem, Simonite writes, will be making them work on a larger scale. "It is still unclear whether scaling up these chips will produce machines with more sophisticated brainlike faculties. And some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities."

As it turns out, copying biology is really damn hard. While scientists like Hofstadter hold up the Platonic ideal of AI as a computer that functions the same way our brains do, perhaps the Deep Learning approach embraced by Google is the means by which we get there. Maybe you don't need neuromorphic chips to build a real-life HAL. Maybe you just need lots and lots of data.


Amazon's Smartphone Could Sport A Totally Alien Interface

Unidentified supply chain sources at Taiwan-based Primax Electronics claim that Amazon has placed orders for compact camera modules (CCMs) intended for use in a soon-to-be-released smartphone. If the rumors are true, the new phone will be equipped with "floating touch technology," allowing the user to operate the screen without ever actually touching it.

Primax will supply three of the six CCMs, consistent with the much-speculated-about "Smith"--Amazon's codenamed pet project, which is said to have "a display that moves to 'give the impression' of 3-dimensional depth and motion." By combining six front-facing cameras, the device will track the user's eye and head movements to match their point of view.

Like the Kindle Fire HDX, the phone will run a modified version of Android and may even include an adapted version of Amazon's successful Mayday button. And the Seattle-based giant, never straying too far from its roots, will build an image recognition system into the software: It will be able to identify an object and bring the user to "a relevant page in Amazon's online store in case they wish to purchase one."

There are currently more than 209 million active Amazon customers, and Jeff Bezos is constantly pushing the company into new channels--AmazonFresh, Amazon Prime, and most recently delivery drones. So it is not surprising that the Amazon empire would finally launch its own smartphone.

The smartphone will tap into an existing pool of loyal "Amazoners," but will probably win some easy converts too. Apple, meet your new opponent.

Here Are 6 Last-Minute Ways To Maximize Holiday App Sales

'Tis that time of year again, when app developers make a year-end push to move as many copies of their apps as they can. The holidays, after all, can be incredibly lucrative for developers. Tens of millions of people across the world will be opening up brand-new iPhones, iPads, and iPod touches on Christmas Day, hungry to fill them with apps. So how can you, as a developer, take advantage of this holiday gold rush?

1. Forget About The App Store Freeze

This applies to developers selling on Apple's App Store. In years past, developers could game the App Store's Top Charts by drastically lowering their apps' prices in the week before Christmas. The resulting spike in sales would push the apps up the Top Charts rankings. What developers lost to the lower prices they would gain back in volume over the App Store freeze, because their apps would be locked into a high-ranking position in the Top Charts for almost a week.

EA was one of the first to use this tactic to great success, in 2010. However, as VentureBeat points out, trying to game the App Store freeze--now down to about an eight-hour window--is pointless, and could actually hurt you:

Just in case you're still thinking about trying to time it just right, there's yet another reason not to gamble. This reason hinges on the difference between the app ranking calculations on the backend and what's displayed in the App Store.

The freeze is technically only a front-end freeze: the Top Charts visible to users browsing the App Store stop moving. Behind the scenes, though, the algorithms that calculate exactly where each app ranks continue to update throughout the freeze.

The net impact is that pausing app marketing campaigns during the freeze will drop you further behind your competition as the algorithms notice fewer downloads of your app. When the front end is once again updated, your app could suddenly drop--at exactly the time you don't want it to.

Simply put, many developers still think the freeze will lock in their spot in the Top Charts for as long as it did back when the App Store was like the Wild West. Now that it no longer does, dropping your app's price as low as it can go could cost you a great deal of revenue--revenue you won't necessarily make up in volume if your app can't stand on its own in the Top Charts over the holidays.
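To make the mechanics concrete, here is a toy model--emphatically not Apple's actual algorithm, which is not public--in which the back-end score is a recency-weighted average of daily downloads. The charts users see stop updating during the freeze, but the score keeps moving underneath:

    # A toy model of the App Store freeze: the visible charts stop
    # updating, but the back-end score keeps moving. The recency-weighted
    # average below is invented; Apple's real algorithm is not public.

    def backend_rank_score(daily_downloads):
        """Pretend rank score: recent days count more than older ones."""
        weights = range(1, len(daily_downloads) + 1)  # day 1 is oldest
        return sum(w * d for w, d in zip(weights, daily_downloads)) / sum(weights)

    steady = [5000] * 10               # keeps marketing through the 5-day freeze
    paused = [5000] * 5 + [1500] * 5   # pauses marketing during the freeze

    print(backend_rank_score(steady))  # 5000.0: rank holds when charts unfreeze
    print(backend_rank_score(paused))  # ~2455: app plunges the moment they do

Under any model like this, coasting through the freeze means the cliff arrives exactly when the charts wake up--the worst possible moment.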

2. Do Not Forget About The iTunes Connect Shutdown

Forgetting about the App Store freeze? Good. Forgetting about the iTunes Connect shutdown? Bad. Very, very bad. Each year Apple shuts down iTunes Connect for five to six days over Christmas so Apple's engineers can have some quality time with their families. During that window, developers can't submit new apps or app updates, or make any pricing changes to existing apps.

This year the shutdown runs from Saturday, December 21, through Friday, December 27. That means you'll want to be sure you've submitted the latest stable release of your apps, polished the text of your app descriptions, and added the best-looking screenshots you can--all before iTunes Connect goes dark on December 21.

Last year, I knew a developer who had spent the fall working on a killer version 2.0 of his app. He forgot about the shutdown and was forced to wait almost a week to submit it, missing holiday sales entirely.

3. Treat Your App's Description As If You Were Handing In A Term Paper Whose Grade Would Decide If You Graduated Or Not

I know plenty of developers who use this period before the shutdown to really look at their app's listings. This is an excellent thing to do and one that developers should do more often.

So often, devs are so concerned with code that they forget about the clarity of their text and how their app descriptions actually look on the pages of the iTunes Store on the Mac and PC and in the App Store on the iPhone and iPad.

While many end users decide to buy an app or game based primarily on screenshots, the icon, and reviews, it's still important for developers to go over the prose of their listings with a fine-tooth comb. Content is, of course, important: Is the description precise? It should tell just enough. No one wants to read a book-length tome about what an app does. Two paragraphs is good. Three is probably the max.

But just as important as the content is the way your description looks to the eye. Keep things tidy and easy to navigate. Bulleted lists are fine, but be consistent. And while it's helpful to add pull quotes from favorable reviews on major tech sites, place them after the first paragraph of your app's description. People thinking about buying your app between courses of turkey and stuffing want to get right to the meat of what it does. They don't want to read through 30 pull quotes first.

4. Make Your App Festive

The holidays are the time of year when even the Scroogiest of us feel festive. Studies have shown that when you dress a product up in holiday-themed trappings--no matter if it's a cheeseburger or a lawnmower--people are more likely to buy it in the month of December.

For apps, this is very easy to do. Consider changing your app's icon for a period of time to give it a festive look. This is something game developers do very well. You'll frequently see holly-trimmed icons for games or even a menacing alien wearing a Santa hat. Does it look silly? Sure. But people are in the holiday spirit and doing something as simple as dressing up your app's icon shows you went the extra step. Holiday icons aren't only for games either. Have a productivity app? Consider replacing the pencil in its icon with a candy cane. A travel app? Spice the icon up with Santa's sleigh. You can do the same thing for your app's screenshots by adding a simple holiday-themed border around their edges.

These are small, temporary changes that don't affect the app's code at all. What they do is show the end user that you are a real, live person taking the time to put a human touch on your app to make their festive season a little brighter.

5. Be Active On Social Networks Like Twitter and Facebook

Just because iTunes Connect is shut down the week of Christmas doesn't mean you can't use other tools to push your apps. This is where an active presence on Facebook and Twitter comes in.

Over the holidays, many of us will spend quite a lot of time on our mobile devices after we've stuffed our faces and caught up with the family. We'll play games, browse the web, and check Twitter and Facebook--a lot.

If you can spare an hour or so each day over the holiday to bump up your social network presence, do so. Be active on Twitter, offering tips about your app and how to use it. Reply to people who take the time to tweet you asking for help or telling you about a problem. If they tweet something nice, retweet it and publicly thank them.

Besides Twitter, make sure you've got a Facebook page dedicated to your app already set up, and be active on it. On Facebook you have more room to engage with users because you aren't limited to 140-character messages. Post cool screenshots from your app or game. Dole out hints throughout the day. Reply personally to anyone who takes the time to write something on your timeline.

Engage with your users and they will tell others about your app--I guarantee it.

6. Reach Out To Technology Journalists

Technology journalists are always looking for cool apps to tell their readers about, and that doesn't change just because it's Christmas. As a developer, you should try to build and maintain good relationships with tech journalists throughout the year. If you've done this, don't be afraid to contact them over the holidays and tell them about something new you've done with your app (Did you add new levels to a game? Did you drop the price for the holidays?) or simply ask if they'd be interested in letting their readers know about it.

A tech journalist might not drop everything and write an article about your app over Christmas, but it doesn't take any time at all for one to tweet news of your app's holiday sale price to their followers.

Can Audio Journalism Compete In The Age Of Apps?

Jim Schachter thinks more people will tune in for high-quality audio journalism in 2014. Schachter, the VP of News at WNYC, understands that more people than ever are connected to listening devices, yet fewer are engaged in the kind of audio storytelling you find on radio. Why? Everybody's too busy playing Fruit Ninja.

NPR is hoping that a recent infusion of $17 million can help it revamp its digital platform and provide high-quality content independent of traditional radio broadcasts. Schachter says that will mean more work for him and his colleagues, producing news reports and talk-show segments as standalone pieces, because formats and workflows will need to change. It's all part of getting more stories into new types of apps, into Twitter streams, and even, algorithmically, into playlists people are already listening to.

What might that digital app future look like? NPR is trying to figure that out, currently in the middle of an app redesign--part of which will incorporate location-based news from affiliate stations. Until the new app is out, sometime in 2014, the new Swell app might hold at least a few answers.

Swell takes bits of radio, podcasts, and other audio sources to create a constant stream of storytelling tailored to your current preferences. Fully skippable and customizable, the stream of audio that Swell produces becomes smarter and, hopefully, relies less on the user over time and more on itself to deliver your news and content.

Print journalism is also interested in audio as a means of communication. Fast Company, for one, has experimented with SpokenLayer to translate text into the spoken word. It's still early, and hard to gauge whether people would rather listen to content they would otherwise read, or whether pure audio journalism just needs a makeover.

Schachter points out that there are still people who want hour-long talk shows, but a bigger audience wants something different. The driveway-moment era is coming to a close, but NPR hopes it can create new types of moments and give audio journalism a comeback.

This Apple Alum Wants To Tame Your Insane Photo Collection

The age of smartphone photography has ensured that virtually no event--from the special to the catastrophic--goes undocumented. But it has also created a problem: a huge, unwieldy mess of images scattered across numerous devices and interfaces. Today Tim Bucher, a former Apple executive who once headed the Mac OS team, is launching a company that aims to tackle home media management with a hardware device called LyveHome.

Lyve is an upcoming cloud-powered service that will let users index, sync, and organize all of their mobile photos and videos across devices. Together with an optional piece of storage hardware called the LyveHome, Lyve will ensure that the photos you take on your iPhone are readily available on your Nexus 7 or laptop, and that the whole massive lot of them is easily managed from a common interface.

"We're collecting way more data of our own lives than we've ever collected in history," says Bucher. "Two generations ago, a person might have taken 50 pictures in their lifetime. Last generation, it might have been a few hundred. For this generation, it's literally millions. And today, it's all up to each person to manage that digital content."

It's a problem with which just about any smartphone owner is familiar: We wind up snapping thousands of photos on our phone, which we may or may not get around to transferring to our computer. From there, some of us will eventually back it all up to an external drive, where those memories will languish in whatever oddball folder structure we came up with. Meanwhile, the number of photo-taking devices in our lives seems to slowly proliferate, compounding the organizational nightmare of it all even further.

How Lyve De-Fractures Your Photo Collection

When it launches to the public in April, the Lyve service will unify users' fractured photo collections by indexing the metadata for each image on any device on which the Lyve agent is installed. At launch, that will include any gadget running at least iOS 7, Android 4.1, Mac OS X 10.8, Windows 7, or Windows 8. On desktops and laptops, the index will include external hard drives as well. If there's enough capacity on a given Lyve-enabled device, Lyve will copy those images and videos over, keeping everything synchronized via the cloud. Think Apple's Photo Stream, but with support for video files and practically unlimited capacity.

The transfer of content from device to device will be smartly monitored by Lyve's algorithm, which will keep tabs on when and where new photos are taken, ensuring redundancy across your camera-equipped gadgets. If a photo taken on one device hasn't yet made its way over to the device you're holding, you'll be able to tap a thumbnail to load the image remotely. To prevent you from loading up your smartphone with vacation photos from 2009, Lyve will intelligently manage these transfers depending on the capacity available on each device. It also takes into account factors like remaining battery life, cost of bandwidth to the user, and even how likely the user is to lose the device.
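Lyve hasn't published its algorithm, but the factors Bucher describes suggest a scoring heuristic along these lines. In this hypothetical sketch, every field name, threshold, and weight is invented for illustration:

    # Hypothetical device-scoring heuristic for photo sync. All names
    # and weights are invented; Lyve's actual algorithm is not public.
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        free_bytes: int
        battery_pct: float     # 0-100
        metered_network: bool  # True if bandwidth costs the user money
        loss_risk: float       # 0.0 (desktop) to 1.0 (easily lost phone)

    def transfer_priority(device: Device, photo_bytes: int) -> float:
        """Higher score = better candidate to receive a copy right now."""
        if device.free_bytes < photo_bytes or device.battery_pct < 15:
            return 0.0  # skip devices that are full or nearly dead
        score = device.battery_pct / 100.0
        if device.metered_network:
            score *= 0.25  # defer transfers that would cost the user money
        return score * (1.0 - 0.5 * device.loss_risk)  # prefer safer devices

    devices = [
        Device("laptop", 500 * 10**9, 100.0, False, 0.1),
        Device("phone", 2 * 10**9, 40.0, True, 0.8),
    ]
    best = max(devices, key=lambda d: transfer_priority(d, 4_000_000))
    print(best.name)  # -> laptop

In a scheme like this, a full or dying device simply waits its turn, a metered connection defers the copy, and the photo stays retrievable on demand via the cloud in the meantime.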

In addition to the service, the company will unveil a device at CES next year that can serve as a central storage hub for your gargantuan collection of images. Sporting a five-inch touch screen, an HDMI port, and built-in Wi-Fi, the sleekly designed LyveHome will store up to 2 terabytes of images and video, which it can then serve up to other Lyve-enabled devices.

"The LyveHome is more like a server for your mobile devices," says Bucher. "So you can be out in the field taking photos on your iPhone and you don't have to worry about your device filling up. We create for an infinite camera roll for you."

The product's functionality raises a few obvious questions. For one thing, what about your semi-nude selfies? To keep personal images away from prying eyes, Bucher says Lyve will have granular privacy controls. Of course, that should only matter in scenarios in which the Lyve account is shared among multiple people. If you do misfire a seductive pouty-face mirror shot, Lyve will let you delete the image from all connected devices simultaneously (a feature that presents some risks of its own, selfies or no selfies).

So This Is What They've Been Building

The stealth startup, which was originally called Black Pearl Systems, first made headlines this summer when word got out that a former hardware exec from Apple was building a team of over 40 people from a variety of big-name tech companies. Indeed, to build Lyve, Bucher poached a rather impressive roster of talent from the likes of Apple, Netflix, Google, and Roku. "I wanted the experts in the world of Internet optimization on this team," Bucher says. "Essentially, the whole service team is from Netflix."

Not far from Lyve's Cupertino offices is the headquarters of the tech giant from which Bucher--and many of his new colleagues--came. All told, there are several former Apple employees working at Lyve, mostly on user experience. In addition to a gaggle of talent, Bucher borrowed from his former employer a certain ethos about product development.

"The core appreciation that I have from working with Steve Jobs is of vertical integration. The best user experiences are those that have as many of the pieces of that puzzle in place together. With Lyve, we wanted to create a truly end-to-end solution."

When asked how he managed to recruit so much top talent to work on Lyve, Bucher says that the conversations were "simple" because everyone he talked to agreed that the problem of fractured photo management needs to be solved and that his idea seems well-equipped to solve it. That's a convenient (although certainly not dishonest) answer for a startup founder trying to sell his new product. And while Bucher won't talk about the company's financing, suffice it to say that he has enough cash on hand to lure some talented people away from some big companies. As somebody who started working for Steve Jobs at NeXT, Bucher has been around Silicon Valley for a few decades, undoubtedly amassing quite a few important connections along the way. That can't hurt, either.

2014: Probably The End Of The Facebook Era

Facebook is losing its edge. The biggest name in social media has gone from scrappy upstart to tech establishment, and at this point, some sort of decline is inevitable. What's most surprising, though, is how little the company seems to be doing about it.

In a recent blog post, product designer Chrys Bader argues that Facebook's nature as a social network demands that it be looked at as a social movement with four distinct phases: Emergence, Coalescence, Bureaucratization, and Decline. It's not unlike the classic industry life cycle, but with one key distinction--a social movement seeks to be established in a culture. Once that happens, the movement becomes mainstream, and ceases to exist.

Bader writes:

What we're seeing is a fundamental shift in the perception of what Facebook means to society. It has become institutionalized. It's become the town square of the world. But that's not where the kids hang out.

And he's right. Facebook CFO David Ebersman confirmed as much back in November: Facebook is losing teens worldwide. They're going elsewhere, to services like WeChat and Vine. Millennials, too. A recent Mashable story calls Facebook "the cigarette of 2013, the 'bad habit' many are trying to quit." Among the reasons? The signal-to-noise ratio is terrible, and everyone is on it--employers and parents as well as friends.

That last bit is relevant, but also obvious in 2013. Facebook is huge. Too huge, argues Jay Yarow for Business Insider Australia. It's becoming a social network singularity, one whose core product is a News Feed that works in ways that are hardly ideal. Despite the social network's very clear desire to have users share more and more--dubbed Zuckerberg's Law by The New York Times in 2008--the News Feed can only handle so much while remaining effective. You could get engaged, writes Yarow, and thanks to the News Feed's algorithms, half of your friends might miss it.

"Facebook is now trying to cram so much 'sharing' through a single service that it is overwhelming many of its core users. Meanwhile, companies like Snapchat, What'sApp, WeChat, Line, Twitter, and Instagram (which Facebook owns), are now cleaving off types of user-sharing that Facebook would like to have owned."

Losing young users isn't the company's only problem. Facebook also shows a puzzling indifference toward expanding into parts of the world that aren't already well connected. As David Talbot of the MIT Technology Review pointed out on Tuesday, although the company is a partner in Internet.org--an organization dedicated to connecting the two-thirds of the world's population that doesn't have Internet access--it has done very little to actually connect anyone who isn't already online.

Does all this mean that Facebook will soon fade away into nothingness? No, probably not. Facebook as a social network will probably endure. But the Facebook Era of social media may be approaching its end. Unless, of course, auto-playing video ads cause everyone to come back in droves.
