Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=240160364
Are wearable computing devices the new big security threat?
That's one question lingering after Lookout Security last month detailed an insidious hack attack against Google Glass: Just by getting Glass to "see" a malicious QR code, an attacker could force the device to join an attacker-controlled Wi-Fi network or Bluetooth connection, then eavesdrop on all communications. Admittedly, the attack wouldn't have triggered a countdown to global doom, but it did highlight the automated, promiscuous network-connecting habits of mobile devices, Glass included.
Therein lies a problem with wearable computing devices: They lack either physical or virtual keyboards, and thus require a relatively greater degree of automation than your average Android device or iPhone. With that automation, however, comes the risk that the device may automatically do something bad, from either an information security or privacy perspective.
[ Could a kill switch help? The Trouble With Smartphone Kill Switches. ]
In some respects, this is a good problem for the wearable computing field to have. For years, it was hobbled by awkward input mechanisms -- corded keyboards, joysticks, trackballs. But in this age of small, high-speed processors, voice recognition and relatively ubiquitous Internet connectivity, the release of Google Glass means people can, quite literally, tell their glasses what to do.
Unfortunately, as the Glass QR vulnerability -- patched by Google in June -- illustrates, wearable computing still faces some tricky security and privacy questions. Furthermore, useful solutions to these problems may not yet be on hand.
One problem is user authentication. For starters, unlike a smartphone, Google Glass doesn't offer access restrictions based on passwords or a PIN. That means a thief could easily access any Google account tied to a stolen device, warns InformationWeek columnist Jerry Irvine, who's a member of the National Cyber Security Task Force. Cue the need for restricting what these "bring your own device" (BYOD) gadgets can do, and when. "If an organization doesn't have a BYOD strategy, the emergence of Glass can be a compelling argument to get one in place," said Irvine, who's also the CIO of Prescient Solutions.
Security managers will have many more options when such devices are rolled out by the IT department and restricted to specific environments. For example, Duncan Stewart, a research director at Deloitte, told the BBC that wearable computers could be especially useful for workers in environments that don't currently allow for smartphone use. "Someone driving a forklift in a warehouse can't use a PC or smartphone because they will crash into someone," Stewart said. "But imagine if they can drive around and be able to pinpoint a pallet and then the particular box they need on that pallet."
There are numerous security risks that could be blocked outright in that scenario. "There's a difference between a general use computer and a specialty use computer," Bob Rosenberg, CTO of startup facilities management service BluQRux, said in a phone interview. The latter, notably, can be heavily locked down -- for example, allowing only a whitelist of approved apps to be installed, and blocking access to any website not on a preapproved list.
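A default-deny policy of this kind is straightforward in principle. Here's a minimal sketch in Python -- the host names are hypothetical, and a real mobile device management product would enforce the list at the network or OS layer rather than in application code:

```python
from urllib.parse import urlparse

# Hypothetical preapproved list for a locked-down, special-purpose device.
# Anything not on the list is blocked -- a default-deny posture.
APPROVED_HOSTS = {"intranet.example.com", "inventory.example.com"}

def site_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the preapproved list."""
    host = urlparse(url).hostname
    return host in APPROVED_HOSTS
```

The point of default-deny is that a brand-new malicious site is blocked automatically, with no need to have seen it before.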
That could eliminate the threat of users being tricked into going to malicious sites, which is a risk facing users of any computing device. "Social engineering will generally be the best way to convince people to give you passwords and money, and there's only so much technology you can put in to stop that," said Rosenberg. Then again, if attackers did begin targeting Glass users en masse with malicious QR codes, it's likely that security firms would advance new types of defenses. "If this starts being an issue, you'd start seeing blacklists in the QR readers themselves," he said.
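One way a QR reader could harden itself, in the spirit Rosenberg describes, is to classify what a decoded payload would actually do before acting on it -- the Lookout attack worked precisely because a Wi-Fi configuration payload triggered an automatic action. This sketch is illustrative, not any vendor's implementation; the prefixes shown ("WIFI:", "MECARD:", "BEGIN:VCARD") are common QR payload conventions:

```python
def classify_qr_payload(payload: str) -> str:
    """Classify a decoded QR payload so risky categories can require
    explicit user confirmation instead of triggering automatically."""
    if payload.startswith("WIFI:"):
        return "wifi-config"   # could silently join a hostile network
    if payload.startswith(("http://", "https://")):
        return "url"           # should be previewed, not auto-browsed
    if payload.upper().startswith(("MECARD:", "BEGIN:VCARD")):
        return "contact"
    return "unknown"
```

A reader built this way could, for instance, refuse to apply a "wifi-config" payload without a confirmation prompt, and check "url" payloads against a blacklist before offering to browse.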
When it comes to the ongoing challenge posed by QR codes -- attackers may link one to multiple redirects before ending at a malicious site -- user interface changes could help better secure users. On this front, Rosenberg lauds the Windows Phone 7 interface, which offers built-in QR code scanning -- including of multiple codes at once -- then provides information related to each. "It puts a box around the QR code and shows where it goes," said Rosenberg, who earned a PhD in wearable computing in 1998 and has worked as a mobile user experience designer at Symbian and Nokia. "So if you've got six QR codes it will put six boxes and six explanations of where they go." That means a user, even in a hands-free environment, will be better informed about whether they should browse to the URL on offer.
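Showing "where it goes" means walking the redirect chain before the user commits. A rough sketch of that logic, with the HTTP call abstracted behind an injected `fetch_location` callable (a hypothetical helper, so the chain-walking logic stays separate from networking):

```python
from urllib.parse import urljoin

def resolve_redirect_chain(url, fetch_location, max_hops=10):
    """Follow a URL's redirect chain and return every hop visited,
    final destination last, so a UI can display the full path.

    fetch_location(url) should return the Location header of a 3xx
    response, or None once the URL no longer redirects. max_hops
    guards against redirect loops."""
    chain = [url]
    for _ in range(max_hops):
        location = fetch_location(chain[-1])
        if location is None:
            break
        # Location may be relative; resolve it against the current URL.
        chain.append(urljoin(chain[-1], location))
    return chain
```

A scanner UI could then render each hop next to the boxed QR code, flagging chains that hop through link shorteners or tracking domains.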
As that suggests, many of the security problems dogging wearable computers could be fixed with user interface improvements, and by bringing BYOD policies to bear. But voice-activated wearable computing devices remain at risk from eavesdropping. "Some things are okay, such as 'yes,' 'no,' 'do that,'" Rosenberg said. But too many of those voice inputs also raise the question of inappropriate social behavior, with people "bothered by you constantly piping up with random things."
On the upside, information displayed by Google Glass to a user is quite secure, unlike -- for example -- the screen of that government employee sitting in the airplane row ahead of you with the font size on his BlackBerry cranked up, and the display inadvertently angled into your field of vision.
But there's a remaining, fundamental problem posed by wearable computers such as Google Glass, which automatically offload much of their processing to the cloud. "If it's recognizing the face of everyone you see, that's being uploaded, because the device isn't doing that locally," said Rosenberg. "So there are huge privacy issues."
Indeed, what's to stop the National Security Agency from automatically recording the identity of everyone that a Google Glass user sees? As always with wearable computing, automation and convenience come with at least some security and privacy tradeoffs.