The short answer is yes.
I’m sure many of you have seen people with stickers over their webcams and wondered why — probably writing that person off as paranoid. But it’s well known in tech circles that a camera in a computer or smartphone can be turned on remotely by an attacker with the resources, time and motivation.
Security is hard, and our defenses are weak. An adversary’s ability to attack your devices doesn’t necessarily hinge on which computer or phone you chose to buy. Nor is it likely to matter that you think nobody is interested in you. Put another way, their reach is limited only by the tools at their disposal, their motivations and, in some cases, the law.
The computer-based foundations of our modern societies are fragile. As recent evidence of this, we have seen vulnerabilities such as Heartbleed, with the potential to affect the vast majority of web servers in the world, no matter how up to date and highly secured their operators try to keep them.
There have been attacks against Bluetooth which can compromise virtually every device running Windows, Linux or Android in under 10 seconds, and attacks against Wi-Fi which can compromise over a billion smartphones. We live under the constant threat of phishing attacks and malware, which hold us and our data hostage. It is a fact of life that, at any given time, there will be some vulnerability in the technologies we use.
The Internet of Things
The rapid spread of personal devices in recent years (2.6 billion smartphones in 2015, due to rise to 6.1 billion by 2020) means the opportunity for exploiting vulnerabilities has also grown dramatically. Many of these phones are running out-of-date operating systems and apps.
Sometimes the updates aren’t available to us because manufacturers refuse to maintain a product over time, forcing people either to keep using insecure devices or to pay again for the latest, incredibly expensive device. This problem disproportionately affects those of us who cannot afford to spend huge amounts of money on our technology regularly: if you’re getting a cheap mobile phone contract with a “free” phone, the likelihood is that it’s old, outdated and unsupported stock that your carrier is trying to get rid of.
Even “new” technologies are vulnerable. We are installing what are effectively internet-connected “hot” (in other words, always-on) microphones and surveillance cameras in our homes — in our TVs, our “personal assistants” and our game consoles.
Industry is looking to connect everything from lightbulbs, kettles and duvets to showers and cars to the internet in one way or another, each running various sets of open and proprietary software, each designed not just to connect to the others through the path of least resistance but also to advertise its existence. We are seeing little commitment to security in these “Internet of Things” products.
Who’s Zoomin’ Who
“No one is interested in me!”
You ARE of interest to someone. One thing we know is that there is a thriving black market both in easy-to-use tools that attack systems to gain access to their cameras, and in pictures taken by those tools, entirely unbeknownst to the people in them. Voyeur photos taken from webcams are everywhere, with whole websites dedicated to them. The tools themselves go for around US$40 (or are even free!). Access to a woman’s webcam commands 100 times the price of access to a man’s, but it may surprise you that the market for these webcams is so saturated that we’re talking about $1 versus $0.01.
We know that programs such as Optic Nerve from GCHQ (the UK’s electronic surveillance agency) collected video chats from millions of unsuspecting Yahoo! users around the world; three to 11 percent of the images captured were sexually explicit in nature, with seven percent containing “undesirable nudity.”
So can a government agency literally switch on any webcam it chooses, without your knowledge? This question really breaks down into three parts: does the agency have the capability to do so, does it have the time, and does it have the motivation?
The answer to all of these, with increasing frequency, is yes. It was Privacy International’s case against GCHQ that led the government to avow in 2015 that they had hacking capabilities, including powers to conduct real-time surveillance, such as remotely switching on webcams. The Investigatory Powers Act, passed last year, entrenched and expanded the powers of British public bodies to hack for surveillance purposes.
With well-resourced intelligence agencies, we can never truly know what tools they have in their arsenals. We do know that they stockpile vulnerabilities, as well as the tools to exploit them. In many cases, these vulnerabilities have remained effective for decades, and have only been patched when they were ultimately stolen and released to the world. We also know that the tools to do this are easy enough to build, and we’ve tracked how poorer governments are also seeking these powers.
Assuming intelligence agencies have the tools (of which knowledge of vulnerabilities is one) and the motivation to use them, all it takes is access to the relevant network — internal or internet — to provide them with a wealth of devices to play with. And as we’ve seen with the attacks through Bluetooth or Wi-Fi, even if it can’t be done over the internet, an attacker need only be physically close to the person targeted to infect their device.
If all that’s needed for wide-scale hacking of webcams is the vulnerabilities, the tools to exploit them, the motivation and skills to use them, and the law on your side, well…a well-resourced intelligence agency like GCHQ has all of these in spades. Domestic law enforcement agencies have some elements, too. And this is why we’re challenging this power.
We at Privacy International question whether hacking can ever be a lawful form of surveillance; it certainly cannot be under a single untargeted warrant that can affect thousands of people across the globe with one stroke of a pen.
This story first appeared on Privacy International’s Medium page and is cross-posted here with permission.
Ed Geraghty is a technologist at Privacy International responsible for technical security and research, and leads the development of its security framework and tech engagement. Prior to joining Privacy International, he worked in various industries, and has been a privacy activist since 2009.