flwyd: (transparent ribbon for government accoun)
The New York Times published a set of articles this week about U.S. military efforts to hack North Korean weapon systems to stymie the regime's goal of demonstrating an ICBM which could deliver a nuclear bomb to a city in the United States. The attempts so far seem to have been fairly successful, causing failures in most launch attempts of the current generation of North Korean rockets.

While the political and military tactics in this particular case and how the North Koreans may react or retaliate are fascinating, I'd like to take a step back and look at the bigger picture. The U.S. military is using "cyberweapons" and committing what one might term "cyberwarriors" in an offensive operation attacking infrastructure developed by a foreign state. This is also not a new policy: ample evidence indicates that U.S. and Israeli intelligence agencies built and deployed the Stuxnet worm to cause physical damage to nuclear centrifuges at Natanz in Iran, setting the Iranian government's nuclear ambitions back by a few years (and driving them batty in the process). The Stuxnet attack was authorized by George W. Bush and reauthorized by Barack Obama. The North Korean attacks were authorized by the Obama administration (probably the president himself) and are now under the direction of the Trump administration.

In many ways a "cyberweapon" is just a collection of software vulnerabilities packaged in a way that helps the attacker obtain information from or disrupt the operations of a particular network or computer system run by a target. A cyberwarrior and a common hacker use, by and large, the same tools–they differ primarily in motives and focus. The same vulnerability in, say, Microsoft Windows can be exploited to steal bank account credentials or to further infect an industrial control system and disrupt a nuclear fuel operation. The Stuxnet attack on Natanz and the current North Korean attacks have had significant physical-world effects: large and dangerous objects were destroyed. Criminal and ideological hackers have thus far focused on digital payloads–stealing data, transferring money–but there's nothing preventing non-government attackers from creating havoc in the physical world, from disabling the power grid to releasing a flood of water from a dam to potentially triggering a nuclear meltdown.

Unlike the military hardware of the last century, governments, businesses, and individuals around the world generally use the same software and hardware, built by a small group of companies with worldwide scope. Microsoft Windows on laptops, Android on smartphones, and the firmware in Cisco routers are used by people in every country. By and large, there's no "Iranian software," "North Korean software," or "American software"–with the Internet, it's all "world software." Every security bug that a military or intelligence agency discovers but doesn't disclose is one that goes unpatched by the vendor and could be exploited by another government or crime gang, harming the citizens, businesses, and government agencies that the military and spy agencies are tasked with protecting. This presents a particular problem for an organization like the NSA, which has an explicit commitment both to defend American computer and communication networks and to intercept foreign communications of interest.

Physical war and military operations are, in large part, studied and debated in the open. The public and our elected representatives don't know the details of all military plans and technologies, but we're broadly aware of where military force is being used offensively and defensively and what sorts of tools and tactics we're comfortable with the military using. The country is able to have a public debate about topics like whether we should invade a country, how much money we're willing to spend to achieve military objectives, and whether we consider torture or cluster bombs to be acceptable tools of war making. (The pro-war advocates are often able to start their framing of the issues well before the public starts to engage in the debate, so we tend not to have a very effective public debate, but the opportunity is still there.)

"Cyberwar" and offensive security compromises by the government, by contrast, have been conducted under a thick veil of government secrecy. The people of the United States and our representatives in Congress have not had an opportunity to have a public debate about whether we're comfortable with the military, without declaring war, instigating digital battles with sovereign states. We haven't had a proper conversation about which sorts of cyberattacks we're comfortable with and which are beyond the pale. We haven't effectively discussed the balance between concealing and weaponizing computer vulnerabilities for attack and defending our country's digital infrastructure by helping companies patch software when intelligence agencies find bugs. And we really haven't come to grips with what we'll do when a "cyberweapon" targeted at a narrow military objective is too effective and causes unintended damage elsewhere on the Internet, including to American assets. We also haven't taken an opportunity to establish treaties governing digital offensive activities, so other nations may take a cue from U.S. operations, decide that hacking is a legitimate tactic, and invest in their own "cyberarmies" which attack and hurt American computer networks.

There are reasonable arguments both for and against offensive military hacking. Proponents can point out that a well-executed cyberattack has the potential to achieve its objective more cheaply and with far less loss of life than a conventional military operation. Opponents can point out that a hacking operation that gets out of hand–as the Stuxnet worm did, infecting millions of computers around the world–is a lot more work to halt than just calling off the bombers and withdrawing the troops. But so far these debates have been held quietly behind closed government doors and in memoranda stamped Top Secret. The public doesn't know what conclusions were reached, and our interests were not well represented. Once Congress is done with its current project of attacking affordable health care and defunding public agencies, it would be great if it could demand that military, intelligence, and executive agencies include the American people in the conversation about the rules of engagement for offensive hacking. I'm not holding my breath, though.

Anyone interested in this topic should watch the fantastic documentary Zer0 Days. It features a detailed technical explanation of how the worm works from the security researchers who studied it (and impressive cinematic techniques to make this explanation engaging), insights from anonymous NSA employees, and some remarkably frank on-the-record interviews with government officials. I'd read the initial reporting on Stuxnet in 2010, but the movie exposed several fascinating facets of the story of which I was unaware. The interviews with government-associated figures help shape the views I expressed above, and some of the officials seem to share my position.

Bruce Schneier often writes well about this subject, too.
flwyd: (transparent ribbon for government accoun)
Obama's speech about surveillance last week featured the following paragraph which gets modern cybersecurity totally backwards:
We cannot prevent terrorist attacks or cyberthreats without some capability to penetrate digital communications, whether it's to unravel a terrorist plot, to intercept malware that targets a stock exchange, to make sure air traffic control systems are not compromised or to ensure that hackers do not empty your bank accounts. We are expected to protect the American people; that requires us to have capabilities in this field.
This train of thought made sense during the Cold War. Communication systems built and used in the Soviet Union were different from those built in the U.S., so the NSA could simultaneously find and keep secret the flaws in a Russian phone system while ensuring that security flaws in American phone systems got fixed.

On the Internet, that game doesn't work anymore. Tech companies, open source groups, and standards bodies sell and distribute hardware, software, and protocols globally. Countries and companies throughout the world use the same routers, the same operating systems, and the same secure communications protocols. Every undisclosed security hole and every undetected backdoor that the NSA has at its disposal to "penetrate digital communications" is a tool that attackers have to harm the targets Obama claims the NSA is trying to protect. The stock exchanges and air traffic control systems and banks are using the same networking gear, the same database software, the same VPNs, and the same web browsers as the plotting terrorists, hacking criminals, and enemy governments.

Even if the NSA only uses their powers for good, the more "capabilities in [the digital spying] field" they have the less safe American interests are from foreign spies, criminals, and terrorists. The nation will be more secure if our communications technologies are robustly secure than if we can listen in on all the world's chatter. And by making American communications more secure, the world's communications will be more secure.
flwyd: (mail.app)
According to additional details from Snowden leaks published by The Guardian, GCHQ, the UK's counterpart to the NSA, is wiretapping all or most transatlantic cables which terminate in Britain, i.e., most traffic between Europe and the U.S.

In a sense, this sort of traffic interception is well-known in Internet security, though the scale is new. Internet traffic often travels over untrusted links, from coffee shop WiFi to backbones owned by hostile governments. Good network security design doesn't try to ensure that every step your packet takes is secure. Instead, it focuses on end-to-end security of the data, such as encrypting the transmission and requiring authentication to access hosts. Intercepts can still learn what nodes are communicating (metadata like "you went to a Google web page"), but not the content of the transmission (like the budget spreadsheet you're editing).
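The metadata-versus-content distinction above can be sketched in a few lines of Python. This is a toy model, not a real protocol (real end-to-end security uses TLS, not a one-time pad), and all the names and values are hypothetical; it just shows that a passive wiretap on an intermediate link sees who is talking to whom, but not what they say.

```python
import secrets

def xor_pad(key: bytes, data: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a key byte. For illustration only;
    # real transport security uses TLS or similar, not this.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"budget spreadsheet contents"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

# What a passive intercept on an untrusted link can observe:
packet = {
    "src": "alice.example.com",       # metadata: visible to the wiretap
    "dst": "docs.example.com",        # metadata: visible to the wiretap
    "payload": xor_pad(key, message), # content: opaque without the key
}

assert packet["payload"] != message            # the intercept can't read the content
assert xor_pad(key, packet["payload"]) == message  # the endpoints can
```

The asymmetry in the last two lines is the whole point: securing the content end-to-end doesn't hide the endpoints, which is why metadata collection remains possible even against well-encrypted traffic.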

Given news and leaks about spy programs in the last several years, we should assume that any internet traffic is monitored. Use https (the secure web protocol) whenever possible, and complain to websites that don't support https. Assume that a government spy agency can intercept any email you send, though emails with sender and recipient on the same system (e.g. gmail to gmail) may be safe. Unfortunately, email encryption like GPG isn't easy to use for most people. For secure communication, consider using an authenticated online document editor from a company you trust, like Google Docs or Office 365. Share the document with a generic title (like "Conversation with Bob, 2013-06-22") and type your message. I believe this approach is more robust to intercept-style snooping than email or phone conversations. However, a saved document (like an email) can be subpoenaed in an investigation or court case and can be read by anyone who gets your account credentials, like a hacker or a spy agency that installed a keylogger on your computer.

The first filter immediately rejects high-volume, low-value traffic, such as peer-to-peer downloads, which reduces the volume by about 30%. Others pull out packets of information relating to "selectors" – search terms including subjects, phone numbers and email addresses of interest. Some 40,000 of these were chosen by GCHQ and 31,000 by the NSA. Most of the information extracted is "content", such as recordings of phone calls or the substance of email messages. The rest is metadata.
GCHQ taps fibre-optic cables for secret access to world's communications, The Guardian, 2013-06-21
flwyd: (transparent ribbon for government accoun)
As you may have noticed, people on the Internet are upset about new TSA security measures. Broadly, there seem to be three objections:
1: "I don't want to be exposed to radiation."
2: "I don't want government employees to see me naked."
3: "I don't want government employees to touch me."

In general, these are all valid concerns, but to me the current volume and hyperbole seem overblown. I have yet to fly with them in place, though, so I don't want to make any firm claims. However, in preparation for my trip to New York City on Monday, I found the TSA's Advanced Imaging Technology FAQ:
Q. What has TSA done to protect my privacy?
A. TSA has implemented strict measures to protect passenger privacy, which is ensured through the anonymity of the image. A remotely located officer views the image and does not see the passenger, and the officer assisting the passenger cannot view the image. The image cannot be stored, transmitted or printed, and is deleted immediately once viewed. Additionally, there is a privacy algorithm applied to blur the image.
So objection #2 is pretty silly: the person who "sees you naked" doesn't also get to see the fully-clothed, face-intact you. So arguably they'll be looking at naked pictures, but they'll have no way to know it's you. Even if the images aren't deleted, there's no record of who went through which security line when, so it's just an anonymous human body. And after several hours a day of looking at "naked" images, these screeners are not going to be in any way aroused by the fuzzy monochrome body parts of American travelers. There are far higher-quality naked pictures of more attractive people doing sexually suggestive things available for free on the Internet.

Another nugget from the FAQ in regards to concern #1:
Backscatter technology projects an ionizing X-ray beam over the body surface at high speed. The reflection, or “backscatter,” of the beam is detected, digitized and displayed on a monitor. Each full body scan produces less than 10 microREM of emission, the equivalent to the exposure each person receives in about 2 minutes of airplane flight at altitude. …
Millimeter wave technology bounces harmless electromagnetic waves off of the human body to create a black and white image. It is safe, and the energy emitted by millimeter wave technology is thousands of times less than what is permitted for a cell phone.
So yes, you receive some harmful X-ray radiation while being scanned. But it's orders of magnitude less than the radiation you receive by actually flying on the plane you're about to board. Radiation exposure is a valid concern and you wouldn't want to walk through one of these several times a day, but avoiding the scan before you get on a plane is like refusing a breath mint after Thanksgiving dinner because you're worried about its calories.
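The "orders of magnitude" claim can be checked with the FAQ's own numbers: under 10 microrem per scan, with one scan equivalent to about 2 minutes of flight at altitude. The four-hour flight length below is my own assumption for a typical cross-country trip.

```python
scan_dose_urem = 10.0  # per the TSA FAQ: less than 10 microrem per backscatter scan
flight_rate_urem_per_min = scan_dose_urem / 2.0  # FAQ: one scan ~ 2 minutes aloft

flight_minutes = 4 * 60  # assumed cross-country flight duration
flight_dose_urem = flight_minutes * flight_rate_urem_per_min

# How many scans' worth of radiation the flight itself delivers:
print(flight_dose_urem / scan_dose_urem)  # → 120.0
```

By this arithmetic the flight exposes you to roughly 120 times the scanner's dose, which is the breath-mint-after-Thanksgiving-dinner comparison in numbers.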

The third objection is a touchy subject. [livejournal.com profile] elusis has pointed out that women, minorities, and transgender people have been uncomfortable with airport pat-downs for years, but it's a big deal now because suddenly able-bodied cisgender white men are the ones complaining about the government touching their dicks. I can sympathize with folks who have an adverse reaction to being touched, but I wonder what they do when they're sitting in a window seat and need to go to the bathroom, surfing over the laps of the two people in their row and sliding past the flight attendants. And it's not like pat-downs are a new thing; the TSA is just doing a more thorough job.

I'm not trying to be a TSA apologist, I'm just trying to keep things in proportion. The whole airport ritual is absurdist security theater worthy of ridicule by Franz Kafka. That they could say "I'm sorry sir, but that's too much toothpaste" is an illustration that it's a human computer with a rather inelegant program. They've got Eric Schmidt's vision backwards. He says "Computers will clearly handle the things we aren’t good at, and we will handle the things computers clearly aren’t good at." But the TSA has humans implementing strict sets of rules (which computers are great at) and not making judgement calls about social situations (which computers are bad at).

I hope this episode will generate enough momentum to change the American approach to airport screening so that it's both more efficient and more secure. But it feels more like a hangover from all the tea partying, which quickly went from "Giving billions of dollars to major banks is unjust!" to "Let's bring a Republican majority back to Washington!"

DIA has over twice as many metal detectors as imaging scanners, so it should be possible to pick which screening technology you get. I might ask for a grope, just to see how intimate it really is.
flwyd: (bad decision dinosaur)
This week, Adobe announced a critical vulnerability in Acrobat Reader. This isn't the only critical bug discovered in Acrobat in the past year; hackers seem to have devoted significant attention to it lately. I recommend using a non-Adobe PDF reader like Preview on MacOS, evince on *nix, and Google Docs on the web. And be suspicious of PDF attachments you weren't expecting.
flwyd: (bad decision dinosaur)
I just read Bruce Schneier's post about the Boston Police wanting semi-automatic assault rifles. Police in Guatemala and Honduras carry semi-automatics (not to mention security guards at banks, department stores, and McDonalds with shotguns), and it doesn't really make me feel safe. Sure, it might deter armed robbers from attacking tourists, but I'm more worried about what they do while nothing exciting is happening. I feel rather uncomfortable when I'm sitting next to a police officer eating lunch with an AK-47 across his lap. Molly chewed out a guard sitting on the steps outside a bank, twiddling with his rifle's safety. We shook our heads in disbelief as a uniformed army soldier used his assault rifle as a walking stick. I get nervous when the guy guarding an appliance store doesn't take his finger off the trigger. I'm sure the Boston Police have better gun safety training than Guatemalan security guards, but I don't think the plan improves public safety.

But hey, at least the Guatemalans are using the presents we sent them in the '80s...
flwyd: (octagonal door and path)
The Internet is full of good blogs and you've got plenty to read already I'm sure, but you should consider adding [livejournal.com profile] bruce_schneier to your regular reads. An expert in security (particularly in computing), his posts are regularly well-written pieces that point out subtleties in the ways we (society) think about security. His is an important voice in the face of "security theater" performed by governments and others. Plus, on Fridays he blogs about squid. Even if you don't decide to read him regularly, make sure to read today's post on The Feeling and Reality of Security.

Also make sure to check out the Bruce Schneier Facts. He could beat Chuck Norris!

New Public Key

Friday, October 19th, 2007 11:41 pm
flwyd: (daemon tux hexley)
I set up a GPG public key in February, but realized tonight that since I haven't used it in eight months I have no idea what the passphrase is. My mind went back to some ideas I came up with at the time, but none were winners. Since nobody's ever sent me any encrypted mail, I threw the old key away and made a new one. Feel free to send me any email, trivial or super-secret, using this key.

Congress and the White House want to give retroactive immunity to telecom companies which broke the law and divulged your personal communications. If all mail you send is encrypted, the NSA can't assume that a message is important just because it's encrypted.

My public key is also on my website: http://trevorstone.org/publickey.html
If you ever note a discrepancy in public keys or want to be sure you have the right key, contact me personally, verify my identity, and get my digital fingerprint.
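Comparing fingerprints by eye is error-prone, since GPG prints them in spaced groups of hex digits. Here's a minimal sketch of a careful comparison in Python; the fingerprint values are made up for illustration, and `hmac.compare_digest` is overkill for a manual check but is the idiomatic way to compare secrets without timing leaks.

```python
import hmac

def normalize(fpr: str) -> str:
    # GPG displays fingerprints like "1234 5678 9ABC ..."; strip the
    # grouping whitespace and normalize case before comparing.
    return fpr.replace(" ", "").upper()

def fingerprints_match(expected: str, presented: str) -> bool:
    # compare_digest returns False on any mismatch, including length.
    return hmac.compare_digest(normalize(expected), normalize(presented))

# Hypothetical fingerprints, one spaced-and-uppercase, one compact-and-lowercase:
trusted = "1234 5678 9ABC DEF0 1234  5678 9ABC DEF0 1234 5678"
offered = "123456789abcdef0123456789abcdef012345678"

assert fingerprints_match(trusted, offered)
assert not fingerprints_match(trusted, "0000 0000 0000 0000 0000")
```

The normalization step matters more than the constant-time comparison here: two copies of the same key will often be displayed with different spacing and case.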


Nominally Funny

Friday, February 16th, 2007 09:54 am
flwyd: (daemon tux hexley)
Ubuntu's security update this morning includes a fix to ImageMagick with the comment
SECURITY UPDATE: overflow in PALM reading.
Magick indeed.
flwyd: (spiral staircase to heaven)
Due in part to excessive personal hygiene, [livejournal.com profile] tamheals was running quite late for her flight to Orlando this morning. When we hit traffic on I-70, I informed her that she would not arrive at DIA in time to check her luggage. After a rough decision about high-heeled shoes, I consolidated her luggage down to one carry-on bag and one personal item, the latter containing her toiletries bag with essential oil bottles, a tube of toothpaste, and small bottles of shampoo.

Speeding on Peña Blvd. didn't make up enough time, so she realized in the security line that she wouldn't catch her flight. When she got to the checkpoint, they informed her that her toothpaste and shampoo exceeded the allowed 3 oz maximum.

The purported purpose of draconian measures at airport security checkpoints is to make sure that flights are safe because nobody aboard the plane is carrying anything dangerous. However, the rule is not "Passengers may not bring dangerous items on board." Instead, there is a long list of specific items which are not allowed. I can understand not letting passengers take an ice pick on board (clearly dangerous). But they didn't say "Sorry, ma'am, you're not allowed to bring shampoo and toothpaste on the plane." They said "Sorry, ma'am, you're not allowed to bring this much shampoo and toothpaste on the plane." (I believe she even left her 2.5oz of personal lubricant at home.)

There are four possible conclusions to draw from this:

  • There is a way to hijack an airplane with five ounces of shampoo and five ounces of toothpaste (and a dash of eye medicine and personal lubricant) which is not possible with three ounces of shampoo and three ounces of toothpaste.
  • The TSA has no idea what can blow up an airplane, but some Muslims in England came up with a (well-shampooed) hair-brained idea involving unknown liquids, so the TSA is throwing chemistry to the wind and assuming any liquid in sufficient quantities can blow up an airplane.
  • The TSA is well aware that you can't blow up a plane with shampoo, but wants to hassle you anyway for political purposes.
  • The TSA doesn't trust its employees to make decisions based on the gestalt of a traveller's luggage, so they devise a spaghetti code security policy in the hopes that any terrorists get entangled in mindless rule enforcement.
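The "spaghetti code security policy" in the last conclusion can be made literal. Here's a hypothetical sketch of the checkpoint as a rigid rule interpreter: the limit and the items are made up, and the point is that the rule fires on volume alone, with no judgment about whether anything is actually dangerous.

```python
MAX_LIQUID_OZ = 3.0  # the per-container limit in force at the time

def screen(item: dict) -> str:
    # The rule list knows nothing about danger, only about categories
    # and quantities. A human screener executes it without judgment.
    if item.get("liquid") and item["ounces"] > MAX_LIQUID_OZ:
        return "confiscate"  # fires for toothpaste and nitroglycerin alike
    return "allow"

assert screen({"name": "toothpaste", "liquid": True, "ounces": 5.0}) == "confiscate"
assert screen({"name": "toothpaste", "liquid": True, "ounces": 3.0}) == "allow"
# An item outside this rule's category sails through this particular check:
assert screen({"name": "knitting needles", "liquid": False}) == "allow"
```

A screener applying judgment would invert this: ask what the traveler's luggage adds up to, rather than pattern-match each item against a list.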

flwyd: (glowing grad macky auditorium)
As many people suspected, Princeton has shown that it's really easy to hack a Diebold voting machine. Like many corporations, Diebold prefers to deal with technical problems (faulty design) with non-technical tools. One tool is ignoring the problem. Another tool is attacking critics. A third tool is keeping the design secret so that nobody knows how bad the system is. An example of the first two tools follows.

"[Our critics are] throwing out a 'what if' that's premised on a basis of an evil, nefarious person breaking the law," Bear told Newsweek after the March Emery County study. "For there to be a problem here," he further explained to the New York Times, "you're basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software … I don't believe these evil elections people exist." -- Hack the Vote? No Problem on Salon.com

A fundamental technique of secure system design involves playing a malicious party. At each point of entry in the system, this party gets to ask, "What if this happens here?" I'll bet companies that make slot machines don't answer that question with "Surely nobody is so evil as to do that!" That's right: your gambling rights are better protected and enforced than your voting rights.

Pfear of Pfliers

Thursday, August 17th, 2006 05:28 pm
flwyd: (guarded cross santa fe church ironwork)
I'll be flying to Norway, via Newark, next Wednesday. My bad timing will probably prevent me from taking a water bottle to maintain hydration during the hours upon hours in the air. But, as The Register points out, it's quite hard to blow up a plane with liquids. So much for my hopes of international travel with only carry-on luggage. At least I, unlike my dad, don't have to stop over in Heathrow...

It's also interesting to note that the authorities raised the threat level when they had busted up the terrorist plot they'd been investigating for a year. Since when does arresting criminals make the streets more dangerous? Also, if they've been tracking these guys for a year, was there any danger they'd even get on a plane? Surely their purchase of an airline ticket would set off all sorts of bells.

Fun fact: the Department of Homeland Security website shows a Yellow threat level icon in the top right corner, but prominently announces that the U.S. threat level remains at ORANGE...

Incidentally, if you wish to contact me between 8/23 and 9/6, send email to trevorstone (a) gmail.com. My usual @trevorstone.org address gets too much spam to check at a glance and I haven't bothered to set up a webmail interface yet anyway.