While the political and military tactics in this particular case, and how the North Koreans may react or retaliate, are fascinating, I'd like to take a step back and look at the bigger picture. The U.S. military is using "cyberweapons" and committing what one might term "cyberwarriors" in an offensive operation attacking infrastructure developed by a foreign state. This is also not a new policy: ample evidence indicates that U.S. and Israeli intelligence agencies built and deployed the Stuxnet worm to cause physical damage to nuclear centrifuges at Natanz in Iran, setting the Iranian government's nuclear ambitions back by a few years (and driving them batty in the process). The Stuxnet attack was authorized by George W. Bush and reauthorized by Barack Obama. The North Korean attacks were authorized by the Obama administration (probably by the president himself) and are now under the direction of the Trump administration.
In many ways a "cyberweapon" is just a collection of software vulnerabilities packaged in a way that helps the attacker obtain information from or disrupt the operations of a particular network or computer system run by a target. A cyberwarrior and a common hacker use, by and large, the same tools–they differ primarily in motives and focus. The same vulnerability in, say, Microsoft Windows can be exploited to steal bank account credentials or to infect an industrial control system and disrupt a nuclear fuel operation. The Stuxnet attack on Natanz and the current North Korean attacks have had significant physical-world effects: large and dangerous objects were destroyed. Criminal and ideological hackers have thus far focused on digital payloads–stealing data, transferring money–but there's nothing preventing non-government attackers from creating havoc in the physical world, from disabling the power grid to releasing a flood of water from a dam to potentially triggering a nuclear meltdown.
Unlike the military hardware of the last century, the software and hardware that governments, businesses, and individuals use comes from a small group of companies with worldwide scope. The Microsoft Windows on laptops, the Android on smartphones, and the firmware in Cisco routers are code used by people in every country. By and large, there's no "Iranian software," "North Korean software," or "American software"–with the Internet, it's all "world software." Every security bug that a military or intelligence agency discovers but doesn't disclose is one that goes unpatched by the vendor and could be exploited by another government or a crime gang, harming the citizens, businesses, and government agencies that the military and spy agencies are tasked with protecting. This presents a particular problem for an organization like the NSA, which has an explicit commitment both to defending American computer and communication networks and to intercepting foreign communications of interest.
Physical war and military operations are, in large part, studied and debated in the open. The public and our elected representatives don't know the details of all military plans and technologies, but we're broadly aware of where military force is being used offensively and defensively and what sorts of tools and tactics we're comfortable with the military using. The country is able to have a public debate about topics like whether we should invade a country, how much money we're willing to spend to achieve military objectives, and whether we consider torture or cluster bombs to be acceptable tools of war-making. (The pro-war advocates are often able to start framing the issues well before the public engages in the debate, so we tend not to have a very effective public debate, but the opportunity is still there.)
"Cyberwar" and offensive security compromises by the government, by contrast, have been conducted under a thick veil of government secrecy. The people of the United States and our representatives in Congress have not had an opportunity to have a public debate about whether we're comfortable with the military, without declaring war, instigating digital battles with sovereign states. We haven't had a proper conversation about which sorts of cyberattacks we're comfortable with and which are beyond the pale. We haven't effectively discussed the balance between concealing and weaponizing computer vulnerabilities for attack and defending our country's digital infrastructure by helping companies patch software when intelligence agencies find bugs. And we really haven't come to grips with what we'll do when a "cyberweapon" targeted at a narrow military objective is too effective and causes unintended damage elsewhere on the Internet, including to American assets. We also haven't taken the opportunity to establish treaties governing digital offensive activities, so other nations may take a cue from U.S. operations, decide that hacking is a legitimate tactic, and invest in "cyberarmies" of their own that attack and hurt American computer networks.
There are reasonable arguments both in favor of and opposed to offensive military hacking. Proponents can point out that a well-executed cyberattack has the potential to achieve its objective more cheaply and with far less loss of life than a conventional military operation. Opponents can point out that a hacking operation that gets out of hand–as the Stuxnet worm did, infecting millions of computers around the world–is a lot harder to halt than calling off the bombers and withdrawing the troops. But so far these debates have been held quietly behind closed government doors and in memoranda stamped Top Secret. The public doesn't know what conclusions were reached, and the public's interests were not well represented. Once Congress is done with its current project of attacking affordable health care and defunding public agencies, it would be great if it could demand that military, intelligence, and executive agencies include the American people in the conversation about the rules of engagement for offensive hacking. I'm not holding my breath, though.
Anyone interested in this topic should watch the fantastic documentary Zero Days. It features a detailed technical explanation of how the Stuxnet worm works from the security researchers who studied it (along with impressive cinematic techniques to make the explanation engaging), insights from anonymous NSA employees, and some remarkably frank on-the-record interviews with government officials. I'd read the initial reporting on Stuxnet in 2010, but the movie exposed several fascinating facets of the story of which I was unaware. The interviews with government-associated figures helped shape the views I expressed above, and some of the officials seem to share my position.
Bruce Schneier often writes well about this subject, too.