Friday 29th of March 2024

war going to the rich dogs...

by Caitlin Johnstone

So hey they’ve started mounting sniper rifles on robodogs, which is great news for anyone who was hoping they’d start mounting sniper rifles on robodogs.

At an exhibit booth at the Association of the United States Army’s annual meeting and exhibition, Ghost Robotics (the military-friendly competitor to the better-known Boston Dynamics) proudly showed off a weapon made by a company called SWORD Defense Systems that is designed to attach to its quadruped bots.

 


“The SWORD Defense Systems Special Purpose Unmanned Rifle (SPUR) was specifically designed to offer precision fire from unmanned platforms such as the Ghost Robotics Vision-60 quadruped,” SWORD proclaims on its website. “Chambered in 6.5 Creedmoor allows for precision fire out to 1200m, the SPUR can similarly utilize 7.62×51 NATO cartridge for ammunition availability. Due to its highly capable sensors the SPUR can operate in a magnitude of conditions, both day and night. The SWORD Defense Systems SPUR is the future of unmanned weapon systems, and that future is now.”

Back in May the U.S. Air Force put out a video on the “Robotic Ghost Dog” with which these weapons are designed to be used, showing the machines jogging, standing up after being flipped over and even dancing. All of which becomes a lot less cutesy when you imagine them performing these maneuvers while carrying a gun designed to blow apart skulls from a kilometer away.

At one point in the video a senior master sergeant explains to the host how these robodogs can be fitted with all kinds of equipment, like communications systems, explosive ordnance disposal attachments and gear to test for chemicals and radiation, and the whole time you’re listening to him list things off you’re thinking “Guns. Yeah guns. You can attach guns to them, why don’t you just say that?”

The SPUR prototype is just one of many different weapons we’ll surely see tested for use with quadruped robots in coming years, and eventually we’ll likely see its successors tested on impoverished foreigners in needless military interventions by the United States and/or its allies.

They will join other unmanned weapons systems in the imperial arsenal like the USA’s notorious drone program, South Korea’s Samsung SGR-A1, the Turkish Kargu drone which has already reportedly attacked human beings in Libya without having been given a human command to do so, and the AI-assisted robotic sniper rifle that was used by Israeli intelligence in coordination with the U.S. government to assassinate an Iranian scientist last year.

And we may be looking at a not-too-distant future in which unmanned weapons systems are sought out by wealthy civilians as well.

 

In 2018 the influential author and professor Douglas Rushkoff wrote an article titled “Survival of the Richest” in which he disclosed that a year earlier he had been paid an enormous fee to meet with five extremely wealthy hedge funders. Rushkoff says the unnamed billionaires sought out his advice for strategizing their survival after what they called “the event,” their term for the collapse of civilization via climate destruction, nuclear war or some other catastrophe which they apparently viewed as likely enough and close enough to start planning for.

 

Rushkoff writes that eventually it became clear that the foremost concern of these plutocrats was maintaining control over a security force which would protect their estates from the rabble in a post-apocalyptic world where money might not mean anything. I encourage you to read the following paragraph from the article carefully, because it says so much about how these people see our future, our world, and their fellow human beings:

 

“This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.”

 

Something to keep in mind if you ever find yourself fervently hoping that the world will be saved by billionaires.

LinkedIn cofounder Reid Hoffman has said that more than half of Silicon Valley’s billionaires have invested in some type of “apocalypse insurance” such as an underground bunker to ensure they survive whatever disasters ensue from the status quo they currently benefit so immensely from.

The New Yorker has published an article about this mega-rich doomsday prepper phenomenon as well. We may be sure that military forces aren’t the only ones planning on having eternally loyal killing machines protecting their interests going forward.

We are ruled by warmongers and sociopaths, and none of them have healthy plans for our future. They are not kind, and they are not wise. They’re not even particularly intelligent. Unless we can find some way to pry their fingers from the steering wheel of our world so we can turn away from the direction we are headed, things will probably get very dark and scary.

 

 

Caitlin Johnstone is a rogue journalist, poet, and utopia prepper who publishes regularly at Medium. Her work is entirely reader-supported, so if you enjoyed this piece please consider sharing it around, liking her on Facebook, following her antics on Twitter, checking out her podcast on either Youtube, Soundcloud, Apple Podcasts or Spotify, following her on Steemit, throwing some money into her tip jar on Patreon or Paypal, purchasing some of her sweet merchandise, buying her books Notes From The Edge Of The Narrative Matrix, Rogue Nation: Psychonautical Adventures With Caitlin Johnstone and Woke: A Field Guide for Utopia Preppers.

deadly on automatic...

 

by Jack Crawford

At the outbreak of World War I, the French army was mobilised in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolour uniforms topped with feathers—the same get-up as when they swept through Europe a hundred years earlier. The remainder of 1914 would humble tradition-minded militarists. Vast fields were filled with trenches, barbed wire, poison gas and machine gun fire—plunging the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter.

Capitalism excels at revolutionising war. Only three decades after the first World War I bayonet charge across no man’s land, the US was able to incinerate entire cities with a single (nuclear) bomb blast. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers’ methods of war have been made yet more deadly and “efficient”.

Today imperialist competition is driving a renewed arms race, as rival global powers invent new and technically more complex ways to kill. Increasingly, governments and military authorities are focusing their attention not on new weapons per se, but on computer technologies that can enhance existing military arsenals and capabilities. Above all is the race to master so-called artificial intelligence (AI).

From its earliest days, military priorities have guided AI research. The event widely considered to mark the foundation of AI as a discipline—a workshop at Dartmouth College in 1956—was funded partly by the US military’s Office of Naval Research. The close military ties continued in the following decades. As the historian of computer science Paul Edwards explained in his book The Closed World: “As the project with the least immediate utility and the farthest-reaching ambitions, AI came to rely unusually heavily on ARPA [Advanced Research Projects Agency, later DARPA] funding. As a result, ARPA became the primary patron for the first twenty years of AI research”.

AI is a slippery concept. None of the achievements in the field in the 20th century actually involved independently “intelligent” machines. Scientists simply found ways to computerise decision-making using mathematics. For many decades AI remained either stuck in the realm of theory or limited to the performance of narrow algorithmic tasks. While a machine could beat a chess world champion in 1997, it took until 2015 before a machine could defeat a human champion in the Chinese strategy game Go. AI systems have long struggled to perform more complex, intuitive tasks irreducible to abstract logic.

Nonetheless, the 21st century has seen an explosion of interest and investment in AI. The term AI remains a catchall for anything that impressively uses algorithms. But machine learning systems have definitely become more advanced, and the data sets from which they learn vaster. Universities around the world are experimenting with “deep learning”—the training of layered artificial neural networks, loosely modelled on the neurons of the human brain, which learn to recognise patterns from vast quantities of data. The field appears to have entered a golden age, which may still be in its infancy.
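To make the term less mysterious, here is a purely illustrative Python sketch of what “deep learning” amounts to in miniature: a small stack of layers whose weights are adjusted from example data by gradient descent. The toy task (learning the XOR function), the layer sizes and every number below are arbitrary assumptions for demonstration, not drawn from any system discussed in this article.

import numpy as np

rng = np.random.default_rng(0)
# Toy data: the XOR function, a classic example a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights and biases: a very small "deep" network.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: data flows through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: compute gradients of the squared error and nudge
    # the weights a little in the direction that reduces it.
    delta2 = (out - y) * out * (1 - out)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ delta2
    b2 -= 0.5 * delta2.sum(axis=0)
    W1 -= 0.5 * X.T @ delta1
    b1 -= 0.5 * delta1.sum(axis=0)

# After training, the outputs typically settle near [[0], [1], [1], [0]].
print(np.round(out, 2))

The “deep” systems now attracting military investment work on the same principle, only with billions of weights and vastly larger data sets.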

The AI “boom” is centred, as before, on the military. A recent report from Research and Markets forecast that the global defence industry market for AI and robotics will grow to a value of US$61 billion in 2027, up from US$39 billion in 2018 (a cumulative expenditure of US$487 billion in ten years). This market growth is being driven by massive spending from major nation states, above all the US but including countries like China, Israel, Russia, India, Saudi Arabia, Japan and South Korea.

The big sellers in this market include the usual suspects—defence contractors such as Lockheed Martin, Boeing, Raytheon, Saab and Thales—as well as major technology and computing companies such as Microsoft, Apple, Facebook, Amazon and Alphabet (Google’s parent company). Indeed, the new AI arms race has seen increasingly close links forged between the California tech giants and the Pentagon. Amazon’s new CEO Andy Jassy sits on the National Security Commission on Artificial Intelligence (NSCAI), which is advising on the rollout of AI in the US armed forces. Jassy’s fellow NSCAI commissioners include Microsoft executives, and the former Alphabet CEO Eric Schmidt.

What, then, do militaries plan to do with the AI they’re spending billions on developing? There are three main areas of interest: lethal autonomous weapons (also known as “killer robots”), cyber-attacking software, and surveillance and tracking systems.

The first category is what it sounds like: take existing weapons and make them autonomous. This is a military strategist’s dream—to have machines engage targets with minimal human direction or supervision. The guiding principle is “force multiplication”—that is, helping a smaller number of people achieve a greater level of destruction.

Under the Obama administration, unmanned (albeit remotely controlled) drones became a key tool of US force and intimidation in the Middle East and South Asia. Edward Snowden’s leaks in 2013 revealed how global metadata surveillance allowed drones to geolocate the SIM card of a suspect, then track and strike whoever held the device. “We kill people based on metadata,” admitted US General Michael Hayden. Or as one National Security Agency unit’s motto put it: “we track ‘em, you whack ‘em”.

But even this technology seems crude compared to the newer US drones. Frank Kendall, Secretary of the US Air Force, announced in September 2021 that the Air Force had recently “deployed AI algorithms for the first time to a live operational kill chain”. While the details of the operation remain secret, Kendall has boasted that AI’s provision of “automated target recognition” would “significantly reduce the manpower-intensive tasks of manually identifying targets—shortening the kill chain and accelerating the speed of decision-making”. A complex algorithm, which learns from other drones’ “experiences”, could allow a drone to independently perform all stages of the “kill chain”—identifying the target, dispatching force to the target, deciding to kill the target, and finally physically destroying the target.

A new age of weaponry was also signalled by Mossad’s use of a semi-autonomous satellite-operated machine gun to assassinate Iran’s top nuclear physicist Mohsen Fakhrizadeh last year. The software used in the assassination was reportedly able to recognise Fakhrizadeh’s face, with none of the bullets hitting his wife seated inches away.

The main global militaries are now trying to introduce autonomous elements to almost all equipment—tanks, submarines, fighter jets and more. The new AUKUS pact, for example, involves more than just nuclear submarines. There is also talk of AI, quantum technology, hypersonic missiles, cyber weapons, undersea drones and “smart” naval mines. Marcus Hellyer of the Australian Strategic Policy Institute envisions a setup in which a “mother ship” with a human crew can deploy fleets of smaller, autonomous drones to launch attacks on rival vessels.

Recently, one weapon in particular has climbed to the top of every military’s wish list: drone swarms. In this case, a large number of small drones act in unison, replicating the swarming patterns of birds and bees. It is frighteningly easy to program each participating drone to follow simple guidelines (steer with the motion of the group while avoiding collisions), allowing the swarm to act as a swift and unpredictable whole. These swarms, the subject of the 2017 short film Slaughterbots, are rapidly becoming a reality. Aside from the US and China’s swarm programs, 2021 has seen testing and development in Spain’s RAPAZ program, Britain’s Blue Bear, France’s Icarus Swarms, Russia’s Lightning swarms, and Indian Air Force projects.
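To give a sense of just how simple those guidelines can be, here is a purely illustrative Python sketch of the classic “boids” flocking rules (cohesion with and alignment to nearby agents, plus separation to avoid collisions) that swarm research builds on. All names, radii and weights below are arbitrary assumptions for demonstration and have nothing to do with any actual military swarm program.

# Minimal "boids"-style flocking sketch: each agent steers towards the average
# position and heading of its neighbours (motion coherence) while steering away
# from agents that come too close (collision avoidance). Purely illustrative;
# all parameters are arbitrary assumptions.
import numpy as np

N = 50                  # number of agents
NEIGHBOUR_RADIUS = 5.0
SEPARATION_RADIUS = 1.0
DT = 0.1

rng = np.random.default_rng(0)
positions = rng.uniform(0, 20, size=(N, 2))
velocities = rng.normal(0, 1, size=(N, 2))

def step(positions, velocities):
    new_velocities = velocities.copy()
    for i in range(N):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists < NEIGHBOUR_RADIUS) & (dists > 0)
        if neighbours.any():
            # Cohesion: steer towards the centre of nearby agents.
            cohesion = offsets[neighbours].mean(axis=0)
            # Alignment: match the average velocity of nearby agents.
            alignment = velocities[neighbours].mean(axis=0) - velocities[i]
            # Separation: steer away from agents that are too close.
            too_close = (dists < SEPARATION_RADIUS) & (dists > 0)
            separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
            new_velocities[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
    return positions + DT * new_velocities, new_velocities

for _ in range(100):
    positions, velocities = step(positions, velocities)

Each agent only looks at its immediate neighbours, yet the group as a whole moves coherently, which is precisely what makes swarms cheap to scale and hard to predict.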

The first deployment of a drone swarm in combat was in Libya in March 2020, when Turkish attack drones were able to autonomously hunt and kill rebels in Tripoli. In May this year, Israel used a drone swarm in Gaza to find and attack Hamas fighters.

Aside from its use in lethal autonomous weapons, AI is central to the emergent frontiers of cyber conflict. In 2009, a computer worm called Stuxnet (of joint US-Israeli origin) was able to find its way into software controlling Iran’s uranium enrichment facilities. Hiding its own tracks, the worm searched for and attacked a specific piece of code, causing uranium centrifuges to spin out of control and destroy themselves.

Stuxnet sparked a cyber arms race. As Michael Webb, director of Adelaide University’s Defence Institute, told the Sydney Morning Herald, the world’s militaries are now “fighting for supremacy in cyber”. The first acts of the next major war might involve the paralysis of satellites, radar systems or electricity grids. Nations are placing their cyber defences in the control of AI systems, which can react and retaliate to cyber threats faster than any human can. As Webb explains, “Many of the cyber attacks that are the hardest to combat use AI already”.

Imperialist military rivalry has always involved dangerous guessing games. The Cuban Missile Crisis is a classic example, where neither side could confidently know the intentions of its rival. The Soviet submarine B-59—too deep underwater to pick up radio broadcasts—almost launched its nuclear torpedo as the captain believed war had broken out. The possibilities of cyber warfare intensify such dangers—as belligerents are forced to guess what an AI-guided virus might target. If a nuclear-armed state feels vulnerable in cyberspace, and fears the targeting of its nuclear weapons systems, it may choose to deploy such weapons sooner rather than later.

The final AI capacity that militaries are interested in is mass surveillance. This is probably the most developed area to date. Already, AI is being used to scan and analyse footage from millions of drones and CCTV cameras around the world—searching for patterns and tracking particular faces at a scale and speed that no human can. Particularly when combined with the potential use of autonomous weapons, drone swarms and so on, AI-enhanced surveillance has a worrying potential to assist military attacks—whether on those deemed “enemy combatants” on an international or domestic level.

Many scientists have warned against AI’s use by militaries, and particularly autonomous weapons. Thousands of AI and robotics researchers have signed an open letter, initiated in 2015, demanding a UN ban on killer robots to avoid a global AI arms race.

Unfortunately, this arms race is already well underway. The Pentagon is in a self-described “sprint” to catch up with China’s AI capabilities. In July, US Defense Secretary Lloyd Austin announced a new US$1.5 billion investment to accelerate the military adoption of AI over the next five years. A new sense of urgency was added in September, when the US Air Force’s first chief software officer, Nicolas Chaillan, resigned in protest, arguing that the US was losing the AI arms race.

Despite the rapid development of AI technology in the past decade, it’s important to note that it still has major limits. Robots are still far from rivalling the abilities of human brains. They often fail to understand new problems or circumstances foreign to their coding or prior “experience”. Autonomous systems used by militaries today almost always involve humans in decisions to target and kill. There are big question marks about how much trust authorities are prepared to place in autonomous systems.

Nobody knows exactly how AI could shape a future war. But we shouldn’t expect the fully-automated conflict imagined in some science fiction. There is a good reason why China has built the world’s largest standing army, and why fully-staffed US submarines and aircraft carriers surround China’s coast.

Nor does new technology make military hierarchies all-powerful or immune to insubordination. It can introduce new instabilities, as well as avenues for resistance. We got a small glimpse of this in 2018, when thousands of Google workers protested against the company’s participation in the Pentagon’s Project Maven, which used AI to interpret drone footage and assist air strikes. Their rebellion forced Google to drop the project.

AI enthusiasts point to the immense potential of this technology to help create a more productive economy and a healthier society. But the promise of AI will be limited so long as its development occurs in the context of the violent and destructive dynamics of capitalism. Today, a large part of AI research and development is bound up with the projects of competing military regimes. It is used above all for its destructive—not progressive—potential. Its development in the hands of imperialists should be opposed by socialists, just like the nuclear arms race of the 1950s and ’60s.

 

Read more:

https://redflag.org.au/index.php/article/algorithms-war-military-plan-artificial-intelligence

 

Read from top.

 

 

FREE JULIAN ASSANGE NOW !!!!!!!!!!!!!!!