Friday 29th of March 2024

the bots are everywhere...

zutbot 123

Internet bots—those automated scripts that do everything from gathering stock prices to commandeering innocent computers to launch cyberstrikes—have recently come under attack as threatening the web, democracy and our very way of online life. During the 2016 presidential election, Russia unleashed an army of bots to troll Facebook and other sites, amplifying political division in support of Donald Trump and Bernie Sanders. Twitter reported that in May alone it found nearly 10 million bots each week. Some of these Twitter bots posed as fans to enhance the popularity of celebrities (when the bots were stricken from the rolls, former President Barack Obama, Katy Perry and Oprah suddenly became about 2 percent less popular). Ever ready to wield a legislative remedy, California is considering laws to formally define and regulate all manner of online bots.


If these bots are so terrible, why not simply outlaw them?

Well, for starters, the internet could barely function without them. Google, for example, can only index the web and present search results through the use of bots—it calls the process “Googlebot”—which it describes as a spider that crawls nearly every website on the internet, often every few seconds. Say what you will about the fairness of algorithms that order search results, but you’re not going to have much patience for a search engine that depends on humans laboriously copying data from individual websites to craft its rankings. Businesses commonly use bots to crawl and scrape the websites of competitors for real-time pricing information. But there’s another argument in favor of bots that gets far less attention, at least outside of a courtroom in Washington, D.C. And it is challenging the notion that all bots—even fake accounts—are evil.
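To make concrete what "crawling and scraping" means in practice, here is a minimal sketch of such a price-watching bot in Python, using the common requests and beautifulsoup4 libraries. The target URL and the "price" CSS class are invented placeholders, not a real site or API:

# Minimal sketch of a price-scraping bot. The target URL and the
# ".price" selector are hypothetical; a real crawler would also
# honour robots.txt and the site's terms of service.
import time

import requests
from bs4 import BeautifulSoup

COMPETITOR_URL = "https://example.com/widgets"  # placeholder page

def fetch_prices(url: str) -> list[str]:
    """Download one page and pull out every element tagged as a price."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select(".price")]

if __name__ == "__main__":
    while True:
        print(fetch_prices(COMPETITOR_URL))
        time.sleep(3600)  # re-crawl hourly, much as Googlebot re-visits pages

That is all a "spider" really is: fetch, parse, extract, repeat.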

The case unfolding in the federal district courtroom of Judge John D. Bates was filed by a group of civil rights researchers who depend upon crawling and scraping bots, along with thousands of fake accounts, to uncover persistent and pernicious discrimination—based on race, gender or age—on employment and housing platforms, and across the very web itself. In their hands, internet bots are a potentially unparalleled tool for social justice, albeit one that happens to run afoul of the terms of service of platforms like Facebook and Twitter that prohibit bots and fake accounts.

In a preliminary ruling in March, Bates held that these researchers could well enjoy a First Amendment right to create fake accounts, along with their attendant bot automation, to crawl web platforms, scrape their contents and use the data to statistically measure discrimination. He might as well have said, when it comes to bots, we must learn to tell good from bad.

If there’s such a thing as a good guy with a bot, it’s someone like Christian Sandvig. He’s one of a number of researchers in this new field of “algorithmic accountability,” and he’s a plaintiff in the pending D.C. litigation. Sandvig’s mission is to detect online discrimination in housing or employment opportunities on online platforms and on the web writ large. Do women see fewer ads for high-paying CEO jobs than men? Do white couples see ads for apartment rentals that black couples do not? Preliminary studies suggest the answer to both questions could quite possibly be yes, and often the cause might be an algorithm rather than a deliberate choice by an employer or a landlord. As Sandvig puts it, what do we do “when the algorithm itself is a racist”?
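How would such discrimination be "statistically measured"? Here is a back-of-the-envelope sketch in Python, with invented counts that come from no real study: run two matched sets of fake accounts, identical except for gender, record how often each set is shown a CEO-job ad, and test whether the gap could plausibly be chance. The two_proportion_ztest helper below is our own illustration, not a function from any audit toolkit:

# Sketch of the statistics behind a paired audit. Counts are invented:
# 900 of 1,000 "male" test profiles saw the CEO-job ad, vs 820 of
# 1,000 "female" profiles.
from math import erf, sqrt

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled success rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two normal tails
    return z, p_value

z, p = two_proportion_ztest(900, 1000, 820, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")  # a tiny p-value suggests the skew is not chance

With numbers like these, z comes out around 5, far beyond what chance would produce. That, in statistical miniature, is what "measuring discrimination" with bots and fake accounts means.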

We have known for a little while now that on certain social media platforms there was a potential for people to do what people have done from time immemorial—discriminate. In a series of recent articles, ProPublica showed how Facebook allowed advertisers to discriminate on the basis of race, gender, age, or status as a parent—all categories protected by law—in placing ads for housing. In 2016 and part of 2017, an advertiser could exclude African-Americans from seeing housing ads; when Facebook fixed this setting, ProPublica reported in 2018 that it was still possible to exclude based on gender, by checking a box to exclude, for example, moms with children.

But these earlier ProPublica studies showed only that one could check these boxes, not that employers, landlords or real estate agents actually did. Moreover, these studies, telling as they are, do not address the big data algorithms that now dominate advertising. They do not tell us whether, for reasons not directly attributable to any person, an algorithm has learned from past data to show CEO jobs to men more often than to women.

 

Read more:

https://www.politico.com/magazine/story/2018/08/14/russia-gave-bots-a-bad-name-heres-why-we-need-them-more-than-ever

 

So far there is no proof that the "bots" that "elected" Trump came from Russia (or the Kremlin — they could have come from the USA in a detour around the planet — some suggest Macedonia), and second, there is no proof that these bots ever swung the election in favour of Trump while Hillary was fumbling and being arrogant. What helped Trump most was the Murdoch media and the evangelicals.

 

On the other subject, bots do provide "instant" knowledge...

50 shades of bots...

bitbots

Are you average in every way, or do you sometimes stand out from the crowd? Your answer might have big implications for how you're treated by the algorithms that governments and corporations are deploying to make important decisions affecting your life.

"What algorithms?" you might ask. The ones that decide whether you get hired or fired, whether you're targeted for debt recovery and what news you see, for starters.

Automated decisions made using statistical processes "will screw [some] people by default, because that's how statistics works," said Dr Julia Powles, an Australian lawyer currently based at New York University's Information Law Institute.

This is because many algorithms rely on a statistical process such as averaging, resulting in:

  • some decisions that are fair: those about people who fit the assumptions made by the algorithm's creators; but
  • other decisions that are unfair: those about people who don't fit the assumptions, the statistical outliers (a toy sketch of this point follows below).
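Here is that toy sketch in Python, with invented numbers: a decision rule calibrated to the "average" applicant treats everyone near the mean sensibly and flags exactly the people who sit far from it. The income figures, the two-standard-deviation cutoff and the function name are all made up for illustration:

# Toy illustration: a rule tuned to the statistical average mishandles
# the statistical outlier. All numbers are invented.
from statistics import mean, stdev

incomes = [52, 48, 55, 50, 49, 51, 53, 47, 50, 140]  # $k/yr; one outlier

mu, sigma = mean(incomes), stdev(incomes)

def fits_the_model(income: float, z_cutoff: float = 2.0) -> bool:
    """'Normal' here means within z_cutoff standard deviations of the mean."""
    return abs(income - mu) / sigma < z_cutoff

for income in incomes:
    verdict = "handled fairly" if fits_the_model(income) else "flagged as anomalous"
    print(f"${income}k/yr -> {verdict}")

Nine of the ten applicants sail through; the one outlier is flagged, not because of anything they did, but because they don't fit the creators' assumptions.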

Dr Powles and Professor Frank Pasquale, an expert in big data and information law at the University of Maryland, have been travelling around Australia discussing algorithmic accountability and highlighting the dangers of a future where the use of big data and automated algorithmic decision-making goes unchecked.

Centrelink's robo-debt system was a clear example of automated algorithmic decisions causing real harm, Dr Powles said.

"It's so bizarre to me that we're in a state where there's no requirement to prove the benefits of automation in the face of proven harms."

Automated algorithmic decisions rely on the data that goes into them. If that data is incorrect or out of date, bad decisions will result — a problem Ibrahim Diallo discovered first-hand when he found out he had been fired by an automated computer system that had incorrect data about him.

"The key thing to know about data-driven decisions is when you say you're making a decision on the basis of the data, you're also cutting off everything else," Professor Pasquale said.

 

Read more:

http://www.abc.net.au/news/2018-08-21/algorithmic-decisions-accountabili...

the hidden bugs in the bots...

When WikiLeaks exploded onto the scene a decade ago, it briefly seemed like the internet could create a truly open society. Since then, Big Brother has fought back. 

Every day now, we hear complaints about the growing control of digital media, often from people who apparently believe the internet was originally an unregulated free-for-all.

However, let's remember the origin of the internet. Back in the 1960s, the US Army was thinking about how to maintain communications among surviving units in the event that a global nuclear war destroyed central command. Eventually, the idea emerged of laterally connecting these dispersed units, bypassing the (destroyed) center.

Thus, from the very beginning, the internet contained a democratic potential, since it allowed multiple direct exchanges between individual units, bypassing central control and coordination – and this inherent feature presented a threat to those in power. As a result, their principal reaction was to control the digital "clouds" that mediate communication between individuals.

"Clouds" in all their forms are, of course, presented to us as facilitators of our freedom. After all, they make it possible for me to sit in front of my PC and freely surf, with everything out there at my disposal – or so it seems on the surface. Nevertheless, those who control the clouds also control the limits of our freedom.

Hiding the remote

The most direct form of this control is, of course, direct exclusion: individuals and also entire news organizations (TeleSUR, RT, Al Jazeera etc.) can disappear from social media (or their accessibility is limited – try to get Al Jazeera on the TV screen in a US hotel!) without any reasonable explanation being given – usually pure technicalities are cited.

While in some cases (for instance, direct racist excesses) censorship is justified, it's dangerous when it simply happens in a non-transparent way. The minimal democratic demand that should apply here is that such censorship be done transparently, with public justification. These justifications can also be ambiguous, of course, concealing the true reasons.

In Russia, you may be sent to jail for publishing things on the internet of which you actually strongly disapprove. The latest example is that of Eugenia Chudnovets, a kindergarten teacher in Ekaterinburg, who was sentenced to five months in a penal colony for reposting a video showing a child being abused in a summer camp. She had been convicted under an article prohibiting the "spreading, publicly demonstrating or advertising of data or items containing sexually explicit images of underage children," as she had reposted a video on a social network showing a naked kid being abused in a children's camp in the town of Kataisk in Kurgan Region. (The conviction was overturned on March 6, 2017.) The teacher herself explained that she could not let the flagrant incident go unnoticed – and she was right. It seems clear that the true reason for her conviction was not to prevent sexually explicit images of children, but to cover up the abuse going on in public institutions that is tolerated by the state.

Historical memory

However, we cannot dismiss this case as something that can only happen in Putin's oppressive Russia – we find exactly the same rationale in the first well-known case of such social media censorship, back in September 2016, when Facebook decided to remove the historical photograph of nine-year-old Kim Phuc running away from a napalm attack. Days later, following a public outcry, the image was reinstated.

Looking back, it's interesting to note how Facebook defended its decision to remove the image: "While we recognize that this photo is iconic, it's difficult to create a distinction between allowing a photograph of a nude child in one instance and not others." The strategy is clear: a general, neutral moral principle (no nude children) is evoked to censor a historical reminder of the horrors of the napalm bombing of Vietnam. Taken to the extreme, this reasoning could also be used to justify prohibiting the films shot immediately after the liberation of Auschwitz and other Nazi camps.

And, incidentally, a similar thing happened to me repeatedly two years ago when, in my conferences, I described the strange case of Bradley Barton from Ontario, Canada, who, in March 2015, was found not guilty of the first-degree murder of Cindy Gladue, an indigenous sex worker who bled to death at the Yellowhead Inn in Edmonton, having sustained an 11cm wound on her vaginal wall. The defense argued that Barton accidentally caused Gladue's death during rough but consensual sex, and the court agreed.

Yet this case doesn't just contradict our basic ethical intuitions – a man brutally murders a woman during sexual activity, but he walks free because "he didn't mean it." Rather, the most disturbing aspect of the case is that, conceding to a demand of the defense, the judge allowed Gladue's preserved pelvis to be admitted as evidence: it was brought into court and the lower part of her torso was displayed for the jurors (incidentally, the first time a portion of a body had been presented at a trial in Canada). Why would hard-copy photos of the wound not have been enough?

Speak no evil

But my point here is that I was repeatedly attacked for my report on this case: the reproach was that by describing the case I reproduced it and thus symbolically repeated it. Although I shared it with strong disapproval, I allegedly secretly enabled my listeners to find perverse pleasure in it.

And these attacks on me exemplify nicely the "politically correct" need to protect people from traumatic or disturbing news and images. My counterpoint is that, in order to fight such crimes, one has to present them in all their horror, and one has to be shocked by them.

In another era, in his preface to 'Animal Farm,' George Orwell wrote that if liberty means anything, it means "the right to tell people what they do not want to hear" – THIS is the liberty that we are deprived of when our media are censored and regulated.

We are caught in the progressive digitalization of our lives: most of our activities (and passivity) are now registered in some digital cloud that also permanently evaluates us, tracing not only our acts but also our emotional states. When we experience ourselves as free to the utmost (surfing the web where everything is available), we are totally "externalized" and subtly manipulated.

So, the digital network gives new meaning to the old slogan "the personal is political." And it's not only the control of our intimate lives that is at stake: everything today is regulated by some digital network, from transport to health, from electricity to water.

And this is why the web is our most important commons today, and the struggle for its control is THE struggle of our time. And the enemy is the combination of privatized and state-controlled entities, corporations (such as Google and Facebook) and state security agencies (for example, the NSA).

The digital network that sustains the functioning of our societies, as well as their control mechanisms, is the ultimate figure of the technical grid that sustains power, and that's why regaining control over it is our first task.

WikiLeaks was just the beginning, and our motto here should be a Maoist one: Let a hundred WikiLeaks blossom.


 

Read all:

https://www.rt.com/op-ed/436648-internet-liberty-slavoj-zizek/

 

 

the raw prawns vs the golden staph...

As we chow down on prawns, crab and lobster this Christmas, Queensland scientists are waiting in the wings to get their hands on the stinky leftovers.

Key points
  • Prawn shells that normally end up in the trash are being used to heal wounds
  • The white powder kills bugs that have become resistant to antibiotics
  • Researchers say it's a game changer for ulcer sufferers, particularly diabetics

 

Australians eat over 33,000 tonnes of crustaceans each year — according to the Fisheries Research and Development Corporation — and researchers at the Queensland University of Technology (QUT) have found a way to turn the old shells into a wound healer capable of fighting antibiotic-resistant bugs. 

Trials have shown the anti-bacterial membrane, made from an active ingredient called chitosan, could fight off "golden staph", a deadly super bug that has wreaked havoc in hospitals. 

Lead researcher Dr Phong Tran said while the powder made from crustacean shells was not new, using it as an anti-microbial wound cover to combat deadly super bugs was.

Dr Tran said trials found the membrane — when supercharged with an infusion of anti-microbial agents like selenium and silver — killed methicillin-resistant golden staph.

"It has been a long time that I have been working on this and from the tests we have done we are very confident that it works very well," Dr Tran said. 

"Crustacean shells contain the second most common biopolymer on Earth, after cellulose. 

"The shells are cheap and abundant because they are normally just rubbish that you want to get rid of as fast as possible."

He said after processing the shells to remove impurities such as heavy metals, chitosan was a safe, white raw powder material.

"We dissolve [it] in an acidic solution, pour a thin liquid layer into a container and freeze to allow the chitosan to form a network," he said. 

"We take the frozen container out and neutralise all the acid to peel off a flexible membrane, which has many properties appropriate for treating wounds.

"We found cells from the wound migrate into and grow well in these highly porous membranes." 

He said once it was loaded with anti-microbial agents like selenium, it stopped super bugs growing and spreading.

In the laboratory at QUT's Institute of Health and Biomedical Innovation, scores of petri dishes showed how the battle was being won. 

"So I have here a petri dish full of super bugs everywhere … you see the golden colour," Dr Tran said.

"Where I have the samples you see a clear zone where no bacteria can grow.

"Clear rings around the membrane discs show where the bacteria was killed.

"It is a solid circle that literally shuts off the antibiotic-resistant super bug and stops it growing and spreading."

'Game changer' for ulcer sufferers

Dr Tran was collaborating with the University of Melbourne to investigate the incorporation of more antibacterial agents into the membrane.

"Skin wounds caused by trauma or disease can sometimes be challenging to treat because of the widespread emergence of drug-resistant bacteria and fewer discoveries of new antibiotics," Dr Tran said. 

Queensland company Biomedical Innovation was also working on the project.

 

Read more:

https://www.abc.net.au/news/2018-12-24/christmas-prawn-shells-used-to-fight-superbugs-qut/

 

 

Read from top

 

In terms of the internet, it is said that more than 70 per cent of the net is "dark". That is to say, it is full of crooks armed with bugs trying to infiltrate your computer or make illicit communications, steal your cash from your bank and become you — and this without accounting for the "super"-dark portions of the CIA and the NSA that are trying to infect other countries' networks. Lucky there are people who are dedicated to fighting the "dark net". Maybe they should use "raw prawns" to stink the phishing bastards out...

G. L.