Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.
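
As a rough illustration of the matching step described above, here is a minimal sketch using the open-source face_recognition library. The filenames are hypothetical stand-ins for an enrolled database photo and a single camera frame; a real deployment would match live video against many thousands of enrolled images.

```python
# A minimal sketch of face matching, assuming the face_recognition
# library (dlib-based) is installed and the two image files exist.
import face_recognition

# "Enroll" one known face from a hypothetical database photo.
known_image = face_recognition.load_image_file("database_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a hypothetical camera frame.
frame = face_recognition.load_image_file("camera_frame.jpg")
for unknown_encoding in face_recognition.face_encodings(frame):
    # compare_faces returns True when two encodings are within a
    # distance threshold (0.6 by default) of each other.
    if face_recognition.compare_faces([known_encoding], unknown_encoding)[0]:
        print("frame contains a face matching the database photo")
```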

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
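
To make the smartphone point concrete, here is a minimal sketch of passive Wi-Fi tracking with the scapy packet library. It assumes a Linux machine, root privileges, and a wireless interface already in monitor mode (the interface name wlan0mon is hypothetical). Modern phones randomize these broadcast addresses, which complicates, though does not reliably defeat, this kind of tracking.

```python
# A sketch of logging the MAC addresses that nearby devices broadcast
# in Wi-Fi probe requests; assumes scapy is installed and "wlan0mon"
# is an interface already placed in monitor mode.
from scapy.all import sniff, Dot11, Dot11ProbeReq

seen = set()

def log_probe(pkt):
    # 802.11 probe requests carry the sender's MAC address in addr2.
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2
        if mac not in seen:
            seen.add(mac)
            print(f"device seen: {mac}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```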

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are, using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.
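
A toy sketch of that correlation step, with entirely invented records: three datasets collected by different parties, joined into a single profile on one persistent device identifier.

```python
# Joining separately collected datasets on a shared identifier.
# Every record below is invented for illustration.
from collections import defaultdict

location_pings = [("device-7f3a", "2020-01-27 08:02", "Main St & 5th")]
purchases = [("device-7f3a", "2020-01-27 08:15", "coffee shop, $4.50")]
browsing = [("device-7f3a", "2020-01-27 12:40", "searched: diabetes symptoms")]

profiles = defaultdict(list)
for dataset in (location_pings, purchases, browsing):
    for device_id, timestamp, event in dataset:
        profiles[device_id].append((timestamp, event))

# One identifier now links movement, spending, and health interests.
for device_id, events in profiles.items():
    print(device_id, sorted(events))
```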

There is a huge, and almost entirely unregulated, data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are; it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies and governments to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
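
A toy illustration of how fragile that anonymity is, again with invented records: a long pseudonymous trail keyed by a cookie, plus a single credit-card order carrying the same cookie, names the entire history.

```python
# One real-name transaction retroactively de-anonymizes a whole trail.
# All identifiers and records are invented for illustration.
anonymous_trail = [
    ("cookie-9c41", "2020-01-10", "visited mall, electronics aisle"),
    ("cookie-9c41", "2020-01-15", "browsed prenatal vitamins"),
    ("cookie-9c41", "2020-01-20", "searched divorce lawyers"),
]

# A single credit-card purchase links the pseudonym to a name.
orders = {"cookie-9c41": "Jane Q. Public"}

for cookie, date, event in anonymous_trail:
    name = orders.get(cookie, "unknown")
    print(f"{date}: {name} {event}")  # the whole history is now named
```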

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law, passed in Vermont in 2018, that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. That’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM

Comments

John Carter January 27, 2020 1:39 PM

In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

Where does one start and the other end?

As they say…. America has the best democracy money can buy….

jdgalt January 27, 2020 1:40 PM

I would much rather see the law compel clearview.ai to sell its wares to you and me. It would be very useful in proving, to media and maybe to courts, when political protests are being infiltrated or attacked under false flags by the likes of Antifa or Soros’s rent-a-rioters.

Some Prole January 27, 2020 1:45 PM

While discrimination is immoral (and illegal), in the age of “unconscious bias training” it is just another label being misused as part of ongoing culture wars. These “discriminating” algorithms are just another way to get around being forced to meet diversity quotas and similar anti-meritocratic practices.

Wu Jin Han January 27, 2020 2:53 PM

Ubiquitous surveillance is what we are talking about, whatever means of observation is used.

Only terrible outcomes are possible. The thinking person must read parts of Solzhenitsyn’s Gulag Archipelago, and contemplate what the Peronistas did in Argentina, or what J. Edgar Hoover did to Dr. King.

Stasi colonel Wolfgang Schmidt forecast: “It is the height of naivete to think that once collected this information won’t be used… The only way to protect people’s privacy is to not allow government to collect their information in the first place.”

Now, just what are the mechanisms we should wisely use to restrict the collection of our data?

The Bill of Rights is a beautiful document. Could we enshrine some new amendments that would answer the needs of the 21st century?

The Bolt Beranek and Newman guys, or Jon Postel, were just amazed they could make the network function at all. No privacy was built into the original internet/networking protocols.

Besides the legal changes to protect us from ubiquitous surveillance, we should also rework the way the internet works. I like MinimaLT. This is the way it should have been designed.

(And, the original engineers who developed modern networking technology should feel no dishonor. They did great work, but if we don’t make substantive changes, we will suffer.)

https://cr.yp.to/tcpip/minimalt-20130522.pdf

“MinimaLT provides the features of TCP/IP (reliability, flow control, and congestion control), and adds in encryption, authentication, clean IP mobility, and DoS protections, all while preserving PFS and reducing latency costs.”

David Leppik January 27, 2020 3:19 PM

Completely forgotten in this whole debate is the fact that cell phones are inherently tracking devices. It’s not clear that we could design a cell phone that doesn’t know where you are at all times and still route calls to you in real time.

Sancho_P January 27, 2020 4:01 PM

@David Leppik

No way!
Please don’t forget the children that could be lost, terrorism, CP, war on drugs+, automatic car crash notification, down to stolen mobile devices.
😉

65535 January 27, 2020 4:11 PM

“…the San Diego Police Department that used the system the most, which the department says reflects the size of its police force (over 1,900 officers)…” – ht tps://www.fastcompany[.]com/90440198/san-diegos-massive-7-year-experiment-with-facial-recognition-technology-appears-to-be-a-flop

Technology can be a club – or a weapon. It can easily be used or misused. This “Facial recognition” software is another example of the government creating a one-way mirror to study citizens – yet the citizens cannot use facial recognition to study the government down to the individual police officer.

Let’s turn the situation around. Let the citizens photograph all 1,900 officers of the San Diego Police Department so they can instantly recognize each and every police officer and call them by their first name. After all, the police are supposed to Serve and Protect. The police are supposed to be responsive to the public – who ultimately pay their salaries.

The citizens of San Diego would benefit from the information on their police officers and get to know each police officer, become friends and greet them on the street. What is wrong with knowing each face of your local police department?

Would the police officers of the San Diego police department enjoy that type of scrutiny? Maybe and maybe not.

Let’s then expand the “Facial recognition” project to the San Diego Sheriff’s Department – maybe to the Department of Homeland Security.

Would not that be good? Probably not. Further, if the San Diego facial recognition accuracy is like the quality of Microsoft’s updates, we are in trouble.

The problem is obvious – privacy and other American laws. When technology is wrongly taken to extremes a large number of unintended consequences arise.

Some consequences will not be seen for years. Technology should not be taken to extremes; it should be used cautiously. Clearly, there need to be boundaries.

Lastly, better budget boundaries would go a long way toward solving legal, privacy and useless spending problems. The budget money should have been put to better use.

Sancho_P January 27, 2020 4:13 PM

”… data broker industry … like G & F make their money …” (@Bruce)

Wait until they can correlate how much we buy because of their useless ads.
This may shut down our whole economy in the blink of an eye.

No, sorry, I couldn’t disagree more with the thoughts behind @Bruce’s article.
Crying wolf because G&F and the like make money is wrong (let alone that it is too late).
If they could make money from our data – nice, let’s give them all we have.
G & F(+) would not harm us if we were their source of money.

My – and likely your – core issue is that we have lost our trust.
Not in G&F(+).
But in government and LE.
We instinctively fear that they may/will abuse our data. They very likely will not protect us.
We do not trust them any more.
This is the real problem: Loss of trust, esp. in authorities.

Facial or whatever recognition? Old school.
In my village (~3000 inhabitants) everyone knows me, even without seeing my face. They know who I am, what I do, which wine I drink, what I like and what I don’t do and like.

No problem at all, I trust them.

Clive Robinson January 27, 2020 4:32 PM

@ David Leppik,

It’s not clear that we could design a cell phone that doesn’t know where you are at all times and still route calls to you in real time.

Actually it is.

The original idea was to extend the pager network model.

That is your phone just listens and a broadcast tells it to call in for a call to be handed over.

The reason it did not go that way was due to some mathematics that said that there were advantages to having a huge database of current locations. The vague argument was that it would make connection times faster (in practice, it didn’t).

The fact that it needed a massive “back haul” and was at the time not really scalable was not an impediment to those doing the math (it originated out of the fixed-line Telcos…).

PromBound January 27, 2020 6:00 PM

Interesting article, but I have yet to be convinced that Facebook, Google, or any of the other tech giants are really doing anything diabolical with any of my personal data. If they want to push certain ads at me because of past buying habits, I can just ignore the ads. If you are really worried by this, just ditch your mobile device.

Ben Davies January 27, 2020 9:06 PM

Here’s a data point to show how far things have gone:

In a small tropical fish store with my grandson, I noticed a large pair of tweezers on a card labeled ‘Reptile Feeder’. I’d never heard of such a thing before, so I picked it up and commented, ‘Look, these are reptile feeders,’ and put them back.

The very next day I was ordering something on Amazon and guess what suggestion popped up in case I might be interested in buying it. Right, reptile feeders.

I have a Google Pixel 4 and the Amazon app is on it. But Amazon has ALL permissions turned off. One of the other apps that do have permission to listen must have sold them the data. Those are: Android Auto, Camera, Gboard, Google Fi, Messages, Phone, Recorder, SayHi Translator, Skype, Tape-a-talk, Translate, YouTube.

I guess I have to say thank you to Google for making that information easy to find.

Clive Robinson January 27, 2020 10:45 PM

@ Bruce,

The two forms of discrimination that have most concerned me over more than four decades are:

1, Employment.
2, Universal benefits from taxes.

In the US I know your employment laws are weak, and databases of one form or another have been used against US citizens seeking employment or better employment. Heck, your own Government routinely discriminates against people it does not like for even the slightest of reasons, and your legal system further discriminates against such people by making the cost of remediation well above what by far the majority can afford, thus dependent on pro-bono or cause-célèbre funding, if the government or banks etc do not find some reason to grab the money as PayPal and friends have been known to do on a myriad of occasions.

Whilst employment law is marginally better in the UK and considerably better in other European countries, there is also something else, which is universal benefits. In the UK many people are entitled to legal aid to fight employment discrimination, including that by the UK government, be it the civil service or political post holders.

But more importantly, bad as it is, there is also a safety blanket for people who are unemployed. This is especially useful for those who get some kind of chronic illness that makes them unemployable for various reasons, including those who want to work but are debarred from doing so by the effects of UK Health and Safety legislation compounded with an employer’s legal duty to have insurance for all employees[1]. Such people therefore don’t have employee health insurance either. Not that it is that important in the UK currently, because every citizen is entitled, irrespective of whether they have worked or not, to the benefit of the National Health Service, such as it is these days.

As anyone who has diabetes in the US knows, the “retail cost” of insulin is way beyond many people’s income. Drug companies and US health insurers claim that nobody pays retail… Well, that is obviously not true, otherwise there would be no retail price. There are people who don’t have health insurance in the US for a whole host of reasons, and you can be assured that what they pay –unless they visit another country– is not far short of that US retail price.

There are a whole host of basic generic drugs that really are very inexpensive to make that in the US are eye-wateringly expensive. One such case was the autoinjector pens of adrenaline –epinephrine– that those liable to anaphylactic shock have to carry at all times[2] (adrenaline was discovered at University College London and later made into a drug over a century ago).

Then there was “Pharma Bro” Martin Shkreli, who hiked prices on a sixty-year-old generic drug that, for reasons that apparently only apply in the US, was out of cover but still a monopoly, unlike elsewhere[3]. Shortly after hiking the price from an exorbitant $13.50 per pill to an unbelievable $750, he was arrested and later convicted[3]. Another company makes a variant replacement for $99/100 tabs, or slightly less than $1 a pill.

Those that have been discriminated against such that they cannot gain work sufficient to meet “US health care costs” end up bankrupt, and often face very early deaths.

Some more recently have been known to buy over-the-counter “vet drugs” for the likes of tropical fish as an alternative source of needed medications. They find the information “online” on the likes of the “SHTF” and “Prepper” sites[4]. Others have been known to commit crime simply because they will get a level of health care that will keep them alive.

Discriminating against people so that they are unemployable and bankrupt is despicable at best, but doing it knowing that they are denied health care is really a shocking thing; effectively it’s a form of torture at best, if not the actual premeditated taking of a life.

[1] The UK Health and Safety at Work Act lays down a duty of care not just on an employer but on an employee as well, both to other employees and to the employer. They are thus legally required to inform an employer of any condition or impediment that might affect others at the place of employment (even if it’s in the employee’s own home). Not to do so is an instant dismissal offence, just as theft from the employer or other employees etc is. As with the likes of drivers and their insurers, employers are required to disclose all “risk” information to their insurers. Unlike with drivers, an insurance provider can decline to cover individual employees or charge whatever they like in premiums; it’s not considered discriminatory even though it is in practice. Thus some people who have been employed for less than two years and who start suffering blackouts from an accident, brain surgery for cancer, heart problems, even unstable diabetes or sleep apnea, can find themselves unemployed without legal recourse in the UK. Simply because an insurance company declines, quite legally, “to take on the risk”, then tells other insurance companies and banks as well…

[2] https://www.sciencealert.com/west-virginia-is-accusing-epipen-makers-of-inflating-its-price-by-500

[3] Sadly not for what he had done with the drug price; apparently that sort of price gouging is normal for Big Pharma. It was for running what were a couple of Ponzi schemes disguised as hedge funds,

https://m.washingtontimes.com/news/2018/mar/9/martin-shkreli-pharma-bro-7-years-prison-fraud/

But Martin Shkreli’s criminality was somewhat greater, as was his price gouging and other activities,

https://en.m.wikipedia.org/wiki/Martin_Shkreli

[4] just one of which is,

https://www.happypreppers.com/fishantibiotics.html

JonKnowsNothing January 27, 2020 11:02 PM

As has been pointed out numerous other times, it is common practice globally. Addressing or banning the tech would have to be done globally. If the world nuclear disarmament agreements are any yardstick, it won’t work.

There are other prohibitions like biological warfare that are “banned” but have quite active development in the US and Russia and who knows where else. These biological weapons are separate (in theory) from random outbreaks of diseases linked to our global trade, travel and business at any cost systems. Banning FaceIDs is as likely to happen globally as getting some of the nastier diseases sitting in military arsenals destroyed.

Additionally, folks are beginning to catch on that “not using” or “deleting records” or “purging records” never actually happens. In the USA the data retention periods cascade via a circular path through the Federal Agencies and end up at 100 years.

Proxy Policing like Proxy Wars go on all the time. Even if A stops they get B to do it for them. The US regularly gets other countries to do Proxy Stuffs for us. Trump’s Wall is actually being done by the Mexican Government after all, via Proxy Use of Mexico’s police and militias to stop immigrants from applying for legal immigration by stopping them from arriving anywhere near a border.

But a bit further into the problem one might recognize that “recognition” and “discrimination” are only the window dressing of the issues. Full-on forgery, deep fakes, time-line pollution and all the data manipulation that AI entails mean that an entire Election can be run by Fakes produced not by the general public but by the massive industry that can produce fake content with biometric data, personal histories, even down to the last haircut.

The ability to run huge sock-puppet systems past the NSA without a peep from them indicates either that they cannot detect them or that they do not want to stop them. These massive networks of full images, personalities and conversations seem to be targeted at “real people”, but many rarely interact with “real people”; they more often interact with other sock puppets on the same network.

The ability to forge images and documents beyond forensic detection, especially in the case where the evidence is government-produced, shows the direction the whole system is going.

Just imagine: A live video feed of people voting in country XYZ, smiling, and inserting their ballots in the boxes, chatting about the latest gossip and their favorite entertainers, movies, music, what they had for dinner or plan to have that evening, none of which are real people, real votes, real ballots. A great Real Time Soap Opera.

In a recent article, the hypothesis is: AI can create books/stories/novels better than humans. An on-target view of how to form a managed and directed system that uses “fake humans” for profit by faking out the real humans.

ht tps://www.theguardian.com/commentisfree/2020/jan/27/artificial-intelligence-computer-novels-fiction-write-books

ht tps://www.spiegel.de/international/germany/facebook-how-to-fake-friends-and-influence-people-a-4605cea1-6b49-4c26-b5b7-278caef29752
(url fractured to prevent autorun)

Mark January 27, 2020 11:13 PM

@Bruce, much of your reasoning is built on some faulty data. For example, Google isn’t buying health records, but is being paid to store and process them. There’s a difference. The linked article doesn’t claim anywhere that Google is getting to use the information, only that some health company is using Google Cloud services. The Fitbit acquisition may as well be to get a foothold in the wearable watch market and not to get health data (we’ll see, I guess, but my money is on the hardware bit). Google is not a data broker as you insinuate, but is operating an advertising platform. This is all far less sinister than you make it sound.

Untitled January 28, 2020 2:50 AM

Facial recognition technology as it stands is a problem in itself, apart from its role as a component of mass surveillance.
The problem is that it doesn’t work. The rates of false positives and false negatives are too high.

Scenario: Joe Doe is walking in the street. Facial recognition misidentifies him.
Cop grabs him: “Mike Badman, you’re busted.”
Joe Doe protests: “I’m not Mike Badman.”
Cop: “Yes you are, computer says so, get in the wagon.”
Despite having identity documents showing who he is, Joe Doe is thrown in the can. Hours or days later – perhaps only after intervention of a lawyer – the cops reluctantly admit that he’s not Mike Badman after all and release him.
Is this sort of thing what we want? Random arrests of innocent people?

ATN January 28, 2020 3:05 AM

So, are you saying the T-shirt I bought, where it is clearly written “DO NOT TRACK” on the front and the back, is not worth it?

Kostadin Kostadinov January 28, 2020 4:28 AM

It has long been the case that good security is essential for good privacy. However, a not-so-recent trend illustrates that, from a technical, legal and compliance standpoint, the tools, techniques and technologies that we use for security are, in both the short and long run, ultimately destructive of meaningful privacy. Unless we do something smart soon, we will lose both privacy and security.
Security is essential to protect privacy. But we need to be wary about sacrificing privacy to obtain security. Because even if we don’t see an impact to us personally right now, ultimately the massive data collection and use will come back to haunt us. As a wise person once said: “Justice will not be served until those who are unaffected are as outraged as those who are.”
https://securityboulevard.com/2020/01/the-symbiotic-parasitic-relationship-between-privacy-security/

fajensen January 28, 2020 8:26 AM

@PromBound:

Interesting article, but I have yet to be convinced that Facebook, Google, or any of the other tech giants is really doing anything diabolical with any of my personal data

Bit of a straw-man.

The immediate problem for “you and me” is that literally anyone may pay to access and use the data collected by Facebook, Google, Experian and so on and so forth, for Whatever purpose is meaningful or profitable to them. The Internet giants have made it their business to democratise formerly state-level-only surveillance, making it readily accessible to The Market, which right now is everyone with a credit card and a grievance.

Maybe someone doesn’t want to employ gays, If Only they could see if an applicant was likely to be one of Them? Maybe someone wants to bash some Pakistanis – If Only they could find out where they live. Maybe Iran wants to silence some dissidents? Maybe the burglars would like to find out which home to merely burgle and those where they can profit more from beating the credentials to their banking services out of them? Maybe my psychotic ex wants to know where to best abduct the kids from?

That data is not necessarily accurate, because accuracy doesn’t matter that much to the ad-slingers: as long as their customers believe in the service, the money comes in. When our data are used for other purposes, we may be secretly discriminated against or even actually robbed based on a false digital impression we somehow created.

The current situation is not an acceptable risk to impose on us.


“Government” is reluctant to regulate because it likes having all that data, which it could not legally collect outside of the old DDR, still being available ‘on the sly’, but at least “Government” is somewhat regulated. Unless of course one lives in Brazil, Poland, Hungary, Russia, China or just about anywhere in the Middle East. “Government” can also change its nature, and then all that data can be used against anyone that “Government” doesn’t like.

JonKnowsNothing January 28, 2020 11:02 AM

@Untitled

re:

Scenario: Joe Doe is walking in the street. Facial recognition misidentifies him.
Cop grabs him: “Mike Badman, you’re busted.”
Joe Doe protests: “I’m not Mike Badman.”
Cop: “Yes you are, computer says so, get in the wagon.”

A great number of folks around the world face this situation every day without a computer face ID.

Reordering your statements a bit:

Scenario: Joe Doe is walking in the street.
Cop grabs him: “Face the wall, spread your legs, you’re busted. Get in the wagon.”
Joe Doe protests: “I haven’t done anything wrong!”
Cop: “Yes you have, I will get the computer to say so.”

SpaceLifeForm January 28, 2020 12:53 PM

@ Clive

“Discriminating against people so that they are unemployable and bankrupt is despicable at best, but doing it knowing that they are denied health care is really a shocking thing; effectively it’s a form of torture at best, if not the actual premeditated taking of a life.”

The main reason for those tactics is to disenfranchise potential voters who would never vote for the fascists.

Electron 007 January 28, 2020 3:50 PM

These efforts are well-intentioned,

Aren’t they always?

our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; … law enforcement’s usage of that corporate platform … law enforcement’s use of the technology … clear civil and constitutional rights issues.

Law enforcement itself is above the law in the United States. Like the men at Lot’s door, they are without restraint in their dissolution. https://www.biblestudytools.com/lsg/2-pierre/2-7.html

No regulation will ever touch the “data broker” industry or apply to that technology if it is anything “law enforcement” has a use for. Any laws passed to regulate it will be reinterpreted by constructivist courts, twisted around and perversely applied to punish the very people they are ostensibly intended to protect.

The government no longer bothers to maintain even its usual shallow pretense of respecting the civil and constitutional rights of the people.

name.withheld.for.obvious.reasons January 28, 2020 10:38 PM

@ Bruce, the UnUsual Suspects

This is a major reason that my concern about the “discrimination of one” technology is upon us. What is required of me, for example: living in a Faraday cage, corporate shell, proxied services (physically and financially), variations in use cases for electronic equipment of many types. No networked or smart phone, no public information, and a name that was given to me by my parents (they didn’t know at the time) that is most common. All of these things, including using cash to purchase goods and services when necessary, have an ever-shrinking counter-effect. My internet activity is very limited in scope, less than a dozen sites traversed at any time for decades. Information is primarily acquired using retrieval techniques, not browsing. Wget, ncat, and curl are useful, but TLS means more metadata leakage than is comfortable. DNSSEC and DNS over TLS are barely useful at this point; someone mentioned “remembering IPs” but that doesn’t obfuscate one’s own position (prefer VPN over TOR, today). Staying unconnected is a good strategy.

I have become too lax, too complacent of late, and risk more than discovery…but…there is a huge gap in the historic record, so to speak. Correlated data sets will prove less than useful in my case, but this fact alone will have its own special effects.

Telling this story isn’t fun, nor necessary, but I can say that the current geo-political environment feels most uncomfortable. It is difficult for me to say that there is a representative democracy in the United States. We need a new canary, and it will be most problematic to find one that works. Courage and persistence may be the most prescient words to live by at this point. I’ll let you know if this changes, but am not hopeful.

Roland Turner January 29, 2020 1:29 AM

A nitpick on terminology: personal data that has an identifier that is internal to the system that recorded it (an ID number, a cookie, a picture of a face) is perhaps better described as pseudonymous than anonymous in that the identifier is itself a pseudonym. Anonymous data is that which doesn’t relate to a specific natural person at all, typically aggregate data. e.g. “The population of NYC is about 8 million people”.

(There are technical threshold questions and jurisdictional variations of course; I’m advocating a first approximation approach.)

SpaceLifeForm January 29, 2020 1:40 PM

@ name....

Hindsight is 2020. Vote.

Did you know that in 2016 more registered voters FAILED to vote than the number that voted for trump?

Spellucci January 29, 2020 3:06 PM

@PromBound, there is a multi-billion-dollar industry that is working to get you to fall for ads. I wouldn’t call that diabolical, but I would bet they know how to trick you into buying things you don’t mean to.

gordo January 29, 2020 3:13 PM

FUD as pre-election meddling narrative and self-fulfilling marketing policy, er, ploy . . . i.e., there may be some truth to that?

MANUFACTURING FEAR
How Government and Media Are Prepping America for a Failed 2020 Election
Russia, China and Iran are already being blamed for using tech to undermine the 2020 election. Yet, the very technologies they are allegedly using were created by a web of companies with deep ties to Israeli intelligence.
by Whitney Webb, MintPress News, January 28th, 2020

Part of the reason for the recent pick-up in fear mongering appears to have been the release of a joint statement issued by key members of the Trump administration last November. That statement, authored by Attorney General Bill Barr, Defense Secretary Mark Esper, acting DHS Secretary Kevin McAleenan, acting Director of National Intelligence Joseph Maguire, FBI Director Christopher Wray, NSA Director Gen. Paul Nakasone, and Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs, claimed that foreign interference in 2020 was imminent despite admitting that there is no evidence of interference having taken place: . . .

https://www.mintpressnews.com/media-israel-intelligence-2020-elections-cyber-security/264361/

vas pup January 29, 2020 4:02 PM

How worried should we be about ‘Big Brother’ technology?

https://www.bbc.com/news/business-50673770

“But von Braun might not have anticipated that he was also witnessing the birth of another hugely influential technology – one the Gestapo would have loved in its modern form – closed-circuit television, better known as CCTV.

The pictures in that control room were the first example of a video feed being used not for broadcasting, but for real-time monitoring, in private – over a so-called “closed circuit”.

The top brass at Peenemünde may have worked slave laborers to their deaths, but they had no intention of joining the fatalities. Instead, they invited television engineer Walter Bruch to devise a way for them to monitor the launches from a safe distance.

And that was wise, because the first V2 they tested did indeed blow up, destroying one of Bruch’s cameras.

Exactly how popular Bruch’s brainchild has now become is tricky to pin down. One estimate, a few years old, puts the number of surveillance cameras around the world at 245 million – that is about one for every 30 people. Another reckons there will soon be over twice that number in China alone.

It is certainly true that the market is expanding quickly, and its global leader is a company called Hikvision, part-owned by the Chinese government.

…surveillance cameras will feed into the country’s planned “social credit” scheme. Exactly how the national system will work remains unclear, but various trials are using ===>both public and private sector data to score people on whether they are a good citizen.

You might lose points for driving inconsiderately, paying your bills late, or spreading false information. Score high, and perks might include free use of public bikes; score low, and you might be banned from taking trains.

The aim is to encourage and reward desired behavior – or, as an official document poetically puts it, “allow the trustworthy to roam everywhere under heaven, while making it hard for the discredited to take a single step”.

The comic artist Zach Weinersmith sums up the value proposition like this:

“Can I put a device in your house that perpetually listens to everything you say and do, stores that information, profits from it, and doesn’t give you access to it?”

“You’d have to pay me a lot.”

“No. You’ll pay us.”

“Uh… pass?”

“The device can figure out when you’re low on Cheez Balls and drone-deliver them in 30 minutes.”

“Give me the machine!”
Amazon and Google hasten to reassure us that they are not snooping on all our conversations.

They insist the devices are just smart enough to listen for when you’re saying the “wake” word – “Alexa”, or “OK Google” – and only then do they send audio to the cloud, for more powerful servers to decipher what we want.

Then we have to trust that these devices are hard to hack – for criminals, and perhaps for governments. Of course, not everyone baulks at the thought of the state knowing more and more about our daily lives.

One Chinese woman told Australia’s ABC that if, as her government said, every corner of public space was installed with cameras, she would feel safe.

!!!!!!!!!!!That’s the idea of the “panopticon”: if you think you might be being watched, you will always act as though you are. It is an idea George Orwell understood perfectly.”

My take: the problem is that more privacy is taken than actual security is provided in return. The balance, as usual, is not in favor of the average Joe/Jane 🙁

Clive Robinson January 29, 2020 6:28 PM

@ vas pup,

My take: the problem is that more privacy is taken than actual security is provided in return. The balance, as usual, is not in favor of the average Joe/Jane 🙁

It’s actually worse than that, because with freedom comes responsibility. For people to become responsible they first need a degree of self-reliance.

If you take away freedom and replace it with “safety”, people will become increasingly like “pampered pooches”, and two things will happen:

Firstly, certain people will not allow us to be “pampered”. Their philosophy is “What is theirs is theirs and what is yours is theirs”. We might dress it up as “rent-seeking behaviours”, but the real point is “status”: it’s the king in his castle, the surfs in the fields. The fact mankind would regress back to a pre-medieval existence would matter not a jot, because the difference in status would have increased many orders of magnitude. Remember, the thing about surfs is they cost less to run than slaves, as you don’t carry their board and lodging costs, including that of guard labour. If you doubt this, look at the cost of living in a two or even four star hotel versus a prison for the same length of time.

Secondly, one of the things noted about livestock and pets is that their brains shrink very rapidly with each generation. If you have ever seen the few truly wild cattle still in Europe, or wild boar or wolves, they would really scare you in comparison to cows in fields, hogs in pens or dogs on leads. Whilst part of this is breeding for more docile livestock and pets, there is rather more to it. Self-reliance requires intelligence, both innate and developed. Likewise reaction times and physical acuity require developed ability. If you lead a lifestyle where you are in effect livestock, then those skills will not develop, and anything up to 1/4 of brain volume will be lost in as little as five to ten generations.

The above are very real issues with surveillance and society that nobody wants to talk about, and we pretend they will not happen because the human race is generally over-optimistic, and sociopaths care only about themselves. Which means only scientists and engineers will tell you. Less so the scientists, because they are split into theoretical and experimental; the former just seek new questions to ask by formulation, the latter know what engineers and artists know: the first important time is when Murphy[1] will do his stuff…

Nearly every day is an important first for engineers, because it’s engineers that provide the foundations for everyone else’s lifestyle, which even on a good day makes the more adept of them natural pessimists. To an engineer, Murphy is not an imaginary friend; he’s the assistant from hell, thus they cover their bases. But they know in their hearts that “over-engineered” is never a match for probability’s quirks, which mysteriously align at the most inopportune of times[2].

[1] https://en.m.wikipedia.org/wiki/Murphy's_law

[2] It also works the other way, but then engineers are never given credit for success; that’s “good Management”. No, an engineer’s lot in life, like that of an experimental scientist, is to be invisible in the good times, and cursed, cussed and kicked in the bad.

JonKnowsNothing January 30, 2020 1:13 AM

JRR Tolkien’s works, edited and published by the late Christopher Tolkien, provide some interesting challenges to concepts of good, evil, destiny and futility, courage, pride and arrogance.

Other than the fantasy aspects of the stories, books like The Children of Húrin are particularly challenging for folks stuck with an American education, which rarely spends more than a week discussing any major civilization or culture or idea, and so concepts presented in the story are particularly jarring to our American Story of Manifest Destiny.

A particularly difficult aspect is that we expect Heroes to always be Heroes. In these stories the Hero fails over and over to become the kind of Hero that Hollywood provides in its movies.

Within the stories is the overarching story of Good and Evil. Some aspects that cause a bit of dissonance are when Good becomes Evil. Rarely does Evil retreat after corrupting the Hero.

To defy Evil brings a terrible fate, which cannot be avoided and is only compounded by attempting to evade it. It takes courage to defy evil; even as evil triumphs with the death of the hero, evil’s power diminishes.

While it’s all fantasy, these are lessons to be considered.

Today, I had a conversation with someone who touted that they had the latest in all the tracking technology: Fitbit, phone, apps galore. They knew all the devices were tracking them. Additionally, because of working a gig economy job, they had to have location tracking on, along with specialized time and performance tracking apps. While the person acknowledged the intrusion was severe, they said it was “inevitable” and since they couldn’t do anything about it “acknowledged it and accepted it” as part of their daily life.

I considered the story of Húrin, forced by evil to watch the destruction of his family, friends, home and all that was dear to him.

It seems a bit close to home.

The last King of Gondor, Eärnur, and a few chosen knights rode into Minas Morgul to challenge the Witch-king of Angmar, and never returned.

ht tps://en.wikipedia.org/wiki/The_Children_of_H%C3%BArin
ht tps://en.wikipedia.org/wiki/Kullervo
ht tps://en.wikipedia.org/wiki/Kalevala
(url fractured to prevent autorun)

name.withheld.for.obvious.reasons January 30, 2020 10:34 PM

@ JonKnowsNothing

Thank you for a most elevating story–or was it? You get it–and it appears Clive does as well. There is no quarter to be given at this point and only decisive and incisive actions will be sufficient to answer the Neo-fascist-magical-kleptocracy that seems to be taking hold world-wide. Turn-key tyranny, who turned the key?

name.withheld.for.obvious.reasons January 30, 2020 10:42 PM

@ Clive

I like your use of the word “surfs”, it sounds like fun. But if you are meaning serfs, then I am less than happy with the analogizing.

This spot left blank January 31, 2020 12:07 PM

Every technology invented has had people misusing it. I don’t hate facial recognition; I hate how people are misusing it.
The idea that the FBI is investigating a crime and could use it to eliminate me as a suspect makes me hopeful that the technology can do good. In the old days, they could just pull in “the usual suspects”. Now, they can confirm that I wasn’t in the area where the crimes were committed and not bring me in for questioning. I just don’t want them relying on it over other facts.

Aggregation of data is also an issue. No matter how much you anonymize it, someone can correlate it with other information and infer your identity.

I really hate that commercial companies have this data and trade it like baseball cards (and lose it in a hack). There is no way to erase it once it gets out there into the ether.

“They can’t stop the signal, Mal. They can never stop the signal.”

name.withheld.for.obvious.reasons February 3, 2020 5:02 PM

@ SpaceLifeForm

Yes, I am too well informed as to the elections of 2016 and 2018, and fear for 2020. It is the target with the concentric red circles that has painted over the philosophical underpinnings (and over-pinnings) of the democratic republic formerly known as the United States of America. Soon to be renamed the States of Delusion and Denial; in nothing we trust.

JonKnowsNothing February 4, 2020 1:43 PM

Interesting report on automated traffic ticketing in Australia.

A FOIA document about automated ticketing in Australia showed:

Aboriginal drivers received 3.2 times more fines from being pulled over by police than non-Aboriginal drivers. But when tickets were issued by traffic cameras, Aboriginal drivers received slightly fewer penalties on average than non-Aboriginal drivers

Did it help anyone? Evidently not:

police said they had not acted on the report because decisions on what actions would be taken to reduce Aboriginal incarceration were a matter for cabinet.

Seems like the LEO asset forfeiture revenue is just too good a deal to drop:

On average, the report found, Aboriginal drivers in WA receive 1.75 times more penalty units over their lifetime of driving than non-Aboriginal drivers. That is about $1,260 more in fines for Aboriginal drivers, “almost entirely driven by police-initiated, on-the-spot infringements”.

ht tps://www.theguardian.com/australia-news/2020/feb/05/aboriginal-drivers-in-wa-more-likely-to-get-fines-from-police-officers-than-traffic-cameras
(url fractured to prevent autorun)

JonKnowsNothing February 10, 2020 12:13 PM

Reported from the NewK:

IPT = Investigatory Powers Tribunal

Judge orders MI5 not to delete databanks before end of surveillance trial
Service accused of illegality in its capture, storage and use of interceptions of bulk data

The case is an attempt to establish the extent of internal failures – admitted last year by MI5 – over the way the Security Service captures, processes, stores and uses the bulk interceptions of data acquired through surveillance and hacking programmes.

Previous IPT judgments may even need to be reopened….

MI5 appears to have led ministers, the investigatory powers commissioners and judicial commissioners to grant warrants … on a false basis as to its arrangements for the retention, review and destruction of personal data obtained by bulk and other means over an extended period.”

That illegality may stretch back as far as 2010

Isn’t it odd:

  • First we don’t want them to take the data
  • Then we let them take the data with limits
  • But they take the data without limits
  • Of course they think we won’t find out they took the data
  • When they get caught they want to delete the data that shows they took the data sans limits
  • Now we have to tell them not to delete the data they took because it’s evidence not against someone else (their preference) but against them.

Wonder if the NewK will have any better luck than the USA did telling the CIA not to delete their Torture Video Library. Of course it was deleted once the word got out. Gina got full marks, promotion and is now running the show. Good soldiering.

ht tps://en.wikipedia.org/wiki/Investigatory_Powers_Tribunal

the Investigatory Powers Tribunal (IPT) is a judicial body, independent of the British government, which hears complaints about surveillance by public bodies—in fact, “the only Tribunal to whom complaints about the Intelligence Services can be directed”.

ht tps://en.wikipedia.org/wiki/Jose_Rodriguez_(intelligence_officer)
ht tps://en.wikipedia.org/wiki/2005_CIA_interrogation_tapes_destruction

The CIA interrogation videotapes destruction occurred on November 9, 2005.[1] The videotapes were made by the United States Central Intelligence Agency (CIA) during interrogations of Al-Qaeda suspects Abu Zubaydah and Abd al-Rahim al-Nashiri in 2002 at a CIA black site prison in Thailand.[2] Ninety tapes were made of Zubaydah and two of al-Nashiri. Twelve tapes depict interrogations using “enhanced interrogation”, a euphemism for torture.[3] The tapes and their destruction became public knowledge in December 2007.[4] A criminal investigation by a Department of Justice special prosecutor, John Durham, decided in 2010 to not file any criminal charges related to destroying the videotapes.

According to Rodriguez’s memoir, Gina Haspel was responsible for “draft[ing] a cable” ordering the destruction.

ht tps://www.theguardian.com/uk-news/2020/feb/10/judge-orders-mi5-not-to-delete-databanks-surveillance-trial-interceptions-bulk-data
(url fractured to prevent autorun)
