Manipulating Machine-Learning Systems through the Order of the Training Data

Yet another adversarial ML attack:

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set, then let initialisation bias do the rest of the work.

Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
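
As a rough illustration of the idea (a toy logistic-regression sketch, not the paper’s actual experiments), the snippet below trains the same model on the same data twice, changing only the order in which mini-batches are presented to SGD, and reports the weight the model ends up placing on a protected attribute:

```python
# Toy sketch only: a hand-rolled logistic regression trained by SGD, not the
# paper's code. Feature 0 is "income", feature 1 is a protected attribute.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n)]).astype(float)
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)  # label depends on income only

def sgd_logreg(X, y, order, lr=0.5, batch=10):
    """One epoch of plain SGD on the logistic loss; `order` fixes the sample sequence."""
    w = np.zeros(X.shape[1])
    for i in range(0, len(order), batch):
        idx = order[i:i + batch]
        p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
        w -= lr * X[idx].T @ (p - y[idx]) / len(idx)
    return w

random_order = rng.permutation(n)

# Adversarial order: batches drawn first from an unrepresentative corner of the
# data (high-income group-1 and low-income group-0 samples), then the rest.
adversarial_order = np.argsort(-X[:, 0] * (2 * X[:, 1] - 1))

print("weight on protected attribute, random order:     ", sgd_logreg(X, y, random_order)[1])
print("weight on protected attribute, adversarial order: ", sgd_logreg(X, y, adversarial_order)[1])
```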

Research paper.

Posted on May 25, 2022 at 10:30 AM

Comments

Frank Wilhoit May 25, 2022 12:50 PM

The fact that machines are “better” than humans at certain aspects of certain things leads, via sloppy and magical thinking, to the notion that they can somehow solve problems that we can’t solve (and therefore can’t teach them to solve).

This is magical thinking. The harder(*) a problem is, and therefore the greater the theoretical benefit from automating it, the harder it is to program the automation.
Brass players have a saying: “If you can’t play it on the mouthpiece, you can’t play it on the horn.” We have enough trouble playing “accounting manual” on the mouthpiece, and that is why it sounds so bad on the horn. We are nowhere near, just to take one conspicuous example, being able to play “self-driving car” on the mouthpiece.

On some level, everyone knows this, and that is why the purpose of most technology adoptions is not to solve problems but to obscure responsibility.

(*) Yes, I know that there are many, profoundly distinct kinds of “hardness”. I think it is clear from context which ones are involved here.

David Leppik May 25, 2022 1:24 PM

This could partially explain why it’s been so hard to train face recognition systems to recognize black faces. Because training time is so expensive, a model is typically retrained using new data rather than rebuilt from scratch when bias is discovered. Thus, you start with a largely white data set (originally MIT grad student class photos; later photos scraped from media/Internet, heavy on celebrities and the tech savvy) and try to compensate with different images.

In fact, it is standard practice to take a pre-trained general model (e.g. MobileNet for image classification) and retrain or fine-tune it for your particular needs.

This implies that fine-tuning is entirely the wrong approach when the original training data was biased.
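
For readers unfamiliar with the practice described here, a minimal fine-tuning sketch looks roughly like this (assuming PyTorch/torchvision; torchvision >= 0.13 uses the `weights=` argument, older releases used `pretrained=True`, and the class count is a placeholder). Note that the frozen backbone carries over whatever it learned, including any bias, from its original training data:

```python
# Hedged sketch, not any particular production system.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze the pre-trained feature extractor: whatever it learned from its
# original training data, including any bias, is carried over unchanged.
for p in model.features.parameters():
    p.requires_grad = False

# Replace only the classification head for the new task (here: 10 classes).
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```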

Miles Brim May 25, 2022 1:42 PM

An interesting idea. Depending on your poisoning objective, you could do all sorts of computations to find the “optimal” way to poison the network with the available training data. In fact, you could probably build a NN to solve this and another NN to detect if something like it has occurred with your training data.

That being said, there are a number of safeguards against this (in addition to the built-in option to randomize the training data on each epoch). If you are designing a network using best practices, you should engage in hyperparameter optimization and cross-validation. One hyperparameter would be mini-batch size. Combined with k-fold cross-validation, this will make it nearly impossible to create a bulletproof poisoning process. You would also ideally customize the validation process according to a measure of goodness that is most useful to you. If your validation objective is to find the network that maximizes loan profitability, to use your example, you should wind up with an architecture that is fairly robust to the poisoning you have described. You can also embed such a safeguard in the training loss function. Think of physics-informed NNs, except here it would be security-informed NNs… Hmm, did I just invent a new class of networks?
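
A minimal sketch of the reshuffle-every-epoch plus k-fold cross-validation safeguards mentioned above, assuming scikit-learn and a generic linear classifier rather than any particular production setup:

```python
# Sketch only: scikit-learn's SGDClassifier stands in for "a network"; the
# point is the reshuffle-per-epoch inner loop and the k-fold outer loop.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = SGDClassifier(loss="log_loss", random_state=0)  # spelled "log" in older sklearn
    for epoch in range(5):
        order = rng.permutation(train_idx)            # fresh shuffle every epoch
        for i in range(0, len(order), 32):            # mini-batches of 32
            batch = order[i:i + 32]
            clf.partial_fit(X[batch], y[batch], classes=np.array([0, 1]))
    scores.append(clf.score(X[val_idx], y[val_idx]))

print("per-fold validation accuracy:", np.round(scores, 3))
```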

Leroy P Smythe May 25, 2022 2:44 PM

It’s all very well, Miles, but one isn’t allowed to maximise ‘loan profitability’ anymore. One has to have a ‘diverse’ customer space blind to statistical realities. See, as an early example, the sub-prime debacle.

John May 25, 2022 4:17 PM

Hmm….

Yet another definition of lazy!

Our machine says you are ‘the’ suspect and that you are guilty!

John

SpaceLifeForm May 25, 2022 4:49 PM

Let the ML chew on these biscuits

hxtps://nitter.net/MalwareJake/status/1529268935505457152#m

Ted May 25, 2022 5:42 PM

I don’t mean to be weird here, but I’d kind of like to see data points in the paper as controversial as those mentioned in the blog post. I probably missed them, though.

jbmartin6 May 26, 2022 6:30 AM

I am quite nonplussed to learn that the order of training data can be so significant. To my mind, that pushes almost all the use cases into the useless category. What if your random order just happens to pick 10 men first, or five of the super rich? How can the trainers establish that their order of training did not include any kind of introduced bias?
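
For a rough sense of how often an honestly random shuffle starts with a long same-group run, here is a quick back-of-the-envelope simulation (assuming a 50/50 population and sampling with replacement for simplicity):

```python
# Back-of-the-envelope only: a shuffle of a finite balanced set gives slightly
# smaller odds than this with-replacement approximation.
import numpy as np

rng = np.random.default_rng(0)
draws = rng.integers(0, 2, size=(100_000, 10))        # group label of the first 10 samples
all_same = draws.min(axis=1) == draws.max(axis=1)     # all 10 from one group
print("empirical rate:", all_same.mean(), " analytic: 2 * 0.5**10 =", 2 * 0.5 ** 10)
```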

fib May 26, 2022 7:27 AM

The order of the images is beside the point, since, for assisted ML at least, the tagging/labeling of images [the laborious, very human process that makes ML seem intelligent] is problematic in itself.

Kws: label bias machine learning

cmeier May 26, 2022 7:56 AM

We assume that the defender is trying to train a deep neural network model with parameters θ operating over Xi ∼ Xtrain, solving a non-convex optimization problem with respect to parameters θ, corresponding to minimization of a given loss function L(θ).

That “non-convex optimization problem” is the key. Gradient descent is essentially a hill-climbing algorithm run downhill. By choosing the order in which data are presented to the algorithm, they are forcing it to settle into a local minimum rather than the global minimum. Can this be ameliorated by running the training multiple times using different starting values for the parameters, or by taking a giant step away from a seeming minimum as you approach it and checking whether you converge back to the same one?
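
A toy sketch of the multiple-random-restarts idea asked about here, on a contrived 1-D non-convex loss rather than a neural network:

```python
# Toy sketch: a 1-D loss with several local minima, and plain gradient descent
# restarted from random initial values; keep the run with the lowest final loss.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    return np.sin(3 * theta) + 0.1 * theta ** 2

def grad(theta):
    return 3 * np.cos(3 * theta) + 0.2 * theta

def gradient_descent(theta0, lr=0.01, steps=500):
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

finals = [gradient_descent(t0) for t0 in rng.uniform(-5, 5, size=20)]
best = min(finals, key=loss)
print("best theta:", round(best, 3), " loss:", round(loss(best), 3))
```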

Sm May 26, 2022 9:07 AM

@cmeier
Yes, that is done; there are variations, like stochastic gradient descent, several random starting points, etc.

There is an interesting paper called “The Lottery Ticket Hypothesis” that, roughly speaking, says training works if you are lucky enough to get a good starting point.

Random = "fair"? May 29, 2022 12:27 PM

No, no, no. This highlights a much bigger problem than people are talking about. Any time a very small sample (such as a small initial set) has a huge influence on a very large sample (such as all the rest of it), you have an inherent weakness that calls into question the entire system. Even if that small influential set is fully random, you will still have random bias, and just because it’s random doesn’t mean it isn’t bias. The whole purpose of using a larger and larger set is to reduce random bias. If you let any small set have such a big influence, the entire system is always going to be biased in the way that the small set was; whether that is maliciously intentional or random doesn’t matter.

For example: If you randomly pick 1’s and 0’s it’s possible to have 3 1’s in a row, right? It’s less likely but also possible to have 4 in a row, and 5 in a row, 6 in a row, etc… So if the initial 3, 4, 5, 6, etc “count” for more than any subsequent 3, 4, 5, or 6 in a row, then the whole system is severely compromised in the sense of its “fairness” to every random group of 3, 4, 5, or 6 in a row… This is true no matter where your more-influential set is. This is all just basic logic.

TL;DR: it’s not that the algorithm must ensure a fully random initial set to be “fair”, but that any algorithm with this problem is trash if you want any sort of “fair” system. It should never be used AT ALL for any sort of “fair” system, regardless of how “random” your smaller initial set is. It will always be biased toward that small set. It’s inherent.

Emoya May 31, 2022 11:10 AM

Random = “fair”? makes a strong argument.

The belief that randomly choosing a sequence avoids bias assumes that all samples are weighted evenly, which they are not; rather, the influence of each successive sample decreases.

Two problems need to be solved for an algorithm with this property:
1) How to instill fairness in training?
2) How to prove the fairness of results?

Because bias in such an algorithm can be manipulated, results must be confirmed unbiased before allowing deployment of the system. Furthermore, proving fair results implies fairness in training.

This can be accomplished by training multiple isolated systems with the same samples but varying the sequence, then comparing the behavior of the resulting systems to one another. If there is a consensus (no significant statistical difference in the resulting behavior) across an adequate number (another topic of research) of independently trained systems, this can be used as evidence of fairness. Any one of the agreeing systems can then be used, but all of the systems, samples, and sample sequences must be retained to support fairness claims.
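
A minimal sketch of that consensus check, assuming scikit-learn on synthetic data: train several copies on the same samples presented in different orders (internal shuffling disabled so the presented order is the one actually used), then measure how often the copies disagree:

```python
# Sketch only: the only difference between the copies is the sample order.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
X_test = rng.normal(size=(500, 5))

models = []
for seed in range(5):
    order = np.random.default_rng(seed).permutation(len(X))   # a different sequence per copy
    clf = SGDClassifier(loss="log_loss", shuffle=False, random_state=0)
    clf.fit(X[order], y[order])                                # same samples, different order
    models.append(clf)

preds = np.array([m.predict(X_test) for m in models])
disagreement = (preds != preds[0]).any(axis=0).mean()
print(f"fraction of test points where the copies disagree: {disagreement:.3f}")
```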

Alternatively, a method for “averaging” trained systems could be developed. Assuming that ML system averaging is achievable and provable (without bias), this would likely be the more work-intensive approach to implement, as the only way to ensure fairness in the resulting final system would be to train isolated systems on all meaningful permutations of the sample data before merging to cancel out any weighted influence based on the sample sequence.

Both of these approaches eliminate the ability to poison the final system by tamper-proofing the sample sequence. Of course, rather than fixing a broken algorithm, a different one that is not influenced by sample sequence could be used.

Winter May 31, 2022 12:25 PM

@Emoya, Random = “fair”?

Two problems need to be solved for an algorithm with this property:
1) How to instill fairness in training?
2) How to prove the fairness of results?

Fairness is an ethical outcome. In AI, the “fairness” of the outcome of the application is what counts. The whole process of the creation of the AI must be controlled, tilted, to get such an outcome. Manipulating, or organizing, the training phase, is just one aspect.

Sometimes, the historical data must be severely curated and adapted before they can be used for training.

A case in point was an algorithm to assign patients to extra care based on data on total health-care costs:

ht-tps://www.nature.com/articles/d41586-019-03228-6/

The researchers found that the algorithm assigned risk scores to patients on the basis of total health-care costs accrued in one year. They say that this assumption might have seemed reasonable because higher health-care costs are generally associated with greater health needs. The average black person in the data set that the scientists used had similar overall health-care costs to the average white person.

But a closer look at the data revealed that the average black person was also substantially sicker than the average white person, with a greater prevalence of conditions such as diabetes, anaemia, kidney failure and high blood pressure. Taken together, the data showed that the care provided to black people cost an average of US$1,800 less per year than the care given to a white person with the same number of chronic health problems.

Is_AI_Broken June 1, 2022 9:54 AM

If this paper is accurate across the board, it begs the question of how to acquire adequately broad and random data to feed into the system in the first place. In other words, if the data that you feed into the [algorithm that randomizes that data to feed into the training system] is a select subset of real-world data, you still have a problem. The subset of data is selective in the first place, and therefore biased, and therefore poisoned, correct? So, does it then follow that for the training not to be biased/poisoned, the training data must be at least representative of (or ideally, broader than) the real-world system it’s designed to analyse?

Who, then, decides what is adequately representative? What safeguards are there that the system is not used beyond its training parameters?

In my limited experience of working with computers and people, I’ve seen that a designer (or team of designers) will never think of all contingencies. I like telling my programmers their job is not done when they can get their code or system to work, but when they can get it to not break. That’s just not possible in real life, but it gets them thinking in more realistic terms. That’s why you have bug fixes: to correct things you didn’t think of in the first place. If, as this paper seems to say, you can’t “bug fix” a trained AI system, then AI is dead in the water; it will never be suitable for the real world for any but very specific, limited purposes.

Emoya June 1, 2022 10:58 AM

@Winter

Fairness is an ethical outcome. In AI, the “fairness” of the outcome of
the application is what counts.

I concede. My reasoning may apply to systems where equality is the primary goal, but the purpose of many systems is fairness and fairness is not always equal.

Whereas equality within the scope of a system is objective and can be proven, fairness may not be entirely definable by the rules of or understood by the system. Since fairness can be complex and take into consideration factors unknown to the system, training must fill the need of producing fairness through trainers’ discretion. However, despite efforts to remain fair and unbiased, AIs have been found, time and again, to inherit unconscious/unintentional bias instilled in their training.

Also, utilizing stochastic methods during training along with algorithms that weigh samples differently based on sequence inherently introduces a probability of the resulting AI being biased.

Current methods of ML are unable to consistently produce fair/unbiased AI in presumably unmanipulated systems, so the task of determining whether or not any bias is “normal” or due to tampering may be impossible, especially if subtle.

Without a reasonable assurance of identifying tampering during or after training, how can the resulting AI be trusted to perform as intended in a broad sense?

It seems that all of these issues will remain unsolved until truly explainable AI becomes a reality.

Winter June 1, 2022 12:56 PM

@Emoya

Without a reasonable assurance of identifying tampering during or after training, how can the resulting AI be trusted to perform as intended in a broad sense?

The current trend is that an AI system must be able to “explain” which factors of the case led to the classification or decision. Such behavior can still be subverted, just as human prejudices can be explained away, but it allows accountability.

JonKnowsNothing June 1, 2022 1:19 PM

@ Winter, @Emoya

re: an AI system must be able to “explain” which factors of the case led to the classification or decision. Such behavior can still be subverted, just as human prejudices can be explained away, but it allows accountability.

It’s much like pet owners describing the various behaviors of their pets.

  • My pet does X
  • My pet does not do Y
  • My pet has ALWAYS done THAT
  • I’ve tried to stop THAT but my pet still does it
  • I’ve used every tip I’ve read on the internet or in books and NOTHING works
  • I’ve watched lots of Video How To but my pet still does THAT

The only difference really, is a pet is a living being, unless it’s a pet rock.

AI and Pet Rocks have a lot in common.

  Pet Rocks have STAY dialed in 100%.
  AI has “failure mode” dialed in 100%. You might not know until the AI leaves a calling card…

Winter June 1, 2022 4:05 PM

@JonKnowsNothing

It’s much like pet owners describing the various behaviors of their pets.

No, nothing like that. It is like [1]
Q: Why did you classify it as a Duck?

A:

  • 1 It says “quack”
  • 2 It walks like a duck
  • 3 It swims like a duck
  • 4 It has the feet of a duck
  • 5 It has the bill of a duck
  • 6 It has the size of a duck

[1] In real cases, the output is the “symptoms” as entered in the training phase. More on explainable AI:
ht-tps://www.sciencedirect.com/science/article/pii/S0740624X21001027

JonKnowsNothing June 1, 2022 6:17 PM

@Winter, @All

re: The AI Duck Waddle

Total 100% failure in the USA.

Quacks and Waddles are the idiom definition for Taxes, Fees, Impounds and Other Government Revenue Generating Euphemisms.

The Duck Waddlers have been Quacking since Ronald Reagan’s time. Augmented by Ross Perot, Pat Buchanan followed by “Contract with America” minded folks, right up to today’s lot.

Any Quacker might just be Duck Soup.

====

Search Terms

Duck Soup (1933 film)

“Duck soup” was American English slang at that time; it meant something easy to do. Conversely, “to duck something” meant to avoid it.

When Groucho was asked for an explanation of the title, he quipped,

“Take two turkeys, one goose, four cabbages, but no duck, and mix them together. After one taste, you’ll duck soup for the rest of your life.”

JonKnowsNothing June 1, 2022 6:38 PM

@Winter

fwiw: There are many different types of ducks. They come in many sizes, shapes, colors, plumage types along with toe webbing differences.

Recently I saw a new duck to me. It’s called a Black-bellied Whistling Duck. There are 8 subspecies of these ducks.

It’s quite magnificent. It also doesn’t quack.

===

h ttps://en.wiki pedia.o rg/wiki/Whistling_duck

ht tps://e n.wikipedi a. org/wiki/Black-bellied_whistling_duck

ht tps://www.all aboutbirds. org/guide/Black-bellied_Whistling-Duck/overview

(url lightly fractured)

Clive Robinson June 2, 2022 1:32 AM

@ Winter, JonKnowsNothing, ALL,

Re : Why did you classify it as a Duck?

Your tests fail, as I’ve frequently mentioned, with white ducks and white geese seen together in farmyards and orchards.

The goose will pass the tests you propose, yet ducklings will not.

However the duck or goose knows what it is, and so does a vet and many poultry breeders.

Oh and have a look into why the ugly duckling story exists.

Oh and look up the “red panda”, also known as the “firefox” or “red cat-bear”, and the fun that has caused for nearly two hundred years… It has the guts of a carnivore, yet eats bamboo; it appears to have a “gripping thumb”, looks and behaves like a cat or weasel or fox, yet lives in trees and may have originated in Europe…

The fact something looks like something else in part(s) and has similar behaviours is no reason to say they are the same or even have common roots.

I suspect you know the “Three blind men and the elephant” story, well how about making one up for the duck billed platypus?

Well have a think about how that might apply to,

1, Machine Learning
2, Soft Artificial Intelligence
3, Hard Artificial Intelligence

The simple fact is we do not actually know how humans learn; we just throw babies and infants in with others and see what “rubs off”. Then, when they can communicate with adults, we throw information at them until they become more like us…

name.withheld.for.obvious.reasons June 2, 2022 2:18 AM

A qualified diatribe that best encapsulates the subject…

“What else floats in water?” Gravy, small rocks, apples–from behind the crowd gathered…”A duck!”

“Who are you that are so wise in the ways of science?” …later in the scene…

“So if she weighs the same as a duck, she’s made of wood.” …”She’s a witch. Burn her!”

“I shall get my largest scales.”

I have a difficult time seeing how this isn’t any different.

Winter June 2, 2022 2:53 AM

@name., Clive, Jon

Your tests fail as I’ve frequently mentioned with white ducks and white geese seen together in farm yards and orchards.

Which is of course the whole point of the exercise.

This is not a test, this is the AI telling you how the AI got to the answer a duck.

This is not the correct list of features that defines a duck. It is the list of features the AI used to do the classification.

If these features are not correct/distinctive, you know why the AI made this classification.

If your medical AI classifies a patient as not needing extra (expensive) care and the most important parameter in that decision was her postal code/address, you know immediately that there is something terribly wrong.

Which was the point of including the explanation in the first place.
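
One generic way to surface “which factor mattered most” is permutation feature importance; the sketch below (scikit-learn on synthetic data, not the explainability method any particular medical AI uses) shows how a proxy feature such as a postal code would show up:

```python
# Sketch only: synthetic data where historical "needs extra care" decisions
# leaned on a postal-code proxy; permutation importance then flags that proxy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
severity = rng.normal(size=n)                       # genuine clinical signal
postal_code = rng.integers(0, 10, size=n)           # proxy feature
needs_care = (0.2 * severity + (postal_code < 3) + 0.1 * rng.normal(size=n)) > 0.5

X = np.column_stack([severity, postal_code])
clf = RandomForestClassifier(random_state=0).fit(X, needs_care)

result = permutation_importance(clf, X, needs_care, n_repeats=10, random_state=0)
for name, imp in zip(["severity", "postal_code"], result.importances_mean):
    print(f"{name:>12}: {imp:.3f}")
```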

Clive Robinson June 2, 2022 6:21 AM

@ name.withheld…,

Nice to see you post again, I hope you are well and things are moving on an upward trajectory.

With regards,

I have a difficult time seeing how this isn’t any different.

Ahhh the difference between a creative and empiricist outlooks on life, and the gap between…

An empiricist believes what their senses experience; a creative believes what their mind can conjure up. At one end experienced fact, at the other fantasy, with a lot both real and imaginary in between.

The empiricist is trapped by their limitations, the creative falls with their castle in the air. Neither takes mankind, society or much else forward.

In between, however, where imagination allows new experiences to be sought after and brought within touch, is where things move forward.

But the truth of the senses must keep the flight of imagination firmly grounded, so as to provide the foundations on which we can build.

Whilst a human might have the same mass as a duck, well… Such knowledge is of small relevance when speaking of flammability. An empiricist would know this; a creative would probably care not, as long as the image in their head looks good.

So a soggy old woman fails to conflagrate, and the creative has yet another disappointment, as does their audience. The empiricist simply shrugs and says “What did you think would happen?” as they turn, and the soggy old woman shivers and catches a cold…

When you’ve had an early bath, that is as they say “life buoy” or it would be if soap were involved[1].

[1] This joke may only work where Unilever still sell it, but it was the butt of many a joke including a wag adding back in 1935 the tag of “And they still stink” to the advertising strapline of “The Phillies use Lifebuoy”,

https://upload.wikimedia.org/wikipedia/en/e/ef/Baker_bowl_right_field.png

I won’t ask if you also remember it 😉

Clive Robinson June 2, 2022 6:48 AM

@ Winter, ALL,

If these features are not correct/distinctive, you know why the AI made this classification.

But not why it chose the classifier rules.

And this is the big problem.

If you have a very large set of data you can find its average in one of a number of ways, but they should all, at the end of the process, give the same result.

However, if a dataset is very large “and randomly ordered” you can find a very close approximation to the average using just a subset of the complete set of data. This is useful for several reasons, one being the case of an infinite data set, or one where the gathering of data is still in progress.

If you have a complete data set, the ordering of the data makes no difference to the average (or should not), but that is far from true with an incomplete data set.

Some algorithms use a series of very short run averages, and look for trends. If those short run averages are not random, then the trend information can be entirely wrong.

After a moment’s thought many will realise that there is little difference between these “supposed” AI systems and a rolling average algorithm…

So if the input sample is not randomly ordered “in the right way”, then the AI classifier will be wrong.

Unfortunately most AI rule finders are overly sensitive to the initial input / training data, and it effectively “burns in a groove”. Knowing this, you can take a large random sample of data, then order it in a specific way to distort the classifier in a specific way.
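
A toy numeric illustration of the rolling-average analogy: a plain average is order-independent, but a running estimate whose update rate decays quickly “burns in” whatever arrived first (the numbers and decay schedule below are made up for illustration):

```python
# Toy illustration only, not the paper's models.
import numpy as np

data = np.array([1.0] * 10 + [0.0] * 90)   # the same multiset of values either way

def running_estimate(xs, power):
    """est <- est + (x - est) / t**power.
    power == 1 reproduces the exact arithmetic mean (order-independent);
    power > 1 decays the step size fast, so early samples dominate."""
    est = 0.0
    for t, x in enumerate(xs, start=1):
        est += (x - est) / t ** power
    return est

print("plain mean (any order):     ", data.mean())
print("fast-decay est., ones first:", round(running_estimate(data, 1.5), 3))
print("fast-decay est., ones last: ", round(running_estimate(data[::-1], 1.5), 3))
```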

This is a problem that has been known in control theory for a very long time where multiple tracking loops running at different speeds are used.

It’s just one of several reasons I do not trust these AI systems in any way.

JonKnowsNothing June 2, 2022 9:24 AM

@ Winter, @ Clive Robinson, @ALL

re: If your medical AI classifies a patient as not needing extra (expensive) care and the most important parameter in that decision was her postal code/address, you know immediately that there is something terribly wrong.

This is not so far fetched a scenario as it might appear.

Not too long ago, @Clive wrote up a nice intro to decision making methods and part of that piece included a section on “how medical people make decisions”.

It was very timely as my spouse was dying at the time and I was attempting to battle my way through various versions of the above “medical classifications’.

One problem:

  • You won’t know if a decision is being made on your postal code/address

There’s no advance notice of any or all the points of an AI/ML failure. In the case of a misidentified white duck, white goose, white swan, or white pelican, it may not matter a great deal.

However, when the AI lists a set of procedures, protocols, medications or the lack of procedures, lack of protocols, lack of medications, this leaves folks with nothing on which to “use personal judgement” as to the effectiveness of the suggested or restricted treatments.

disclosure:

I went through a lot of this and had mixed results navigating the system on behalf of my spouse. Sometimes I guessed correctly, only to later find out that that guess increased the likelihood of a terminal event.

I live in a restricted access to health care area within the USA. It isn’t that we don’t have hospitals and medications and high tech treatments, here. It is that they ARE restricted on postal codes and addresses. I know this from my COVID-19 and The Bank of Mom and Dad research. I know this because there are some MDs who will tell you up front and straight up, that a treatment or medication is not going to be available to you because of where you live. And some MDs will not only tell you that you aren’t going to get medication or treatment, they will also tell you that it’s due to “Evidence Based Medicine”.

Increasingly, Evidence Based Medicine (aka least cost medicine) is using AI/ML to justify treatment and medication protocols. There isn’t any way for the average person to object, to know what the pros-cons are, or have any inputs or challenges at all.

This isn’t a duck. It isn’t a white duck. It’s a white sheet and a body bag.

Emoya June 2, 2022 11:34 AM

@All

Anyone who has ever studied or developed algorithms knows that there is a VAST difference between the design of one that works for only a specific scenario with controlled input and a general one that functions correctly with manifold parameter values. Thus far, AIs have been able to (statistically, as there are always outliers) satisfy the former, but are employed in situations necessitating the latter.

One of the fundamental misconceptions by the general public is that AI can reliably make similar decisions as humans, specifically in simple situations. Each one of us lives in our own reality fabricated based solely on our personal experiences and consumed information, analogous to an AI which is trained and evolves based upon said information and experiences. The glaring difference is that humans are equipped to decide for themselves how much weight to attribute to new experiences and information when making future decisions. Even then, we still sometimes have difficulty applying that knowledge appropriately.

If a child has a traumatic or otherwise impactful experience, it will likely “train” them to make future decisions that may not make complete sense to others. Most people don’t realize how significant a role early development plays on their everyday decisions. However, fears, phobias, biases, and other learned responses can be overcome through substantial effort once they are identified, even though a residual may remain indefinitely. Also, as adults, many of us encounter situations/events, learn new information, or are exposed to new perspectives by others, which have a profound effect on how we perceive reality, and therefore adjust our decision-making accordingly. Our capacity to identify deficiencies, struggle toward improvement, and have these reality-altering moments (i.e. metacognition) is foundational to our ability to adjust our thinking in later circumstances.

The current state of ML technology is not capable of this. Once a bias is introduced, good luck training it away. No single bit of new information can have a meaningful impact on an already trained AI. They “learn” to produce desired results to satisfy the statistical majority of scenarios, but are incapable of “reason” when presented with a deviating set of data. This is exacerbated by the trainers’ inability to foresee every possible situation, thereby omitting potentially critical information during training. One of the driving forces behind AI adoption is to somehow make decisions regarding these overlooked scenarios, as it alleviates the onus of the system designers/implementers to do so themselves.

Unfortunately, most of the population views new technologies (e.g. AI, cryptocurrency/blockchain, smartphones, apps) as “cool” or convenient and jumps on the bandwagon without pausing to consider how they function, accomplish their results, or with any consideration for their potential abuse or the bigger picture. When it comes to Big Tech and Govt, every “improvement” is a slippery slope with a potentially hidden agenda pursuing their own growth with little-to-no regard for others.

Emoya June 2, 2022 1:34 PM

Another thought…

I’m no psychologist, but it stands to reason that the human ability to assign individual weight to experiences and information is ultimately guided by self-preservation.

Early social units were minuscule by today’s standards, and an individual’s well-being was a direct extension of that of the group as a whole.

An individual may choose to care for another who is sick, having either the experience of being sick themselves, or with the foresight to consider they may become sick in the future, and had/may need someone to help care for them, hence empathy.

One may choose to sacrifice something of their own for the benefit of another because that other individual filled a necessary role for the survival of the group, hence charity.

The goal of everything human, including actions considered unselfish, is fundamentally rooted in self-preservation, and this subconsciously encourages us to “stack the cards in our favor” by placing more importance on things that we believe will improve our chances for a longer or better life.

The only thing that promotes an AI’s decision is an abstract “reward” for responding favorably to sample data during training.

Winter June 2, 2022 5:00 PM

@Emoya

Thus far, AIs have been able to (statistically, as there are always outliers) satisfy the former, but are employed in situations necessitating the latter.

Modern, deep-learning AI is not a “computer program” in the common sense. These machine learning systems build statistical models of the input examples.

I do not think statistical models behave like software programs. The two important questions are:
1) How does the model interpolate between known input data?

2) How does the model extrapolate outside the space of known input?

In general, deep learning models tend to fail disastrously at 2), when extrapolating (i.e., giving random answers).

This is why people who have to implement AI in critical situations don’t.
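
A toy illustration of the interpolation-versus-extrapolation point, using a flexible 1-D curve fit as a stand-in for a deep model: it tracks the data inside the training range but can return wild values outside it:

```python
# Toy sketch: a degree-7 polynomial fit stands in for an over-parameterised model.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 50)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=50)

coeffs = np.polyfit(x_train, y_train, deg=7)       # flexible fit on [0, 1] only

for x in (0.5, 0.9, 1.5, 2.0):                     # first two interpolate, last two extrapolate
    print(f"x = {x}: model = {np.polyval(coeffs, x):10.2f},  true = {np.sin(2 * np.pi * x):.2f}")
```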

name.withheld.for.obvious.reasons June 2, 2022 9:31 PM

@ Clive

Thank you for the sentiment and kind words–and a good old joke. Will the Phillies ever win another series (I hate to call it world like)?

Busy with some forensic analysis, you know, tracking down the creatives with a sinister heart. This group left a few bread crumbs on the floor, keyboards, and PDUs. I’m betting more is to be found in the parking garage. What about you?

And as the U.S. tracks its contemporary efforts to a parallel somewhere in the 4th century, I maintain my composure here in the 21st century. I think we could all use a bit of karmic or ethical justice at some point–or what is the point? Starting to get really pissed at the designers of this universe; they seem to have failed miserably. Wonder what it takes to reboot a universe…

Clive Robinson June 3, 2022 1:41 AM

@ name.withheld…,

Re : Wonder what it takes to reboot a universe…

Not sure about “reboot” but start a new one…

Well, the alleged “good book” that paraphrases all says just a word, but then some theoretical physicists were talking not that long ago about how you might go about it, in a way that suggested they thought they were close[1]. Some others however thought that there was a chance they might stop this one, over on the Swiss-French border…

Others believe there is an infinite number of universes stacked in the same space like a multidimensional swiss cheese.

Me, and I guess many others of the infinite number of me’s they mentioned, kind of gave up trying to follow the arguments 😉

[1] I must admit theoretical physics time estimations have in the past been marginally better than those of the Hard AI guys. The physicists have apparently found their “God Particle” hiding behind a curtain without needing a Yellow Brick Road… However, those Hard AI guys have been saying things for more than half a century, and admittedly they sound more and more like the old joke about the third husband being a Software Developer, with the punchline of “And they have been sitting at the foot of the bed saying how wonderful it will all be, when they just sort out one little problem” 😉
