March 17, 2017

Don’t be duped by ‘objective’ data

THE “weapons of math destruction” of the title are today’s omnipresent mathematical algorithms, which mine the voluminous personal data derived from individuals’ home addresses, occupations, purchasing histories, and online activities to identify which of us are likely to be, for example, the best employees, successful students, or safe drivers.

While some uses of algorithms are relatively harmless, as when businesses use data from our past purchases to better target likely future buyers, others can actually harm people, either because the underlying data are faulty in some way or, just as likely, because the algorithms draw erroneous conclusions from them.

This harm was not the intent of the developers. Rather, they sought to harness the tremendous data-processing power of computers to absorb and digest mounds of data more efficiently, so that human beings could be relieved of drudgery and make better decisions. Understaffed human resources offices breathed sighs of relief: not only could algorithms comb through hundreds of documents in a fraction of the time a human reader would need, but they would also, supposedly, be free of any human bias.

Alas, the “objectivity” of computers is illusory, for it is human beings who create the algorithms. In doing so, they must decide which data to use (or ignore) and which are more important (or less so). Dr. O’Neil has found numerous examples of unintended bias in the final products of these decisions.

Do data tell the truth?

The problem stems, in part, from our conviction that data, like facts, do not lie. They may not lie, but they do not always “tell” us what we think they do. Consider, for example, the use of data about a person’s financial or credit history.

While it is logical that a bank evaluating a person’s request for a loan would want to know about the applicant’s demonstrated reliability in repaying, it is not so clear why that information is useful in assessing the worth of a job applicant or of a student seeking admission to college.

The error is in assuming that one’s credit history yields dependable information about the kind of employee or student the applicant may be. But it does not and cannot. That kind of information can only be gleaned by scrupulously studying previous employment or student records and, more importantly, learning what that person’s former work or school colleagues felt about them.

That kind of data, moreover, takes additional valuable time to gather and also relies upon information filtered through the consciousness of another.

To avoid that, algorithms are used that draw on a wealth of ostensibly objective data beyond credit history: one’s home address, whether one owns or rents, one’s major “likes” and activities as revealed through social media, the websites one visits, even one’s political views, all of which seem to yield a portrait of “you.” And, while this is certainly a lot of “information,” how useful it actually is in assessing a person’s ability to perform and grow is much less certain.

Further, since the formulas these algorithms use are carefully guarded as proprietary, it is almost impossible to identify who is rewarded and who is disadvantaged. This leaves many citizens in the dark as to why their application for a job, a school place, or a loan was refused, or why they were never called in for an interview.

Through her study, Dr. O’Neil concludes that our increasing reliance upon computerized sorting inevitably works against the poor, the young, and the less well-connected members of society.

They are precisely the ones whose financial status (or presumed financial status, as inferred from the neighborhood where they live) may well cause them never to be called in for a personal interview. Those who are wealthier or better connected, on the other hand, will always gain personal attention when they seek loans, membership, or participation.

Another prominent use of such data is in assessing teaching performance through standardized testing.

The idea seems reasonable: by having all students take the same tests, we obtain concrete evidence of where each ranks in subject knowledge. This can then be used to tailor instruction and to provide appropriate additional assistance.

Unfortunately, not only does this fail to tell us which students are potentially the best, it is not even necessarily an accurate snapshot in time. After all, there are many reasons why students may not have done well on a given test: they might have been ill, or hungry (an increasing problem in American schools), or had just endured some kind of trauma, or, a widespread factor, happened to be a member of a language-challenged family.

Moreover, these kinds of standardized tests are used to assess teacher performance as well, on the theory that they reveal which teachers are having the most positive impact on their students. It seems self-evident that, if a student does better under teacher A this year, teacher A is a “good” teacher, and that, if the student performs less well under teacher B in another year, then teacher B is a “failing” teacher.

This is yet another example of presuming we understand what the data tell us. But teachers can quickly adapt to this process by ensuring that their students consistently “score” well, either by coaching them to the test (which is not the same as “teaching”) or through careful adjustment of the test results. Under this kind of “testing,” the results are useless.

Hurting good teachers

But testing can hurt good teachers.

Consider this scenario: in year A, students are taught (and coached) by a teacher intent upon producing high test scores. The very next year, year B, these same students move on to a class taught by a teacher who tries to truly engage them with the material, without gaming the system. When administrators later look over the test results, they will believe the data “clearly show” that the year-B teacher is the poorer one. Dr. O’Neil discusses specific teachers whom this process cost their jobs, even though students and parents rated them highly.

Furthermore, since these algorithms lack feedback mechanisms that would reveal whether they are working as intended, any harm they may be causing, whether overlooking promising candidates or dismissing good performers, remains unknown and uncorrected.

Since there is constant pressure to downsize, become more efficient, and trim costs, it is inevitable that many industries — especially those engaged in providing insurance — will turn to algorithms intended to single out poorer risks and, once identified, either charge them higher premiums or deny them service entirely.

It is fascinating that our wish to avoid human judgment is, in fact, yielding the kind of unhappy, often inaccurate and discriminatory results that can only be rectified through renewed human intervention. While human judgments are certainly less “hard” than data points, they can be immensely more informed, more compassionate, and less likely to harm.

 

The author is a retired statesman from the US.




 
