Fuzzy little things that I find interesting.

Political musings from someone who thinks the S-D curve is more important to politics than politicians.

Month: February, 2018

An easy-to-understand description of what the Heisenberg Uncertainty Principle is.

This has got to be one of the best explanations of what the Heisenberg Uncertainty Principle actually is. Twenty minutes long, and it’s absolutely worth watching.

Metrics often do not tell the story.

NY Times Op-Ed: The Misguided Drive To Measure ‘Learning Outcomes’

Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education. …

Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.

That’s how we produce the critical thinkers American employers want to hire. And there’s just no app for that.

The idea that we can apply metrics to everything has also engulfed the health care industry–gathering metrics which create perverse incentives, such as the incentive not to help the truly sick, because they tend to die at a faster rate:

Metrics and Their Unintended Consequences

In December, doctors at a VA hospital in Oregon decided to admit an 81-year-old patient. He was dehydrated, malnourished, plagued by skin ulcers and broken ribs — in the medical professionals’ opinion, he was unable to care for himself at home. Administrators, however, overruled them.

Was there no bed for this poor man? No, the facility had plenty of beds; in fact, on an average day, more than half of the beds are empty, awaiting patients. Was there no money or medicine to care for him? No, and no. Reporting by the New York Times suggests that Walter Savage was, perversely, turned away because he was too sick. Very sick patients tend to worsen the performance measures by which VA hospitals are judged.

If this had happened in isolation, we could simply gape at the monstrosity that bureaucracies are occasionally capable of.

But such examples abound in health care. For example, in the 1990s, New York and Pennsylvania started publishing mortality data on hospitals and surgeons who did coronary bypasses. The idea was that more informed consumers would steer themselves toward the teams with the better statistics — theoretically good for patients, bad for slacking providers. The reality was less ideal: In those states, surgeons seem to have started doing more operations on healthier patients, while turning away the sickest ones who might otherwise have benefited.


Back in the late 1980’s when I worked for the Jet Propulsion Laboratory, I heard a story about how software development metrics were (then) slowly engulfing the computer industry. In an industry which cranks out lines of code, you’d think productivity could be measured easily–by counting the lines of code produced.

But how do you differentiate comments from code?

Simple. In the C programming language, most statements are terminated by a semicolon. Typical code may look like:

void append(int num)
{
    struct node *temp, *right;

    /* allocate a new node to hold the value */
    temp = (struct node *)malloc(sizeof(struct node));
    temp->data = num;
    temp->next = NULL;

    /* empty list: the new node becomes the head */
    if (head == NULL) {
        head = temp;
        return;
    }

    /* otherwise walk to the tail and link the new node on */
    right = head;
    while (right->next != NULL)
        right = right->next;
    right->next = temp;
}

It doesn’t matter if you understand this or not. Just note that at the end of every statement you see a semicolon.

So one metric: since everyone checks their work into a central source database which tracks which person made which change, you can easily build a script to scan the database, find all the work checked in by each of your programmers, and count the semicolons.

But it created some perverse incentives. Like the guy who built a state machine (that is, a piece of code whose behavior is driven by a huge table) as part of his code. He spent a week building that table–but tables don’t contain semicolons. They only contain commas.

He was–as the (apocryphal) story goes–dinged by management for the decline in productivity, despite the fact that his solution was compact and extremely tricky–consisting as it did of 500 text lines full of nothing but numbers, each number picked to express a certain behavior.

Some companies (inadvertently) counted lines of code removed against you–which incentivized code bloat, and incentivized keeping bad code in favor of rewriting it with correct code.

And the very process itself encouraged thoughtless speed over thoughtful contemplation when solving a problem–which led to a lot of poor quality crap being checked in.

All problems which we continue to live with today–long after counting semicolons was replaced by other faddish–and poorly thought out–management techniques, all designed to turn the thoughtful contemplation and problem solving of computer software development into a factory job with workers on an assembly line.


All of this is to say that metrics are hard, and they cannot tell the entire story.

And measuring them creates perverse incentives–such as the perverse incentive created by college rankings which factor in non-education-related metrics like “student satisfaction,” which tends to be driven as much by how nice the dorm rooms are and by the quality of student facilities (swimming pools, gym access) as by anything that happens in the classroom. Those perverse incentives have helped drive up the cost of a college education, by turning many schools (seeking higher rankings) into four-year resort vacations with classrooms.

Or, the perverse incentive of hospitals turning away the sick, out of fear it will tank their metrics and–as a result, thanks to Obamacare–cause the government to lower or cut off Medicare/Medicaid payments to the hospital entirely.

Someone else’s comment in response to “the CDC is not permitted to investigate gun violence.”

I don’t know the accuracy of the quoted statement, but I’m pasting it here for future recall and so I can investigate its accuracy.

Source.

The Dickey Amendment states: “None of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control.”

This does not “ban” the CDC from researching gun violence. In fact, CDC articles pertaining to firearms have held steady since the defunding, and even increased to 121 in 2013.

CDC very recently released a 16-page report that was commissioned by the city council of Wilmington, Delaware, on factors contributing to its abnormally high gun crime, and methods of prevention. The study weighed factors such as where the guns were coming from, the sex of the offenders, the likelihood of committing a gun crime, and how unemployment plays a factor. In other words, it studied the environment surrounding the crime. Its purpose was to prevent biased advocacy on political views, which is something that cannot be included in research by definition.

In the late ’80s and early ’90s, the CDC was openly biased in opposing gun rights. CDC official and research head Patrick O’Carroll stated in a 1989 issue of The Journal of the American Medical Association, “We’re going to systematically build a case that owning firearms causes deaths.” His successor and director of the CDC National Center of Injury Prevention branch Mark Rosenberg told Rolling Stone in 1993 that he “envisions a long term campaign, similar to tobacco use and auto safety, to convince Americans that guns are, first and foremost, a public health menace.” He went on to tell the Washington Post in 1994 “We need to revolutionize the way we look at guns, like what we did with cigarettes. It used to be that smoking was a glamour symbol — cool, sexy, macho. Now it is dirty, deadly — and banned.”

CDC leaders were not shy about their intentions of banning guns from the public. Sure enough, they acted on their desires. In October 1993, The New England Journal of Medicine released a study funded by the CDC to the tune of $1.7 million, entitled “Gun Ownership as a Risk Factor for Homicide in the Home.” The lead author was Dr. Arthur Kellermann, an epidemiologist, physician, and outspoken advocate of gun control.

In the study, Kellermann concluded that people who kept guns in their homes were 2.7 times more likely to be homicide victims than people who didn’t. Major media outlets, such as the New York Times, still cite these statistics.

However, the research was beyond flawed. For one, Kellermann used epidemiological methods in an attempt to investigate an issue dealing with criminology. In effect, this means he was treating gun violence the same as, say, the spread of West Nile, or bird flu.


Furthermore, the gun victims he studied were anomalies. They were selected from homicide victims living in metropolitan areas with high gun-crime statistics, which completely discounted the statistical goliath of areas where gun owners engage in little to no crime.

Other factors that lent to the study’s unreliability were: It is based entirely on people murdered in their homes, with 50 percent admitting this was the result of a “quarrel or romantic triangle,” and 30 percent said it was during a drug deal or other felonies such as rape or burglary; it made no consideration for guns used in self-defense; it provided no proof or examples that the murder weapon used in these crimes belonged to the homeowner or had been kept in that home.

These problems prompted objections and questions from leading scientists in the field of criminology, such as Yale University professor John Lott, Florida State’s Gary Kleck, and University of Massachusetts sociology professors James D. Wright and Peter H. Rossi. Their research had come to vastly different conclusions, and they found the methodology unsound.

As Lott says of Kellermann’s study in his book, “More Guns, Less Crime”: To demonstrate this, suppose that we use the same statistical method—with a matching control group—to do a study on the efficacy of hospital care. Assume that we collect data just as these authors did, compiling a list of all the people who died in a particular county over the period of a year. Then we ask their relatives whether they had been admitted to the hospital during the previous year. We also put together a control sample consisting of neighbors who are part of the same sex, race, and age group. Then we ask these men and women whether they have been in a hospital during the past year. My bet is that those who spent time in hospitals are much more likely to have died — quite probably a stronger relationship than that between homicides and gun ownership in Kellerman’s study. If so, would we take that as evidence that hospitals kill people? He summarized, “it’s like comparing 100 people who went to a hospital in a given year with 100 similar people who did not, finding that more of the hospital patients died, and then announcing that hospitals increase the risk of death.”

The final nail in the coffin came in 1995 when the Injury Prevention Network Newsletter told its readers to “organize a picket at gun manufacturing sites” and to “work for campaign finance reform to weaken the gun lobby’s political clout.” Appearing on the same page as the article pointing the finger at gun owners for the Oklahoma City bombing were the words, “This newsletter was supported in part by Grant #R49/CCR903697-06 from the Centers for Disease Control and Prevention.”

I’m fine with the CDC studying it like they do now, as long as the requirement that it be studied without bias is still there. Do we really want government agencies “researching” topics to come to a predetermined finding? If we change a few words from the quotes that precipitated the “ban,” would we be against it?

‘In the late ’80s and early ’90s, the CDC was openly biased in opposing gay rights. CDC official and research head Patrick O’Carroll stated in a 1989 issue of The Journal of the American Medical Association, “We’re going to systematically build a case that homosexuality causes AIDS deaths.” His successor and director of the CDC National Center of Injury Prevention branch Mark Rosenberg told Rolling Stone in 1993 that he “envisions a long term campaign, similar to tobacco use and auto safety, to convince Americans that gays are, first and foremost, a public health menace.” He went on to tell the Washington Post in 1994 “We need to revolutionize the way we look at homosexuals, like what we did with cigarettes.

One reason why the U.S. Government was able to raise sufficient taxes simply by taxing alcohol…


More information can be found here. Basically, at his farewell party, it is believed 55 people managed to consume thousands of dollars (inflation-adjusted) worth of alcohol at a tavern in Philadelphia.

Harsh.

We are now officially living in the crazy years.

Students in Louisiana thought this math symbol looked like a gun. Police were called.

Obviously the square root sign is a gun.

At least we can now finally eliminate math from the curriculum of most schools, as it is an invention of the white male patriarchy. (Sarcasm)

Wait, this is a thing?

Headline: China wages war on funeral strippers

My take: wait, rural Chinese have strippers at funerals?

The world is getting better and better in nearly every imaginable way. And we have no idea.

The media exaggerates negative news. This distortion has consequences

The data scientist Kalev Leetaru applied a technique called sentiment mining to every article published in the New York Times between 1945 and 2005, and to an archive of translated articles and broadcasts from 130 countries between 1979 and 2010. Sentiment mining assesses the emotional tone of a text by tallying the number and contexts of words with positive and negative connotations, like good, nice, terrible, and horrific.

Putting aside the wiggles and waves that reflect the crises of the day, we see that the impression that the news has become more negative over time is real. The New York Times got steadily more morose from the early 1960s to the early 1970s, lightened up a bit (but just a bit) in the 1980s and 1990s, and then sank into a progressively worse mood in the first decade of the new century. News outlets in the rest of the world, too, became gloomier and gloomier from the late 1970s to the present day.

The consequences of negative news are themselves negative. Far from being better informed, heavy newswatchers can become miscalibrated. They worry more about crime, even when rates are falling, and sometimes they part company with reality altogether: a 2016 poll found that a large majority of Americans follow news about Isis closely, and 77% agreed that “Islamic militants operating in Syria and Iraq pose a serious threat to the existence or survival of the United States,” a belief that is nothing short of delusional.

So yes, we hear a lot of negative news. But is the world getting better?

In fact, it is, in nearly every measurable way.

And the facts bear this out. In a powerful study entitled “The short history of global living conditions and why it matters that we know it” by Max Roser, an economist at the University of Oxford and the founder of Our World in Data, we learn that on virtually all of the key dimensions of human material well-being–poverty, literacy, health, freedom, and education–the world is an extraordinarily better place than it was just a couple of centuries ago.

We are not on the edge of the precipice, waiting to fall into the abyss. We are not on the edge of disaster, waiting for the collapse of civilization. America is not in its final glory days, on an inevitable decline into the darkness. The world is not on the edge of destruction with more than 7 billion miserable souls whose only release is death.

No; the world is slowly inching–in fits and starts–towards a more utopian existence, where poverty is extinct, hunger is eliminated, and everyone has access to fresh water, full literacy, and good health care. A world where every man and woman on earth is free to choose their own path and realize their full potential as human beings.

It’s a damned shame we don’t seem to appreciate it.

In fact, when asked “All things considered, do you think the world is getting better or worse, or neither getting better nor worse?”, only 10% in Sweden thought things are getting better, only 6% in the US, and only 4% in Germany.

As Hans Rosling used to quip, a chimpanzee can do better randomly picking from the three choices.

“Better a broken bone than a broken spirit.”

The case for the “Self-Driven Child”

Stixrud: We know that a low sense of control is highly associated with anxiety, depression, and virtually all mental health problems. Researchers have found that a low sense of control is one of the most stressful things that people can experience. And since the 1960’s, we’ve seen a marked rise in stress-related mental health problems in children and adolescents, including anxiety, depression, and self-harm. Just in the last six or seven years, there has been an unprecedented spike in the incidence of anxiety and depression in young people.

Research on motivation has suggested that a strong sense of autonomy is the key to developing the healthy self-motivation that allows children and teens to pursue their goals with passion and to enjoy their achievements.

The interview is worth reading. Essentially children today have fewer freedoms than even when I was a child, and the successful ones are being driven to anxiety by well-meaning parents. Throw in a touch of social media-driven pressures to look successful, and you have the recipe for a neurotic generation that makes Woody Allen look normal.

Well, this is what happens when you attempt to criminalize loutish behavior.

#MeToo is ruining the dating scene

My favorite line, by the way, is this:

So what if men are scared and confused? For ages, sex has held heavier consequences for women. Perhaps we are just getting closer to gender parity, to a place where women’s desires in sex matter as much as men’s. “Nothing is going to change with men until we hold them to a higher standard,” says Jaclyn Friedman, a sex educator and author of “Unscrewed: Women, Sex, Power and How to Stop Letting the System Screw Us All.”

When you consider that, according to these feminists, men still hold significant economic power in the so-called “patriarchy”, and you consider further that many people get ahead by being mentored by someone who is older and farther along in their career–well, couldn’t this “fuck men; they should be as uncomfortable as us!” attitude backfire?

Indeed it has:

After #MeToo hysteria, men just saying No to mentoring women

Is that sexist? Or self-preservation in a world where a third of all respondents in the United States believed a man complimenting a woman who was not his sexual partner on her appearance constituted sexual harassment?

It’s gotten so bad that a number of folks on the left have started to notice:


The problem is, there are a lot of actions some men have taken that are genuine sexual harassment–and that sort of behavior richly deserves to be nuked from orbit.

But there are plenty of other actions which are uncouth, loutish, obnoxious and boorish–and certainly deserve to be pointed out as such–but which do not rise to the legal definition of sexual harassment: “bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors.”

Meaning if I said “nice tits, toots”–I’m certainly uncouth, boorish and an asshole who needs to be told off. Certainly that sort of boorish and–more importantly–unprofessional behavior does not deserve to exist in the workplace, and someone who regularly engages in such behavior needs to be put on notice for acting in an unprofessional manner.

But how is “nice tits, toots” bullying or coercion, unless we take it as given that women are so weak, so fragile, so unable to defend themselves that “nice tits, toots” is the equivalent of “you either fuck me after work or I’m going to fire you on the spot”?

In other words, I would much prefer if we acknowledged a category of behaviors–uncouth, uncultured, obnoxious, assholish–that are certainly unwanted and undesired, behaviors that certainly deserve a reprimand at work, but which do not rise to the level of criminal behavior, as sexual harassment does.

Otherwise, we’re left with a generation of men who are uncertain if they are criminals:

Even the guys like Knight who are pretty sure they are not harassers are walking on eggshells.

Look; if you are not sure you’re not a sexual harasser–that is, if you are not sure that a poorly thought-out statement or an apparently unwanted touch rises to criminal behavior–then there is a real god-damned problem.