Information Processing

Pessimism of the Intellect, Optimism of the Will

Monday, November 02, 2015

Houellebecq on Tocqueville, Democracy, and Nietzsche

I prefer good literary criticism.

But this is not it:

Beyond some trivialities, the discussants make no progress on the question that fascinates all of them: what is Michel Houellebecq really thinking? They cannot conceive of an answer because their conditioning is so strong that the relevant thoughts cannot enter their minds. (Note that, in its favor, the panel includes Soumission translator Lorin Stein.)

Much better, and shorter, this video of Houellebecq on Tocqueville, Democracy, and Nietzsche.



Tocqueville (Democracy in America, chapter 6): ... It would seem that if despotism were to be established among the democratic nations of our days, it might assume a different character; it would be more extensive and more mild; it would degrade men without tormenting them. I do not question that, in an age of instruction and equality like our own, sovereigns might more easily succeed in collecting all political power into their own hands and might interfere more habitually and decidedly with the circle of private interests than any sovereign of antiquity could ever do. But this same principle of equality which facilitates despotism tempers its rigor. ...

Democratic governments may become violent and even cruel at certain periods of extreme effervescence or of great danger, but these crises will be rare and brief. ... I have no fear that they will meet with tyrants in their rulers, but rather with guardians.

I think, then, that the species of oppression by which democratic nations are menaced is unlike anything that ever before existed in the world; our contemporaries will find no prototype of it in their memories. I seek in vain for an expression that will accurately convey the whole of the idea I have formed of it; the old words despotism and tyranny are inappropriate: the thing itself is new, and since I cannot name, I must attempt to define it.

I seek to trace the novel features under which despotism may appear in the world. The first thing that strikes the observation is an innumerable multitude of men, all equal and alike, incessantly endeavoring to procure the petty and paltry pleasures with which they glut their lives. Each of them, living apart, is as a stranger to the fate of all the rest; his children and his private friends constitute to him the whole of mankind. As for the rest of his fellow citizens, he is close to them, but he does not see them; he touches them, but he does not feel them; he exists only in himself and for himself alone; and if his kindred still remain to him, he may be said at any rate to have lost his country.

Above this race of men stands an immense and tutelary power, which takes upon itself alone to secure their gratifications and to watch over their fate. That power is absolute, minute, regular, provident, and mild. It would be like the authority of a parent if, like that authority, its object was to prepare men for manhood; but it seeks, on the contrary, to keep them in perpetual childhood: it is well content that the people should rejoice, provided they think of nothing but rejoicing. For their happiness such a government willingly labors, but it chooses to be the sole agent and the only arbiter of that happiness; it provides for their security, foresees and supplies their necessities, facilitates their pleasures, manages their principal concerns, directs their industry, regulates the descent of property, and subdivides their inheritances: what remains, but to spare them all the care of thinking and all the trouble of living? ...

After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd.

I have always thought that servitude of the regular, quiet, and gentle kind which I have just described might be combined more easily than is commonly believed with some of the outward forms of freedom, and that it might even establish itself under the wing of the sovereignty of the people.

Our contemporaries are constantly excited by two conflicting passions: they want to be led, and they wish to remain free. As they cannot destroy either the one or the other of these contrary propensities, they strive to satisfy them both at once. They devise a sole, tutelary, and all-powerful form of government, but elected by the people. They combine the principle of centralization and that of popular sovereignty; this gives them a respite: they console themselves for being in tutelage by the reflection that they have chosen their own guardians. Every man allows himself to be put in leading-strings, because he sees that it is not a person or a class of persons, but the people at large who hold the end of his chain.

By this system the people shake off their state of dependence just long enough to select their master and then relapse into it again. ...
See also Neoreaction and the Dark Enlightenment.

Sunday, November 01, 2015

David Donoho interview at HKUST



A long interview with Stanford professor David Donoho (academic web page) at the IAS at HKUST.

Donoho was a pioneer in thinking about sparsity in high-dimensional statistical problems. The motivation came from real-world problems in the geosciences (oil exploration), encountered in Texas when he was still a student. Geophysicists were using Compressed Sensing long before its rigorous mathematical basis was established.

The figure below, from the earlier post Compressed Sensing and Genomes, exhibits the Donoho-Tanner phase transition.
For more discussion of our recent paper The human genome as a compressed sensor, see this blog post by my collaborator Carson Chow and another on the machine learning blog Nuit Blanche. One of our main points in the paper is that the phase transition between the regimes of poor and good recovery of the L1 penalized algorithm (LASSO) is readily detectable, and that the scaling behavior of the phase boundary allows theoretical estimates for the necessary amount of data required for good performance at a given sparsity. Apparently, this reasoning has appeared before in the compressed sensing literature, and has been used to optimize hardware designs for sensors. In our case, the sensor is the human genome, and its statistical properties are fixed. Fortunately, we find that genotype matrices are in the same universality class as random matrices, which are good compressed sensors.

The black line in the figure below is the theoretical prediction (Donoho 2006) for the location of the phase boundary. The shading shows results from our simulations. The scale on the right is L2 (norm squared) error in the recovered effects vector compared to the actual effects.
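The phase-transition behavior described above is easy to reproduce in miniature. Here is a minimal sketch in plain NumPy (the problem sizes, the penalty, and the use of ISTA as the L1 solver are my illustrative choices, not the paper's setup): recover a 10-sparse effects vector from 100 random measurements of 200 variables, a point well inside the good-recovery region of the Donoho-Tanner phase diagram, and report the relative L2 error of the recovered effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: p variables, n < p random measurements, s-sparse signal.
p, n, s = 200, 100, 10

beta = np.zeros(p)
support = rng.choice(p, s, replace=False)
beta[support] = rng.normal(0, 1, s)

X = rng.normal(0, 1, (n, p)) / np.sqrt(n)   # random Gaussian sensing matrix
y = X @ beta                                 # noiseless measurements

# ISTA: iterative soft-thresholding for the L1-penalized (LASSO) objective
#   0.5 * ||X b - y||^2 + lam * ||b||_1
lam = 0.01
L = np.linalg.norm(X, 2) ** 2               # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(5000):
    g = X.T @ (X @ b - y)
    z = b - g / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

# Relative L2 (norm squared) error, the quantity shaded in the figure.
err = np.linalg.norm(b - beta) ** 2 / np.linalg.norm(beta) ** 2
print(f"relative L2 error: {err:.3f}")
```

Pushing the sparsity toward the phase boundary (say s = 40 at the same n) makes recovery fail abruptly rather than degrade gracefully, which is what makes the boundary so easy to detect in practice.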


From Donoho's autobiographical sketch, provided for the Shaw Prize:
During 2004-2010, Jared Tanner and I discovered the precise tradeoff between sparsity and undersampling, showing when L1-minimization can work successfully with random measurements. Our work developed the combinatorial geometry of sparse solutions to underdetermined systems, a beautiful subject involving random high-dimensional polytopes. What my whole life I thought of privately as ‘non-classical’ mathematics was absorbed into classical high-dimensional convex geometry. [ Discussed at ~ 1:38 in the video. ]
More about John Tukey, Donoho's undergraduate advisor at Princeton.

Dollar Empire

This speech emphasizes an under-recognized motivation for US adventurism abroad: local military and geopolitical conflicts enhance the strength of the US dollar as a reserve currency in the face of global volatility. The essay is long but worth reading as it gives a fresh look at superpower competition across multiple arenas, and some insight into the Chinese worldview. However, I think the general overestimates the level of long term thinking and financial-economic-geopolitical-military coordination within US leadership.
One Belt, One Road

General Qiao Liang's speech, which we've been allowed to publish, was delivered at the University of Defense, China’s top military school. It casts a light on China’s new strategic thinking.  
... the August 15, 1971 decoupling of the dollar and gold. Since then, humanity has seen the emergence of a financial empire, and this financial empire took the entire human race into its financial system. In fact, the so-called dollar leadership began at this moment. Today it is about 40 years old. After that day, we entered an era of true paper money: behind the dollar there is no longer a precious metal—it relies entirely on the government's credibility, and on support from all over the world, to gain profits. Simply put, the Americans can use a piece of printed green paper to obtain physical wealth from all over the world. We never had such a thing in human history. There were many ways to make profits in human history: sometimes with money exchange, sometimes by using gold or silver; at other times countries used war to gain plunder, but the cost of war remained enormous. After the dollar became simply a piece of green paper, however, the cost-benefit ratio for the United States became extremely low.

... The reason is very simple: in order to control the world, the United States needs the world to use dollars. In order to get the world to use dollars, the Americans made a very clever move in 1973: they linked the dollar and oil by forcing the leading OPEC country, Saudi Arabia, to conduct its global oil transactions in dollars. If you understand that global oil transactions are in US dollars, you can understand why the Americans fight for oil. A direct consequence of war in the oil-producing countries is a surge in oil prices, and a surge in oil prices means that demand for dollars increases. Before the war, for example, $38 could in theory buy a barrel of oil from an oil company. With the war, oil prices nearly quadrupled, reaching $149. Now $38 is only enough to buy a quarter of a barrel, and for the remaining 3/4 of a barrel you are short more than 100 dollars. What to do then? You can only go to the Americans with your own products and resources and hand them over in return for American dollars. And then the US government can confidently, openly, and justifiably print dollars. It is through war—war against the oil-producing countries, which creates high oil prices—that the US creates high demand for dollars.

The American war in Iraq had more than just one goal; it was also about maintaining dollar leadership. Why did George W. Bush insist on war in Iraq? We can now see very clearly that Saddam did not support terrorism or al-Qaeda, nor did he have weapons of mass destruction—so why was Saddam finally brought to the gallows? Because Saddam thought himself smart and played with fire among superpowers. At the official launch of the euro in 1999, Saddam Hussein seized the opportunity to play with fire between the dollar and the euro—the United States and the European Union—and he could not wait to announce that Iraqi oil transactions would occur in euros. This is what angered the Americans; in particular, it produced a chain reaction. Russian President Vladimir Putin, Iranian President Mahmoud Ahmadinejad, and Venezuelan President Hugo Chavez also announced that their countries' oil exports would be settled in euros. Was this not a stab in America's back? Some people think it is too far-fetched to claim that this is why the war in Iraq was mandatory. Then please consider this: what did the Americans do after winning the war in Iraq? Even before seizing Saddam, the Americans set up an Iraqi interim government whose first decree was to declare that Iraqi oil exports would be settled in dollars and not in euros. That is why the Americans fight for dollars.

... On last year's "double 11" [November 11, Singles' Day in China], online shopping reached 50.7 billion yuan in a single day on Alibaba's Taobao. Over the three days after the Thanksgiving holiday, US online and brick-and-mortar store sales totaled the equivalent of 40.7 billion yuan, less than Alibaba's sales in one day. And China was not even counting Netease, Tencent, Jingdong, or revenue from malls. This means that a new era has already arrived, while the American reaction is still slow. Alibaba deals were all made directly with Alipay. What does direct pay mean? It means that currency has already dropped out of the transaction stage, and American leadership is built on the dollar. What is the dollar? It is a currency. In the future, when we no longer use money, traditional money settlement will become useless. When money becomes useless, will an empire built on money still exist? That is the question the Americans must consider.

Major General Smedley Darlington Butler (USMC), author of War is a Racket:
WAR is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives. A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small 'inside' group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many.

I spent 33 years and four months in active military service and during that period I spent most of my time as a high class muscle man for Big Business, for Wall Street and the bankers. In short, I was a racketeer, a gangster for capitalism. I helped make Mexico and especially Tampico safe for American oil interests in 1914. I helped make Haiti and Cuba a decent place for the National City Bank boys to collect revenues in. I helped in the raping of half a dozen Central American republics for the benefit of Wall Street. I helped purify Nicaragua for the International Banking House of Brown Brothers in 1902-1912. I brought light to the Dominican Republic for the American sugar interests in 1916. I helped make Honduras right for the American fruit companies in 1903. In China in 1927 I helped see to it that Standard Oil went on its way unmolested. Looking back on it, I might have given Al Capone a few hints. The best he could do was to operate his racket in three districts. I operated on three continents.

Wednesday, October 28, 2015

Genetic group differences in height and recent human evolution

These recent Nature Genetics papers offer more evidence that group differences in a complex polygenic trait (height), governed by thousands of causal variants, can arise over a relatively short time (~ 10k years) as a result of natural selection (differential response to varying local conditions). One can reach this conclusion well before most of the causal variants have been accounted for, because the frequency differences are found across almost all variants (natural selection affects all of them). Note the first sentence above contradicts many silly things (drift over selection, genetic uniformity of all human subpopulations due to insufficient time for selection, etc.) asserted by supposed experts on evolution, genetics, human biology, etc. over the last 50+ years. The science of human evolution has progressed remarkably in just the last 5 years, thanks mainly to advances in genomic technology.

Cognitive ability is similar to height in many respects, so this type of analysis should be possible in the near future.
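A toy calculation makes the logic concrete. In this hypothetical sketch (all numbers invented purely for illustration), small, consistently signed allele-frequency shifts across many causal variants, each far too small to detect on its own, sum to a clear difference in the predicted polygenic mean between two populations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration: tiny selection-driven frequency shifts across many
# causal variants add up to a detectable polygenic mean difference.
m = 2000                                   # number of causal variants
p1 = rng.uniform(0.1, 0.9, m)              # allele frequencies, population 1
shift = rng.normal(0.01, 0.005, m)         # small, consistently signed shifts
p2 = np.clip(p1 + shift, 0.0, 1.0)         # allele frequencies, population 2

effect = np.full(m, 0.01)                  # equal trait-increasing effects
mean_diff = 2 * np.sum(effect * (p2 - p1)) # diploid difference in polygenic mean

print(f"mean polygenic difference: {mean_diff:.2f}")
```

The point is the aggregation: no single locus shows a significant frequency difference, but because selection pushes nearly all of them in the same direction, the sum is unmistakable.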

See discussion in earlier posts:
Height, breeding values and selection
Recent human evolution: European height
Eight thousand years of natural selection in Europe
No genomic dark matter
Population genetic differentiation of height and body mass index across Europe

Nature Genetics 47, 1357–1362 (2015) doi:10.1038/ng.3401

Across-nation differences in the mean values for complex traits are common [1–8], but the reasons for these differences are unknown. Here we find that many independent loci contribute to population genetic differences in height and body mass index (BMI) in 9,416 individuals across 14 European countries. Using discovery data on over 250,000 individuals and unbiased effect size estimates from 17,500 sibling pairs, we estimate that 24% (95% credible interval (CI) = 9%, 41%) and 8% (95% CI = 4%, 16%) of the captured additive genetic variance for height and BMI, respectively, reflect population genetic differences. Population genetic divergence differed significantly from that in a null model (height, P < 3.94 × 10^−8; BMI, P < 5.95 × 10^−4), and we find an among-population genetic correlation for tall and slender individuals (r = −0.80, 95% CI = −0.95, −0.60), consistent with correlated selection for both phenotypes. Observed differences in height among populations reflected the predicted genetic means (r = 0.51; P < 0.001), but environmental differences across Europe masked genetic differentiation for BMI (P < 0.58).



Height-reducing variants and selection for short stature in Sardinia

Nature Genetics 47, 1352–1356 (2015) doi:10.1038/ng.3403 
We report sequencing-based whole-genome association analyses to evaluate the impact of rare and founder variants on stature in 6,307 individuals on the island of Sardinia. We identify two variants with large effects. One variant, which introduces a stop codon in the GHR gene, is relatively frequent in Sardinia (0.87% versus <0.01% elsewhere) and in the homozygous state causes Laron syndrome involving short stature. We find that this variant reduces height in heterozygotes by an average of 4.2 cm (−0.64 s.d.). The other variant, in the imprinted KCNQ1 gene (minor allele frequency (MAF) = 7.7% in Sardinia versus <1% elsewhere) reduces height by an average of 1.83 cm (−0.31 s.d.) when maternally inherited. Additionally, polygenic scores indicate that known height-decreasing alleles are at systematically higher frequencies in Sardinians than would be expected by genetic drift. The findings are consistent with selection for shorter stature in Sardinia and provide a suggestive human example of the proposed 'island effect' reducing the size of large mammals.


Tuesday, October 27, 2015

Where men are men, and giants walk the earth

In this earlier post I advocated for cognitive filtering via study of hard subjects:
Thought experiment for physicists: imagine a professor throwing copies of Jackson's Classical Electrodynamics at a group of students with the order, "Work out the last problem in each chapter and hand in your solutions to me on Monday!" I suspect that this exercise produces a highly useful rank ordering within the group, with huge differences in number of correct solutions.
In response, a Caltech friend of mine (Page '87, MIT PhD in Physics) sent this old article from the Caltech News. It describes Professor William Smythe and his infamous course on electromagnetism, which was designed to "weed out weaklings"! The article lists six students who survived Smythe's course and went on to win the Nobel prize in Physics.

Vernon Smith, a "weakling" who deliberately avoided the course, went on to win a Nobel prize in Economics. Smith wrote
The first thing to which one has to adapt is the fact that no matter how high people might sample in the right tail of the distribution for "intelligence," ... that sample is still normally distributed in performing on the materials in the Caltech curriculum.
I remind the reader of the Page House motto: Where men are men, and giants walk the earth :-)


Note added: The article mentions George Trilling, a professor at Berkeley I knew in graduate school. I once wrote an electrodynamics solution set for him, and was surprised that he had the temerity to complain about one of my solutions 8-)

Sunday, October 25, 2015

Drone invasion



I bought one of these today for the kids -- their 10th birthday is coming up. Very fun to fly -- reminds me a bit of flying kites when I was a kid. At one point it got away from us and ended up across the street in a neighbor's tree -- the dreaded kite eating tree :-)



See also Drone Art.

Thursday, October 22, 2015

W-2's don't lie

These numbers are derived from aggregate W-2 incomes for 158 million working Americans (see link for full table).
The "raw" average wage, computed as net compensation divided by the number of wage earners, is $7,050,259,213,644.55 divided by 158,186,786, or $44,569.20. Based on data in the table below, about 67.2 percent of wage earners had net compensation less than or equal to the $44,569.20 raw average wage. By definition, 50 percent of wage earners had net compensation less than or equal to the median wage, which is estimated to be $28,851.21 for 2014.
Some rough earnings thresholds by percentile: 90th ~ $95k, 95th ~ $125k, 99th ~ $275k, 99.9th ~ $900k, 99.99th ~ $3.5M.
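The quoted arithmetic is easy to check directly. A few lines of Python reproduce the SSA's raw average from the quoted totals; the mean lands far above the median because the right tail of the wage distribution is so heavy:

```python
# Reproducing the SSA arithmetic quoted above: the "raw" average wage is
# total net compensation divided by the number of wage earners.
total_compensation = 7_050_259_213_644.55
wage_earners = 158_186_786

raw_average = total_compensation / wage_earners
print(f"raw average wage: ${raw_average:,.2f}")   # ≈ $44,569.20

# For comparison, the estimated 2014 median wage from the same table:
median_wage = 28_851.21
print(f"mean / median: {raw_average / median_wage:.2f}")
```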

The Tragedy of Great Power Politics?



Both sides of this issue are well argued in the debate -- in particular by opponents John Mearsheimer and Kevin Rudd. See also The Tragedy of Great Power Politics.


Fear Not!

For relentless technological advance, powered by high g researchers, venture capitalists, capital markets, and government investment in basic research, continues to deliver a cornucopia of benefits to the average joe.


(Note, however, that Moore's Law itself has stalled out recently ...)

Wednesday, October 21, 2015

BBC interview with Robert Plomin

I recommend this BBC interview with Robert Plomin. Robert is a consummate gentleman and scholar, working in a field that inevitably attracts controversy. (Via Dominic Cummings.)
Professor Robert Plomin talks to Jim Al-Khalili about what makes some people smarter than others and why he's fed up with the genetics of intelligence being ignored. Born and raised in Chicago, Robert sat countless intelligence tests at his inner city Catholic school. College was an attractive option mainly because it seemed to pay well. Now he's one of the most cited psychologists in the world. He specialized in behavioural genetics in the mid-seventies when the focus in mainstream psychology was very much on our nurture rather than our nature, and genetics was virtually taboo. But he persisted, conducting several large adoption studies and later twin studies. In 1995 he launched the biggest longitudinal twin study in the UK, TEDS (the Twins Early Development Study) of ten thousand pairs of twins, which continues to this day. In this study and in his other work, he's shown consistently that genetic influences on intelligence are highly significant, much more so than what school you go to, your teachers or home environment. If only the genetic differences between children were fully acknowledged, he believes education could be transformed and parents might stop giving themselves such a hard time.

Monday, October 19, 2015

Men Are Easy



@9 min: 26 million matches per day on Tinder. Male preferences easy to predict, females more complex! Linear vs Multivariate Nonlinear preferences? Calling Geoffrey Miller ...

Some data from OKcupid:



Global Impact Initiative


MSU will be hiring over 100 new professors (beyond ordinary hiring such as retirement replacements), primarily in science and technology areas that address key global challenges. Priority areas include Computation, Advanced Engineering, Genomics, Plant Sciences, Food/Environment, Precision Medicine, and Advanced Physical Sciences. MSU total funding from the Department of Energy and the National Science Foundation ranks in the top 10 among US universities.

Proximate to my own field of theoretical physics, we intend to build one of the best lattice QCD groups in the US. I predict that in the coming decade lattice QCD applied to low-energy nuclear physics will allow first-principles (starting from the level of quarks and gluons) calculations of important dynamical quantities in nuclear physics, such as scattering amplitudes and reaction rates. For the first time, strongly coupled nuclear systems will become amenable to direct computation using the quantum field theory of quarks and gluons.
Three faculty positions in Lattice Quantum Chromodynamics

The Department of Physics & Astronomy (PA), National Superconducting Cyclotron Laboratory (NSCL), and a new department of Computational Math Science and Engineering (CMSE) invite applications from outstanding candidates for three faculty positions at Michigan State University in the area of computational Lattice Quantum Chromodynamics (LQCD). We anticipate filling one or more of the positions at a senior level with tenure. We are looking for candidates with an excellent record in applying large-scale computing to solving cutting-edge scientific problems in the domains of nuclear physics (relevant to the Facility for Rare Isotope Beams) and high energy physics. We expect that the three hires will work together to establish an internationally prominent and well-funded activity in LQCD and its applications to high energy and nuclear physics. These positions are part of a committed multi-year effort to build the computational sciences programs at Michigan State University. Each position will be a joint appointment between the new CMSE department and PA/NSCL. Faculty will have a primary appointment in one of the three participating units (PA, NSCL, CMSE), and we anticipate one appointment in each of these units. In addition to developing a world-leading research group with strong disciplinary and interdisciplinary collaborations, the new faculty members are expected to contribute to the development of an innovative curriculum in computational sciences, at both the graduate and undergraduate levels.

BTW, I almost cried when I saw this happen! Go Green!

Thursday, October 15, 2015

Mein Krieg: time and memory



The footage in this documentary will appeal to any History or WWII buff. The interviews with the old men, juxtaposed with moving images of their wartime youth, are a poignant meditation on time and memory.
Mein Krieg (1991)
Review/Film; Movies Shot By 6 Germans In the War

By JANET MASLIN (NYTimes)

The documentary "Mein Krieg" ("My Private War") offers stunningly un-self-conscious World War II memories from six German veterans, each of whom took a home movie camera with him into the fray. As directed with chilling simplicity by Harriet Eder and Thomas Kufus, it presents both a compilation of eerie wartime scenes and a catalogue of the photographers' present-day attitudes toward their experience. "I wouldn't be talking about these things if my conscience weren't clear as crystal," one of them calmly declares.

The film makers have their own ideas about their interviewees' complicity, as demonstrated by the emphasis they place on that particular remark. But their approach is restrained as they allow each of these six veterans to reminisce about everything from the condition of their movie cameras (which are well maintained and have yielded high-quality home movies) to the indelible sights they have seen. "Here we're going into Warsaw, and this is a tour of the buildings destroyed in '39," one man says, casually describing his images of wholesale destruction.

Much of the material seen here has a peculiar gentleness, as German soldiers cook and exercise and smile for the cameras. (There do not appear to have been restrictions on what the soldiers could photograph, since the later part of the film also includes glimpses of mass graves and civilian casualties.) And some of it recalls the more calculated wartime images we are more used to seeing in connection with Allied troops. So pretty nurses beam at Nazi soldiers; the soldiers' faces betray both fear and determination; the troops are seen celebrating after they shoot down an enemy plane. They were, a photographer recalls about the plane's dead Russian pilot, "full of joy over having been able to destroy this hornet." ...

Monday, October 12, 2015

Neoreaction and the Dark Enlightenment

An essay on neoreaction and the dark enlightenment from The Awl.

See also Fukuyama and Zhang on the China Model , Is there a China model? and Power and paranoia in Silicon Valley.
The Darkness Before the Right

A right-wing politics for the coming century is taking shape. And it’s not slowing down.

... Land’s case for democratic dysfunction is simply stated. Democracy is structurally incapable of rational leadership due to perverse incentive structures. It is trapped in short-termism by the electoral cycle, hard decisions become political suicide, and social catastrophe is acceptable as long as it can be blamed on the other team. Moreover, inter-party competition to “buy votes” leads to a ratchet effect of ever-greater state intervention in the economy—and even if this is periodically reversed, in the long-run it only moves in one direction. ... Rather than accept creeping democratic socialism (which leads to “zombie apocalypse”), Land would prefer to simply abolish democracy and appoint a national CEO. This capitalist Leviathan would be, at a bare minimum, capable of rational long-term planning and aligning individual incentive structures with social well-being (CEO-as-Tiger-Mom). Individuals would have no say in government, but would be generally left alone, and free to leave. This right of “exit” is, for Land, the only meaningful right, and it’s opposed to democratic “voice,” where everyone gets a say, but is bound by the decisions of the majority—the fear being that the majority will decide to self-immolate.

Anti-democratic sentiment is uncommon in the West, so Land’s conclusions appear as shocking, deliberate provocations, which they partly are. ... Pointing to Singapore, Hong Kong, and Shanghai, it argues that economically and socially effective government legitimizes itself, with no need for elections. And this view isn’t limited to the internet right. ...

This brand of authoritarian capitalism has a certain fascist sheen, but in truth it’s closer to a rigidly formalized capitalist technocracy. There’s no mass mobilization, totalitarian social reorganization, or cult of violence here; governing will be done by the governors, and popular sovereignty replaced by the market Mandate of Heaven. There is a strange sort of disillusioned cultural conservatism here as well, albeit one absolutely stripped of moralism. In fact, what’s genuinely creepy about it is the near-sociopathic lack of emotional attachment; it’s a sort of pure incentive-based functionalism, as if from the perspective of a computer or alien. If a person doesn’t produce quantifiable value, they are, objectively, not valuable. Everything else is sentimentality.

...

Capitalism, in this view, is less something we do than something done to us. Contra business-class bromides about the market as the site of creative expression, for Land, as for Marx, capitalism is a fundamentally alien institution in which “the means of production socially impose themselves as an effective imperative.” This means simply that the competitive dynamics of capitalism drive technical progress as an iron law. If one capitalist doesn’t want to build smarter, better machines, he’ll be out-competed by one who does. If Apple doesn’t make you an asshole, Google will. If America doesn’t breed genetically modified super-babies, China will. The market doesn’t run on “greed,” or any intentionality at all. Its beauty—or horror—is its impersonality. Either you adapt, or you die.

Accelerating technological growth, then, is written into capitalism’s DNA. Smart machines make us smarter, allowing us to make smarter machines, in a positive feedback loop that quickly begins to approach infinity, better known in this context as “singularity.” ...
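The "approach infinity" language can be made precise. Any feedback law stronger than linear blows up in finite time; here is the standard toy calculation (my illustration, not Land's own formalism):

```latex
% ordinary feedback: \dot{x} = x gives x(t) = x_0 e^{t}, large but finite for all t.
% super-linear feedback reaches infinity in finite time; e.g., for \dot{x} = x^{2},
% separating variables gives
\frac{dx}{x^{2}} = dt
\quad\Longrightarrow\quad
\frac{1}{x_0} - \frac{1}{x(t)} = t
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - x_0 t},
% which diverges at the finite time t_{*} = 1/x_0: a genuine mathematical singularity.
```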
Somehow I ended up on this "map of neoreaction" -- without my consent, of course. Who are all these people? ;-)

Sunday, October 11, 2015

Additivity in yeast quantitative traits



A new paper from the Kruglyak lab at UCLA shows yet again (this time in yeast) that population variation in quantitative traits tends to be dominated by additive effects. There are deep evolutionary reasons for this to be the case -- see excerpt below (at bottom of this post). For other examples, including humans, mice, chickens, cows, plants, see links here.
Genetic interactions contribute less than additive effects to quantitative trait variation in yeast (http://dx.doi.org/10.1101/019513)

Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here, we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL-QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL-QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies.


Genetic interactions arise when the joint effect of alleles at two or more loci on a phenotype departs from simply adding up the effects of the alleles at each locus. Many examples of such interactions are known, but the relative contribution of interactions to trait variation is a subject of debate [1–5]. We previously generated a panel of 1,008 recombinant offspring (“segregants”) from a cross between two strains of yeast: a widely used laboratory strain (BY) and an isolate from a vineyard (RM) [6]. Using this panel, we estimated the contribution of additive genetic factors to phenotypic variation (narrow-sense or additive heritability) for 46 traits and resolved nearly all of this contribution (on average 87%) to specific genome-wide-significant quantitative trait loci (QTL). ...

We detected nearly 800 significant additive QTL. We were able to refine the location of the QTL explaining at least 1% of trait variance to approximately 10 kb, and we resolved 31 QTL to single genes. We also detected over 200 significant QTL-QTL interactions; in most cases, one or both of the loci also had significant additive effects. For most traits studied, we detected one or a few additive QTL of large effect, plus many QTL and QTL-QTL interactions of small effect. We find that the contribution of QTL-QTL interactions to phenotypic variance is typically less than a quarter of the contribution of additive effects. These results provide a picture of the genetic contributions to quantitative traits at an unprecedented resolution.

... One can test for interactions either between all pairs of markers (full scan), or only between pairs where one marker corresponds to a significant additive QTL (marginal scan). In principle, the former can detect a wider range of interactions, but the latter can have higher power due to a reduced search space. Here, the two approaches yielded similar results, detecting 205 and 266 QTL-QTL interactions, respectively, at an FDR of 10%, with 172 interactions detected by both approaches. In the full scan, 153 of the QTL-QTL interactions correspond to cases where both interacting loci are also significant additive QTL, 36 correspond to cases where one of the loci is a significant additive QTL, and only 16 correspond to cases where neither locus is a significant additive QTL.
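The additive-vs-interaction variance decomposition in the excerpt above is easy to illustrate with a toy cross. In the sketch below (all numbers and effect sizes invented; this is not the paper's pipeline), the phenotype of simulated segregants is built from several additive QTL plus one QTL-QTL interaction, and the variance explained by the additive model alone is compared to the increment from adding all pairwise product terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 30                              # segregants, markers
G = rng.choice([-1.0, 1.0], size=(n, p))     # genotypes coded +/-1

# mostly-additive architecture: 8 additive QTL plus one QTL-QTL interaction
beta = np.zeros(p); beta[:8] = 0.5
y = G @ beta + 0.6 * G[:, 0] * G[:, 1] + rng.standard_normal(n)

# variance explained by the additive model alone
coef_a, *_ = np.linalg.lstsq(G, y - y.mean(), rcond=None)
r2_additive = 1 - np.var(y - y.mean() - G @ coef_a) / np.var(y)

# add all pairwise products; the increment estimates the interaction variance
prods = np.column_stack([G[:, i] * G[:, j]
                         for i in range(p) for j in range(i + 1, p)])
Z = np.hstack([G, prods])
coef_f, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
r2_full = 1 - np.var(y - y.mean() - Z @ coef_f) / np.var(y)
r2_interaction = r2_full - r2_additive       # much smaller than r2_additive
```

With this (invented) architecture the additive share dominates the interaction share, mirroring the 43% vs. 9% pattern reported in the paper.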
For related discussion of nonlinear genetic models, see here:
It is a common belief in genomics that nonlinear interactions (epistasis) in complex traits make the task of reconstructing genetic models extremely difficult, if not impossible. In fact, it is often suggested that overcoming nonlinearity will require much larger data sets and significantly more computing power. Our results show that in broad classes of plausibly realistic models, this is not the case.
Determination of Nonlinear Genetic Architecture using Compressed Sensing (arXiv:1408.6583)
Chiu Man Ho, Stephen D.H. Hsu
Subjects: Genomics (q-bio.GN); Applications (stat.AP)

We introduce a statistical method that can reconstruct nonlinear genetic models (i.e., including epistasis, or gene-gene interactions) from phenotype-genotype (GWAS) data. The computational and data resource requirements are similar to those necessary for reconstruction of linear genetic models (or identification of gene-trait associations), assuming a condition of generalized sparsity, which limits the total number of gene-gene interactions. An example of a sparse nonlinear model is one in which a typical locus interacts with several or even many others, but only a small subset of all possible interactions exist. It seems plausible that most genetic architectures fall in this category. Our method uses a generalization of compressed sensing (L1-penalized regression) applied to nonlinear functions of the sensing matrix. We give theoretical arguments suggesting that the method is nearly optimal in performance, and demonstrate its effectiveness on broad classes of nonlinear genetic models using both real and simulated human genomes.
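The core technique, L1-penalized regression applied to a sensing matrix augmented with nonlinear (pairwise product) features, can be sketched in a few lines. This is a minimal toy illustration with invented data and a hand-rolled ISTA solver, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 100
X = rng.choice([-1.0, 1.0], size=(n, p))     # genotypes coded +/-1

# sparse nonlinear model: a few additive effects plus one gene-gene interaction
beta = np.zeros(p); beta[:5] = 1.0
y = X @ beta + 1.5 * X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(n)

# augment the sensing matrix with pairwise products (restricted for brevity,
# in the spirit of generalized sparsity)
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
Z = np.hstack([X] + [(X[:, i] * X[:, j])[:, None] for i, j in pairs])

def lasso_ista(A, y, lam, iters=500):
    """ISTA for 0.5*||y - Aw||^2 + lam*||w||_1."""
    w = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(iters):
        w = w - A.T @ (A @ w - y) / L        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w

w = lasso_ista(Z, y, lam=5.0)
# the five additive effects and the (0,1) interaction (column p + 0) dominate
```

With enough samples the recovered coefficient vector puts large weight on the true additive loci and on the product column encoding the interaction, while the null coefficients stay near zero; this is the "phase transition" regime.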
I've discussed additivity many times previously, so I'll just quote below from Additivity and complex traits in mice:
You may have noticed that I am gradually collecting copious evidence for (approximate) additivity. Far too many scientists and quasi-scientists are infected by the epistasis or epigenetics meme, which is appealing to those who "revel in complexity" and would like to believe that biology is too complex to succumb to equations. ...

I sometimes explain things this way:

There is a deep evolutionary reason behind additivity: nonlinear mechanisms are fragile and often "break" due to DNA recombination in sexual reproduction. Effects which are only controlled by a single locus are more robustly passed on to offspring. ...

Many people confuse the following statements:

"The brain is complex and nonlinear and many genes interact in its construction and operation."

"Differences in brain performance between two individuals of the same species must be due to nonlinear (non-additive) effects of genes."

The first statement is true, but the second does not appear to be true across a range of species and quantitative traits. On the genetic architecture of intelligence and other quantitative traits (p.16):
... The preceding discussion is not intended to convey an overly simplistic view of genetics or systems biology. Complex nonlinear genetic systems certainly exist and are realized in every organism. However, quantitative differences between individuals within a species may be largely due to independent linear effects of specific genetic variants. As noted, linear effects are the most readily evolvable in response to selection, whereas nonlinear gadgets are more likely to be fragile to small changes. (Evolutionary adaptations requiring significant changes to nonlinear gadgets are improbable and therefore require exponentially more time than simple adjustment of frequencies of alleles of linear effect.) One might say that, to first approximation, Biology = linear combinations of nonlinear gadgets, and most of the variation between individuals is in the (linear) way gadgets are combined, rather than in the realization of different gadgets in different individuals.

Linear models work well in practice, allowing, for example, SNP-based prediction of quantitative traits (milk yield, fat and protein content, productive life, etc.) in dairy cattle. ...
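The fragility argument is easy to check numerically. In the haploid toy model below (everything invented), free recombination degrades parent-offspring resemblance for a purely epistatic trait about twice as fast as for an additive one, because a pairwise effect survives only if both interacting alleles are co-transmitted:

```python
import numpy as np

rng = np.random.default_rng(2)
fams, loci = 5000, 20                       # haploid toy genomes, alleles +/-1
mom = rng.choice([-1.0, 1.0], size=(fams, loci))
dad = rng.choice([-1.0, 1.0], size=(fams, loci))
pick = rng.random((fams, loci)) < 0.5       # free recombination
kid = np.where(pick, mom, dad)

def additive(g):                            # sum of independent per-locus effects
    return g.sum(axis=1)

def epistatic(g):                           # trait depends on pairs of loci jointly
    return (g[:, ::2] * g[:, 1::2]).sum(axis=1)

# parent-offspring correlation: additive effects survive recombination better
r_add = np.corrcoef(additive(mom), additive(kid))[0, 1]   # approx 0.5
r_epi = np.corrcoef(epistatic(mom), epistatic(kid))[0, 1] # approx 0.25
```

Each locus is transmitted with probability 1/2, so an additive effect is passed on half the time, while a two-locus "gadget" requires both transmissions and is passed on only a quarter of the time; selection therefore acts far more efficiently on the additive variance.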
See also Explain it to me like I'm five years old.

Wednesday, October 07, 2015

"1-bit" Compressed Sensing and Genetic Disease


This is an ASHG poster (click for larger version) describing work on predictive modeling of genetic disease using Compressed Sensing. Our previous work dealt with continuous traits (quantitative phenotypes). In the case of disease, one sometimes only has binary data to work with: individuals in the sample are either cases (have the condition) or controls (do not have the condition). Their underlying genetic susceptibility to the condition is not directly measurable. However, sophisticated techniques can use even this type of data to deduce the underlying genetic architecture. As in our earlier work, we demonstrate a "phase transition" in the performance of our algorithms as the amount of data available increases.
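To get a feel for how binary case/control labels can still reveal the underlying architecture, here is a toy sketch (invented data; a stand-in using L1-penalized logistic regression, not the poster's actual 1-bit compressed sensing algorithm). A sparse liability model is fit from the signs alone, and the true risk variants are recovered:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 600, 50
X = rng.choice([-1.0, 1.0], size=(n, p))      # genotypes coded +/-1
beta = np.zeros(p); beta[:5] = 1.0            # 5 true risk variants
liability = X @ beta + rng.standard_normal(n) # unobserved continuous liability
y = (liability > 0).astype(float)             # "1-bit" data: case/control only

# L1-penalized logistic regression by proximal gradient descent
w = np.zeros(p)
step, lam = 1.0, 0.02
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (prob - y) / n               # gradient of averaged log-loss
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold

support = set(np.argsort(np.abs(w))[-5:])     # largest recovered coefficients
```

Even though the continuous liability is never observed, the five largest fitted coefficients pick out the five causal variants.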

See related posts on quantitative traits: linear models 2, linear models, nonlinear method, and this talk: Genetic architecture and predictive modeling of quantitative traits.

Sunday, October 04, 2015

Understanding Genius: Helix Center roundtable video

You can watch the 2+ hour video of the roundtable on YouTube. I enjoyed the discussion but I don't like watching or listening to recordings of myself, so you'll have to tell me what you think of it ...

I was very flattered that several readers of the blog showed up for the event. Thanks to everyone who made it!



I'll be part of this roundtable discussion Saturday, Oct 3 in NYC. It's open to the general public and will be live streamed at the YouTube link above. I'm pleased to be on the panel with (among others) Dean Simonton, a UC Davis psychology professor and author of numerous books related to the theme of this meeting.
The Helix Center for Interdisciplinary Investigation
The Marianne & Nicholas Young Auditorium
247 East 82nd Street
New York, NY 10028
Understanding Genius

Schopenhauer defined genius in relation to the more conventional quality of talent. “Talent hits a target others miss. Genius hits a target no one sees.” Is originality indeed the sine qua non of genius? Is there, following Kant, a radical separation of the aesthetic genius from the brilliant scientific mind? What further distinctions might be made between different types of genius? If “The Child is father of the Man,” why don’t child prodigies always grow up to become adult geniuses?

Wednesday, September 30, 2015

Disruptive mutations and the genetic architecture of autism


New results on the genetic architecture of autism support Mike Wigler's Unified Theory. See earlier post De Novo Mutations and Autism. Recent increases in the incidence of autism could be mainly due to greater diagnostic awareness. However, the new result that women can be carriers of autism-linked variants without exhibiting the same kinds of symptoms as men might alter the usual analysis of the role of assortative mating. Perhaps women who are carriers are predisposed to marry nerdy (but mostly asymptomatic) males who also carry above average mutational load in autism genes?

I suspect many of the ~200 genes identified in this study will overlap with the ~80 SNPs recently found by SSGAC to be associated with cognitive ability. The principle of continuity suggests that in addition to ultra-rare variants with "devastating" effects, there are many moderately rare variants (also under negative, but weaker, selection due to smaller effect size) affecting the same pathways. These would contribute to variance in cognitive ability within the normal population. More discussion in section 3 of On the Genetic Architecture of Intelligence.
Neuroscience News: Quantitative study identifies 239 genes whose ‘vulnerability’ to devastating de novo mutation makes them priority research targets.

... devastating “ultra-rare” mutations of genes that they classify as “vulnerable” play a causal role in roughly half of all ASD cases. The vulnerable genes to which they refer harbor what they call an LGD, or likely gene-disruption. These LGD mutations can occur “spontaneously” between generations, and when that happens they are found in the affected child but not found in either parent.

Although LGDs can impair the function of key genes, and in this way have a deleterious impact on health, this is not always the case. The study, whose first author is the quantitative biologist Ivan Iossifov, a CSHL assistant professor and on faculty at the New York Genome Center, finds that “autism genes” – i.e., those that, when mutated, may contribute to an ASD diagnosis – tend to have fewer mutations than most genes in the human gene pool.

This seems paradoxical, but only on the surface. Iossifov explains that genes with devastating de novo LGD mutations, when they occur in a child and give rise to autism, usually don’t remain in the gene pool for more than one generation before they are, in evolutionary terms, purged. This is because those born with severe autism rarely reproduce.

The team’s data helps the research community prioritize which genes with LGDs are most likely to play a causal role in ASD. The team pares down a list of about 500 likely causal genes to slightly more than 200 best “candidate” autism genes.

The current study also sheds new light on the transmission to children of LGDs that are carried by parents who harbor them but whose health is nevertheless not severely affected. Such transmission events were observed and documented in the families used in the study, comprising the Simons Simplex Collection (SSC). When parents carry potentially devastating LGD mutations, these are more frequently found in the ASD-affected children than in their unaffected children, and most often come from the mother.

This result supports a theory first published in 2007 by senior author Michael Wigler, a CSHL professor, and Dr. Kenny Ye, a statistician at Albert Einstein College of Medicine. They predicted that unaffected mothers are “carriers” of devastating mutations that are preferentially transmitted to children affected with severe ASD. Females have an as yet unexplained factor that protects them from mutations which, when they occur in males, will be significantly more likely to cause ASD. It is well known that at least four times as many males as females have ASD.

Wigler’s 2007 “unified theory” of sporadic autism causation predicted precisely this effect. “Devastating de novo mutations in autism genes should be under strong negative selection pressure,” he explains. “And that is among the findings of the paper we’re publishing today. Our analysis also revealed that a surprising proportion of rare devastating mutations transmitted by parents occurs in genes expressed in the embryonic brain.” This finding tends to support theories suggesting that at least some of the gene mutations with the power to cause ASD occur in genes that are indispensable for normal brain development.
Here is the paper at PNAS:
Low load for disruptive mutations in autism genes and their biased transmission

We previously computed that genes with de novo (DN) likely gene-disruptive (LGD) mutations in children with autism spectrum disorders (ASD) have high vulnerability: disruptive mutations in many of these genes, the vulnerable autism genes, will have a high likelihood of resulting in ASD. Because individuals with ASD have lower fecundity, such mutations in autism genes would be under strong negative selection pressure. An immediate prediction is that these genes will have a lower LGD load than typical genes in the human gene pool. We confirm this hypothesis in an explicit test by measuring the load of disruptive mutations in whole-exome sequence databases from two cohorts. We use information about mutational load to show that lower and higher intelligence quotients (IQ) affected individuals can be distinguished by the mutational load in their respective gene targets, as well as to help prioritize gene targets by their likelihood of being autism genes. Moreover, we demonstrate that transmission of rare disruptions in genes with a lower LGD load occurs more often to affected offspring; we show transmission originates most often from the mother, and transmission of such variants is seen more often in offspring with lower IQ. A surprising proportion of transmission of these rare events comes from genes expressed in the embryonic brain that show sharply reduced expression shortly after birth.

Saturday, September 26, 2015

Expert Prediction: hard and soft

Jason Zweig writes about Philip Tetlock's Good Judgement Project below. See also Expert Predictions, Perils of Prediction, and this podcast talk by Tetlock.

A quick summary: good amateurs (i.e., smart people who think probabilistically and are well read) typically perform as well as or better than area experts (e.g., PhDs in Social Science, History, Government; MBAs) when it comes to predicting real world outcomes. The marginal returns (in predictive power) to special "expertise" in soft subjects are small. (Most of the returns are in the form of credentialing or signaling ;-)
WSJ: ... I think Philip Tetlock’s “Superforecasting: The Art and Science of Prediction,” co-written with the journalist Dan Gardner, is the most important book on decision making since Daniel Kahneman’s “Thinking, Fast and Slow.” (I helped write and edit the Kahneman book but receive no royalties from it.) Prof. Kahneman agrees. “It’s a manual to systematic thinking in the real world,” he told me. “This book shows that under the right conditions regular people are capable of improving their judgment enough to beat the professionals at their own game.”

The book is so powerful because Prof. Tetlock, a psychologist and professor of management at the University of Pennsylvania’s Wharton School, has a remarkable trove of data. He has just concluded the first stage of what he calls the Good Judgment Project, which pitted some 20,000 amateur forecasters against some of the most knowledgeable experts in the world.

The amateurs won — hands down. Their forecasts were more accurate more often, and the confidence they had in their forecasts — as measured by the odds they set on being right — was more accurately tuned.

The top 2%, whom Prof. Tetlock dubs “superforecasters,” have above-average — but rarely genius-level — intelligence. Many are mathematicians, scientists or software engineers; but among the others are a pharmacist, a Pilates instructor, a caseworker for the Pennsylvania state welfare department and a Canadian underwater-hockey coach.

The forecasters competed online against four other teams and against government intelligence experts to answer nearly 500 questions over the course of four years: Will the president of Tunisia go into exile in the next month? Will the gold price exceed $1,850 on Sept. 30, 2011? Will OPEC agree to cut its oil output at or before its November 2014 meeting?

It turned out that, after rigorous statistical controls, the elite amateurs were on average about 30% more accurate than the experts with access to classified information. What’s more, the full pool of amateurs also outperformed the experts. ...
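Forecast accuracy in tournaments of this kind is typically graded with Brier scores: the mean squared difference between the stated probability and the 0/1 outcome, lower being better. A minimal sketch of the metric (my illustration, not GJP's exact scoring rules):

```python
# Brier score: mean squared difference between forecast probability and outcome
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# a confident, correct forecaster beats a hedger on the same two events
confident = brier([0.9, 0.8], [1, 1])   # -> 0.025
hedged    = brier([0.5, 0.5], [1, 1])   # -> 0.25
```

Note the metric rewards calibration as well as accuracy: overconfident wrong forecasts are punished heavily, which is exactly the dimension on which the amateurs' "more accurately tuned" confidence paid off.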
In technical subjects, such as chemistry or physics or mathematics, experts vastly outperform lay people even on questions related to everyday natural phenomena (let alone specialized topics). See, e.g., examples in Thinking Physics or Physics for Future Presidents. Because these fields have access to deep and challenging questions with demonstrably correct answers, the ability to answer these questions (a combination of cognitive ability and knowledge) is an obviously real and useful construct. See earlier post The Differences are Enormous:
Luis Alvarez laid it out bluntly:
The world of mathematics and theoretical physics is hierarchical. That was my first exposure to it. There's a limit beyond which one cannot progress. The differences between the limiting abilities of those on successively higher steps of the pyramid are enormous.
... People who work in "soft" fields (even in science) don't seem to understand this stark reality. I believe it is because their fields do not have ready access to right and wrong answers to deep questions. When those are available, huge differences in cognitive power are undeniable, as is the utility of this power.
Thought experiment for physicists: imagine a professor throwing copies of Jackson's Classical Electrodynamics at a group of students with the order, "Work out the last problem in each chapter and hand in your solutions to me on Monday!" I suspect that this exercise produces a highly useful rank ordering within the group, with huge differences in number of correct solutions.

Friday, September 25, 2015

Largest repositories of genomic data

This list of the largest repositories of genetic data appeared in the 25 September 2015 issue of Science. Note that the quality and extent of phenotyping vary significantly.
23andMe

SIZE: >1 million GENETIC DATA: SNPs

This popular personal genomics company now hopes to apply its data to drug discovery (see main story, p. 1472).

ANCESTRY.COM

SIZE: >1 million GENETIC DATA: SNPs

This genealogy firm now has a collaboration with the Google-funded biotech Calico to look for longevity genes.

HUMAN LONGEVITY, INC.

SIZE: 1 million planned GENETIC DATA: whole genomes

Founded by genome pioneer Craig Venter, this company plans to sequence 100,000 people a year to look for aging-related genes.

100K WELLNESS PROJECT

SIZE: 107 (100,000 planned) GENETIC DATA: whole genomes

Led by another sequencing leader, Leroy Hood, this project is taking a systems approach to genetics and health.

MILLION VETERAN PROGRAM

SIZE: 390,000 (1 million planned) GENETIC DATA: SNPs, exomes, whole genomes

This U.S. Department of Defense–funded effort is probing the genetics of kidney and heart disease and substance abuse.

U.S. NATIONAL RESEARCH COHORT

SIZE: 1 million planned GENETIC DATA: to be determined

Part of President Obama's Precision Medicine Initiative, this project will use genetics to tailor health care to individuals.

UK BIOBANK

SIZE: 500,000 GENETIC DATA: SNPs

This study of middle-aged Britons is probing links between lifestyle, genes, and common diseases.

100,000 GENOMES PROJECT

SIZE: 5500 (75,000 normal + 25,000 tumor genomes planned) GENETIC DATA: whole genomes

This U.K.-funded project focusing on cancer and rare diseases aims to integrate whole genomes into clinical care.

deCODE GENETICS

SIZE: 140,000 GENETIC DATA: SNPs, whole genomes

Now owned by Amgen, this pioneering Icelandic company hunted for disease-related genes in the island country.

KAISER-PERMANENTE BIOBANK

SIZE: 200,000 (500,000 planned) GENETIC DATA: SNPs

This health maintenance organization has published on telomeres and disease risks.

GEISINGER MYCODE

SIZE: 60,000 (250,000 planned) GENETIC DATA: exomes

Geisinger, a Pennsylvania health care provider, works with Regeneron Pharmaceuticals to study DNA links to disease.

VANDERBILT'S BIOVU

SIZE: 192,000 GENETIC DATA: SNPs

Focused on genes that affect common diseases and drug response, BioVU data have been permanently deidentified.

BIOBANK JAPAN

SIZE: 200,000 GENETIC DATA: SNPs

This study collected DNA from volunteers between 2003 and 2007 and is now looking at genetics of common diseases.

CHINA KADOORIE BIOBANK

SIZE: 510,000 GENETIC DATA: SNPs

This study is probing links between genetics, lifestyle, and common diseases.

EAST LONDON GENES & HEALTH

SIZE: 100,000 planned GENETIC DATA: exomes

One aim is to find healthy “human knockouts”—people who lack a specific gene—in a population in which marrying relatives is common.

SAUDI HUMAN GENOME PROGRAM

SIZE: 100,000 planned GENETIC DATA: exomes

One aim of this national project is to find genes underlying rare inherited conditions.

CHILDREN'S HOSPITAL OF PHILADELPHIA

SIZE: 100,000 GENETIC DATA: SNPs, exomes

The world's largest pediatric biorepository connects DNA to the hospital's health records for studies of childhood diseases.

Wednesday, September 23, 2015

Understanding Genius: roundtable at the Helix Center, NYC



I'll be part of this roundtable discussion Saturday, Oct 3 in NYC. It's open to the general public and will be live streamed at the YouTube link above. I'm pleased to be on the panel with (among others) Dean Simonton, a UC Davis psychology professor and author of numerous books related to the theme of this meeting.
The Helix Center for Interdisciplinary Investigation
The Marianne & Nicholas Young Auditorium
247 East 82nd Street
New York, NY 10028
Understanding Genius

Schopenhauer defined genius in relation to the more conventional quality of talent. “Talent hits a target others miss. Genius hits a target no one sees.” Is originality indeed the sine qua non of genius? Is there, following Kant, a radical separation of the aesthetic genius from the brilliant scientific mind? What further distinctions might be made between different types of genius? If “The Child is father of the Man,” why don’t child prodigies always grow up to become adult geniuses?

Saturday, September 19, 2015

SNP hits on cognitive ability from 300k individuals

James Lee talk at ISIR 2015 (via James Thompson) reports on 74 hits at genome-wide statistical significance (p < 5E-8) using educational attainment as the phenotype. Most of these will also turn out to be hits on cognitive ability.

To quote James: "Shock and Awe" for those who doubt that cognitive ability is influenced by genetic variants. This is just the tip of the iceberg, though. I expect thousands more such variants to be discovered before we have accounted for all of the heritability.
74 GENOMIC SITES ASSOCIATED WITH EDUCATIONAL ATTAINMENT PROVIDE INSIGHT INTO THE BIOLOGY OF COGNITIVE PERFORMANCE 
James J Lee

University of Minnesota Twin Cities
Social Science Genetic Association Consortium

Genome-wide association studies (GWAS) have revealed much about the biological pathways responsible for phenotypic variation in many anthropometric traits and diseases. Such studies also have the potential to shed light on the developmental and mechanistic bases of behavioral traits.

Toward this end we have undertaken a GWAS of educational attainment (EA), an outcome that shows phenotypic and genetic correlations with cognitive performance, personality traits, and other psychological phenotypes. We performed a GWAS meta-analysis of ~293,000 individuals, applying a variety of methods to address quality control and potential confounding. We estimated the genetic correlations of several different traits with EA, in essence by determining whether single-nucleotide polymorphisms (SNPs) showing large statistical signals in a GWAS meta-analysis of one trait also tend to show such signals in a meta-analysis of another. We used a variety of bio-informatic tools to shed light on the biological mechanisms giving rise to variation in EA and the mediating traits affecting this outcome. We identified 74 independent SNPs associated with EA (p < 5E-8). The ability of the polygenic score to predict within-family differences suggests that very little of this signal is due to confounding. We found that both cognitive performance (0.82) and intracranial volume (0.39) show substantial genetic correlations with EA. Many of the biological pathways significantly enriched by our signals are active in early development, affecting the proliferation of neural progenitors, neuron migration, axonogenesis, dendrite growth, and synaptic communication. We nominate a number of individual genes of likely importance in the etiology of EA and mediating phenotypes such as cognitive performance.
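The within-family point deserves emphasis: sibling comparisons difference away shared environment and ancestry, so a polygenic score that predicts sibling differences cannot be riding on family-level confounds. A toy haploid simulation of the logic (all effect sizes invented; not SSGAC's analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
fams, snps = 3000, 74                 # 74 SNPs, as in the talk; effects invented
eff = rng.normal(0.0, 1.0, snps)
mom = rng.integers(0, 2, (fams, snps)).astype(float)   # haploid toy genomes
dad = rng.integers(0, 2, (fams, snps)).astype(float)

def child():
    pick = rng.random((fams, snps)) < 0.5              # free recombination
    return np.where(pick, mom, dad)

sib1, sib2 = child(), child()

def trait(g):                          # additive trait plus non-genetic noise
    return g @ eff + 3.0 * rng.standard_normal(fams)

# sibling differences cancel anything shared within a family, so any remaining
# predictive power of the score must be causal signal, not confounding
score_diff = (sib1 - sib2) @ eff
trait_diff = trait(sib1) - trait(sib2)
r = np.corrcoef(score_diff, trait_diff)[0, 1]          # clearly positive
```

In the simulation the polygenic score predicts which sibling has the higher trait value even though the siblings share parents and (by construction) environment, which is the force of the confounding argument in the abstract.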
For a hint at what to expect as more data become available, see Five Years of GWAS Discovery and On the genetic architecture of intelligence and other quantitative traits.


What was once science fiction will soon be reality.
Long ago I sketched out a science fiction story involving two Junior Fellows, one a bioengineer (a former physicist, building the next generation of sequencing machines) and the other a mathematician. The latter, an eccentric, was known for collecting signatures -- signed copies of papers and books authored by visiting geniuses (Nobelists, Fields Medalists, Turing Award winners) attending the Society's Monday dinners. He would present each luminary with an ornate (strangely sticky) fountain pen and a copy of the object to be signed. Little did anyone suspect the real purpose: collecting DNA samples to be turned over to his friend for sequencing! The mathematician is later found dead under strange circumstances. Perhaps he knew too much! ...

Friday, September 18, 2015

Bourdieu and the Economy of Symbolic Exchange


From Bobos in Paradise by David Brooks. This part of Bourdieu's oeuvre is, of course, required reading for all academics. By academics, I don't just mean humanists and social scientists. Even those in the hardest of sciences and technology would benefit from considering the political / symbolic economy of their field. Why, exactly, did most positions in top theoretical physics groups go to string theorists over a 20+ year period? See String Theory Quotes, String Theory and All That, Voting and Weighing.
The Economy of Symbolic Exchange

If a university were to offer a course of study on the marketplace of ideas, the writer who would be at the heart of the curriculum would be Pierre Bourdieu. Bourdieu is a French sociologist who is influential among his colleagues but almost entirely unread outside academia because of his atrocious prose style. Bourdieu’s aim is to develop an economy of symbolic exchanges, to delineate the rules and patterns of the cultural and intellectual marketplace. His basic thesis is that all intellectual and cultural players enter the marketplace with certain forms of capital. They may have academic capital (the right degrees), cultural capital (knowledge of a field or art form, a feel for the proper etiquette), linguistic capital (the ability to use language), political capital (the approved positions or affiliations), or symbolic capital (a famous fellowship or award). Intellectuals spend their careers trying to augment their capital and convert one form of capital into another. One intellectual might try to convert knowledge into a lucrative job; another might convert symbolic capital into invitations to exclusive conferences at tony locales; a third might seek to use linguistic ability to destroy the reputations of colleagues so as to become famous or at least controversial.

Ultimately, Bourdieu writes, intellectuals compete to gain a monopoly over the power to consecrate. Certain people and institutions at the top of each specialty have the power to confer prestige and honor on favored individuals, subjects, and styles of discourse. Those who hold this power of consecration influence taste, favor certain methodologies, and define the boundary of their discipline. To be chief consecrator is the intellectual’s dream.

Bourdieu doesn’t just look at the position an intellectual may hold at a given moment; he looks at the trajectory of a career, the successive attitudes, positions, and strategies a thinker adopts while rising or competing in the marketplace. A young intellectual may enter the world armed only with personal convictions. He or she will be confronted, Bourdieu says, with a diverse “field.” There will be daring radical magazines over on one side, staid establishment journals on another, dull but worthy publishing houses here, vanguard but underfunded houses over there. The intellectual will be confronted with rivalries between schools and between established figures. The complex relationships between these and other players in the field will be the tricky and shifting environment in which the intellectual will try to make his or her name. Bourdieu is quite rigorous about the interplay of these forces, drawing elaborate charts of the various fields of French intellectual life, indicating the power and prestige levels of each institution. He identifies which institutions have consecration power over which sections of the field.

Young intellectuals will have to know how to invest their capital to derive maximum “profit,” and they will have to devise strategies for ascent—whom to kiss up to and whom to criticize and climb over. Bourdieu’s books detail a dazzling array of strategies intellectuals use to get ahead. Bourdieu is not saying that the symbolic field can be understood strictly by economic principles. Often, he says, the “loser wins” rule applies. Those who most vociferously and publicly renounce material success win prestige and honor that can be converted into lucre. Nor does Bourdieu even claim that all of the strategies are self-conscious. He says that each intellectual possesses a “habitus,” or personality and disposition, that leads him or her in certain directions and toward certain fields. Moreover, the intellectual will be influenced, often unwillingly or unknowingly, by the gravitational pull of the rivalries and controversies of the field. Jobs will open up, grants will appear, furies will rage. In some ways the field dominates and the intellectuals are blown about within it.

Bourdieu hasn’t quite established himself as the Adam Smith of the symbolic economy. And it probably wouldn’t be very useful for a young intellectual to read him in hopes of picking up career tips, as a sort of Machiavellian Guide for Nobel Prize Wannabes. Rather, Bourdieu is most useful because he puts into prose some of the concepts that most other intellectuals have observed but have not systematized. Intellectual life is a mixture of careerism and altruism (like most other professions). Today the Bobo intellectual reconciles the quest for knowledge with the quest for the summer house.
See this comment, made 11 years ago!
Steve: We are living through a very bad time in particle theory. Without significant experimental guidance, all we are left with is speculation and social dynamics driving the field. I hope things will get better when LHC data starts coming in - at least most of the models currently under consideration will be ruled out (although, of course, not string theory :-)

I will probably write a post at some point about how scientific fields which run through fallow experimental periods longer than 20 years (the length of a person's academic career) are in danger of falling into the traps which beset the humanities and social sciences. These were all discussed by Bourdieu long ago.

Wednesday, September 16, 2015

Gun Crazy

I grew up out in the country in Iowa. Our address was RR1 = "Rural Route 1" :-)  We had a creek, pond, dirtbike (motorcycle) track, and other fun stuff on our property. One of the things I enjoyed most was target shooting and plinking with my .22 -- I'd just walk out the back door and start shooting. With my scope zeroed in I could easily hit a squirrel at 50-100 yards from a standing position.

I haven't had a gun since I left for college, but now that I have kids and live in a gun-friendly state, I thought I might get back into shooting a bit. There are two good ranges (one free) near my house, and my kids are at an age where they can learn gun safety and how to shoot. My wife disagrees, but to me knowing how to handle a gun is a basic life skill.

Gun technology has matured quite a bit since I was a kid. There is amazing stuff available at reasonable prices (red dot scopes!). With YouTube I got up to speed on the new gear really fast. You can't get a look at the internals of most guns while they are at the store, but online you can find complete disassembly videos.

This is a .22lr in an AR15 pattern (Smith and Wesson M&P 15-22, 5.5 lbs):

This is a 3 lb semi-auto .22lr (all polymer except the barrel and firing mechanism) available for just over $100 (Mossberg Blaze):

Ruger pistol, 25 shots at 5m--15m:

Thursday, September 10, 2015

Colleges ranked by Nobel, Fields, Turing and National Academies output

This Quartz article describes Jonathan Wai's research on the rate at which different universities produce alumni who make great contributions to science, technology, medicine, and mathematics. I think the most striking result is the range of outcomes: the top school outperforms good state flagships (R1 universities) by as much as a thousand times. In my opinion the main causative factor is simply filtering by cognitive ability and other personality traits like drive. Psychometrics works!
Quartz: Few individuals will be remembered in history for discovering a new law of nature, revolutionizing a new technology or captivating the world with their ideas. But perhaps these contributions say more about the impact of a university or college than test scores and future earnings. Which universities are most likely to produce individuals with lasting effect on our world?

The US News college rankings emphasize subjective reputation, student retention, selectivity, graduation rate, faculty and financial resources and alumni giving. Recently, other rankings have proliferated, including some based on objective long-term metrics such as individual earning potential. Yet, we know of no evaluations of colleges based on lasting contributions to society. Of course, such contributions are difficult to judge. In the analysis below, we focus primarily on STEM (science, technology, engineering and medicine/mathematics) contributions, which are arguably the least subjective to evaluate, and increasingly more valued in today’s workforce.

We examined six groups of exceptional achievers divided into two tiers, looking only at winners who attended college in the US. Our goal is to create a ranking among US colleges, but of course one could broaden the analysis if desired. The first level included all winners of the Nobel Prize (physics, chemistry, medicine, economics, literature, and peace), Fields Medal (mathematics) and the Turing Award (computer science). The second level included individuals elected to the National Academy of Sciences (NAS), National Academy of Engineering (NAE) or Institute of Medicine (IOM). The National Academies are representative of the top few thousand individuals in all of STEM.

We then traced each of these individuals back to their undergraduate days, creating two lists to examine whether the same or different schools rose to the top. We wanted to compare results across these two lists to see if findings in the first tier of achievement replicated in the second tier of achievement and to increase sample size to avoid the problem of statistical flukes.

Simply counting up the number of awards likely favors larger schools and alumni populations. We corrected for this by computing a per capita rate of production, dividing the number of winners from a given university by an estimate of the relative size of the alumni population. Specifically, we used the total number of graduates over the period 1966-2013 (an alternative method of estimating base population over 100 to 150 years led to very similar lists). This allowed us to objectively compare newer and smaller schools with older and larger schools.

In order to reduce statistical noise, we eliminated schools with only one or two winners of the Nobel, Fields or Turing prize. This resulted in only 25 schools remaining, which are shown below ...
The vast majority of schools have never produced a winner. #114 Ohio State and #115 Penn State, which have highly ranked research programs in many disciplines, have each produced one winner. Despite being top tier research universities, their per capita rate of production is over 400 times lower than that of the highest ranked school, Caltech. Of course, our ranking doesn’t capture all the ways individuals can impact the world. However, achievements in the Nobel categories, plus math and computer science, are of great importance and have helped shape the modern world.

As a replication check with a larger sample, we move to the second category of achievement: National Academy of Science, Engineering or Medicine membership. The National Academies originated in an Act of Congress, signed by President Abraham Lincoln in 1863. Lifetime membership is conferred through a rigorous election process and is considered one of the highest honors a researcher can receive.
The results are strikingly similar across the two lists. If we had included schools with two winners in the Nobel/Fields/Turing list, Haverford, Oberlin, Rice, and Johns Hopkins would have been in the top 25 on both. For comparison, very good research universities such as #394 Arizona State, #396 Florida State and #411 University of Georgia are outperformed by the top school (Caltech) by 600 to 900 times. To give a sense of the full range: the per capita rate of production of top school to bottom school was about 449 to one for the Nobel/Fields/Turing list and 1788 to one for the National Academies list. These lists include only schools that produced at least one winner—the majority of colleges have produced zero.

What causes these drastically different odds ratios across a wide variety of leading schools? The top schools on our lists tend to be private, with significant financial resources. However, the top public university, UC Berkeley, is ranked highly on both lists: #13 on the Nobel/Fields/Turing and #31 on the National Academies. Perhaps surprisingly, many elite liberal arts colleges, even those not focused on STEM education, such as Swarthmore and Amherst, rose to the top. One could argue that the playing field here is fairly even: accomplished students at Ohio State, Penn State, Arizona State, Florida State and University of Georgia, which lag the leaders by factors of hundreds or almost a thousand, are likely to end up at the same highly ranked graduate programs as individuals who attended top schools on our list. It seems reasonable to conclude that large differences in concentration or density of highly able students are at least partly responsible for these differences in outcome.

Sports fans are unlikely to be surprised by our results. Among all college athletes only a few will win professional or world championships. Some collegiate programs undoubtedly produce champions at a rate far in excess of others. It would be uncontroversial to attribute this differential rate of production both to differences in ability of recruited athletes as well as the impact of coaching and preparation during college. Just as Harvard has a far higher percentage of students scoring 1600 on the SAT than most schools and provides advanced courses suited to those individuals, Alabama may have more freshman defensive ends who can run the forty yard dash in under 4.6 seconds, and the coaches who can prepare them for the NFL.

One intriguing result is the strong correlation (r ~ 0.5) between our ranking (over all universities) and the average SAT score of each student population, which suggests that cognitive ability, as measured by standardized tests, likely has something to do with great contributions later in life. By selecting heavily on measurable characteristics such as cognitive ability, an institution obtains a student body with a much higher likelihood of achievement. The identification of ability here is probably not primarily due to “holistic review” by admissions committees: Caltech is famously numbers-driven in its selection (it has the highest SAT/ACT scores), and outperforms the other top schools by a sizeable margin. While admission to one of the colleges on the lists above is no guarantee of important achievements later in life, the probability is much higher for these select matriculants.

We cannot say whether outstanding achievement should be attributed to the personal traits of the individual which unlocked the door to admission, the education and experiences obtained at the school, or benefits from alumni networks and reputation. These are questions worthy of continued investigation. Our findings identify schools that excel at producing impact, and our method introduces a new way of thinking about and evaluating what makes a college or university great. Perhaps college rankings should be less subjective and more focused on objective real world achievements of graduates.
For analogous results in college football, see here, here and here. Four- and five-star recruits almost always end up at the powerhouse programs, and they are 100x to 1000x more likely to make it as pros than lightly recruited athletes who are nevertheless offered college scholarships.
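The per-capita normalization in the study above is simple enough to sketch in a few lines: count winners per school, divide by an estimate of the alumni population, rank by the resulting rate, and compare top to bottom. This is only an illustration with made-up school names and counts, not the study's actual data.

```python
# Sketch of the per-capita rate-of-production ranking described in the
# Quartz excerpt. All numbers below are hypothetical, for illustration only.
winners = {"School A": 12, "School B": 3}               # prize winners among alumni
graduates = {"School A": 25_000, "School B": 400_000}   # total graduates, 1966-2013

# Per capita rate: winners divided by alumni population estimate.
rates = {s: winners[s] / graduates[s] for s in winners}

# Rank schools from highest to lowest per capita rate.
ranked = sorted(rates, key=rates.get, reverse=True)

# Top-to-bottom ratio, analogous to the "449 to one" and "1788 to one"
# comparisons quoted in the article.
ratio = rates[ranked[0]] / rates[ranked[-1]]

for school in ranked:
    print(f"{school}: {rates[school]:.2e} winners per graduate")
print(f"top-to-bottom ratio: {ratio:.0f} to one")
```

With these made-up numbers the smaller school's rate is 64 times higher, which shows how a school with far fewer absolute winners can still dominate a per-capita ranking.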
