Category Archives: Science

Personality correlates of breadth vs. depth of research scholarship

Cross-post from my blog.

An interesting study has been published.

This is relevant to the study of polymathy, which of course involves making broader contributions to academic areas. The authors’ own abstract is actually not very good, so here is mine: They sent a personality questionnaire to two random samples of scientists (diabetes researchers). This field was chosen because it is large and old, thus providing lots of researchers to analyze. They sent out a couple of thousand questionnaires and received 748 and 478 useful answers. They then hired another company to provide bibliometric information about the researchers. To measure depth vs. breadth, they used the keywords associated with each researcher’s articles: more distinct keywords means more breadth (a minimal sketch of the idea is below).
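As a rough sketch of this kind of measure (the records below are invented for illustration; the paper aggregated keywords from a bibliometric database):

```python
from collections import defaultdict

# Hypothetical input: one (researcher, keywords) record per article.
articles = [
    ("smith", {"insulin", "beta cells"}),
    ("smith", {"insulin", "glucose tolerance"}),
    ("jones", {"insulin", "retinopathy"}),
    ("jones", {"epidemiology", "public health"}),
]

# Pool each researcher's keywords over all of their articles.
keywords_by_researcher = defaultdict(set)
for researcher, keywords in articles:
    keywords_by_researcher[researcher] |= keywords

# Breadth: the number of distinct keywords a researcher has published under.
breadth = {r: len(kw) for r, kw in keywords_by_researcher.items()}
print(breadth)  # {'smith': 3, 'jones': 4}
```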
They used this information as well as other measures and their personality measures in four regression models:

[Tables 1, 2, S2, and S3: the four regression models]

The difference between the sets of regression models is the use of total publications vs. centrality as a control. These variables correlate at .52, so unsurprisingly the choice made little difference.

They also report the full correlation matrix:

[Table S1: full correlation matrix]

Of note in the results: their measures of depth and breadth correlated strongly (.59), which makes things more difficult. Preferably, one would want a single dimension to measure these along, not two highly positively correlated dimensions. The authors claimed to deal with this, but didn’t:

The two dependent variables, depth and breadth, were correlated positively (r = 0.59), and therefore we analyzed them separately (in each case, controlling for the other) rather than using the same predictive model. Discriminant validity is supported by roughly 65% of variance unshared. At the same time, sharing 35% variance renders the statistical tests somewhat conservative, making the many significant and distinguishing relationships particularly noteworthy.

Openness (five-factor model) correlated positively with both depth and breadth, perhaps just because these are themselves correlated; thus it seems preferable to control for the other depth/breadth measure when modeling (e.g., via partial correlations, sketched below). In any case, O seems to be related to creative output in these data. Conscientiousness had negligible betas, perhaps because they control for centrality/total publications, through which the effect of C is likely to be mediated. They apparently did not use the other scales of the FFM inventory, or at least they give the impression they didn’t. Maybe they did and didn’t report them because of near-zero results (publication bias).
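Controlling for the other measure, short of a full regression, amounts to a partial correlation, which can be computed straight from published correlations. A minimal sketch; the two openness values here are placeholders, and only the .59 depth–breadth correlation is from the paper:

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    # Correlation between x and y with z held constant.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x = openness, y = breadth, z = depth. The openness correlations are
# invented for illustration; r(depth, breadth) = .59 is from the paper.
print(partial_r(r_xy=0.20, r_xz=0.15, r_yz=0.59))  # ~0.14
```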

Their four other personality variables correlated in the expected directions: exploration and learning goal orientation with breadth, and performance goal orientation and competitiveness with depth.

Since the correlation matrix is published, one can do path and factor analysis on the data; indeed, standardized regression betas can be recovered from the matrix alone (see the sketch below). But anything beyond linear main effects (interactions, nonlinear terms, subgroup analyses) requires case-level data. Perhaps the authors will supply it (they generally won’t).
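Standardized OLS betas for any predictor subset follow from the correlation matrix alone via β = R_xx⁻¹ r_xy. A sketch with made-up numbers standing in for Table S1:

```python
import numpy as np

# A correlation matrix among [openness, exploration, breadth]; these
# values are invented placeholders, not the entries of Table S1.
R = np.array([
    [1.00, 0.30, 0.25],
    [0.30, 1.00, 0.40],
    [0.25, 0.40, 1.00],
])

x = [0, 1]  # predictor indices
y = 2       # outcome index

Rxx = R[np.ix_(x, x)]    # predictor intercorrelations
rxy = R[np.ix_(x, [y])]  # predictor-outcome correlations

# Standardized regression coefficients: solve Rxx @ beta = rxy.
betas = np.linalg.solve(Rxx, rxy).ravel()
print(betas)  # ~[0.14, 0.36]
```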

The reporting of results in the main article is lacking. They report test statistics without sample sizes or proper effect sizes (d or r, or RR, or the like), a big no-no:

Study 1. In a simple test of scientists’ appraisals of deep, specialized studies vs. broader studies that span multiple domains, we created brief hypothetical descriptions of two studies (Fig. 1; see details in Supporting Information). Counterbalancing the sequence of the descriptions in a sample separate from our primary (Study 2) sample, we found that these scientists considered the broader study to be riskier (means = 4.61 vs. 3.15; t = 12.94, P < 0.001), a less significant opportunity (5.17 vs. 5.83; t = 6.13, P < 0.001), and of lower potential importance (5.35 vs. 5.72; t = 3.47, P < 0.001). They reported being less likely to pursue the broader project (on a 100% probability scale, 59.9 vs. 73.5; t = 14.45, P < 0.001). Forced to choose, 64% chose the deep project and 33% (t = 30.12, P < 0.001) chose the broad project (3% were missing). These results support the assumptions underlying our Study 2 predictions, that the perceived risk/return trade-off generally favors choosing depth over breadth.

Since they don’t report the SDs, one cannot calculate r or d directly from the means. One can, however, recover effect sizes from the t-values, provided the sample size is known (see the sketch below). One can of course calculate ratios of their mean values, but I’m not sure this would be a meaningful statistic (not a ratio scale, maybe not even an interval scale).
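For the record: given the degrees of freedom, r = √(t²/(t² + df)), and for these within-subject comparisons the paired effect size is d_z = t/√n. A sketch; the n = 478 here is an assumption (plausibly the separate Study 1 sample), since the main text doesn’t state it:

```python
import math

def r_from_t(t, df):
    # Effect size r recovered from a t statistic and its degrees of freedom.
    return math.sqrt(t**2 / (t**2 + df))

def dz_from_t(t, n):
    # Standardized mean difference for a paired (within-subject) t test.
    return t / math.sqrt(n)

# First comparison from Study 1: t = 12.94. The n is an assumption.
n = 478
print(round(r_from_t(12.94, n - 1), 2))  # 0.51
print(round(dz_from_t(12.94, n), 2))     # 0.59
```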

Their model-fitting comparison is pretty bad, since they only tried their preferred model against an implausible straw-man model:

Study 2. We conducted confirmatory factor analysis to assess the adequacy of the measurement component of the proposed model and to evaluate the model relative to alternative models (21). A six-factor model, in which items measuring our six self-reported dispositional variables loaded on separate correlated factors, had a significant χ² test [χ²(175) = 615.09, P < 0.001], and exhibited good fit [comparative fit index (CFI) = 0.90, root mean square error of approximation (RMSEA) = 0.07]. Moreover, the six-factor model’s standardized loadings were strong and significant, ranging from 0.50 to 0.93 (all P < 0.01). We compared the hypothesized measurement model to a one-factor model (22) in which all of the items loaded on a common factor [χ²(202) = 1315.5, P < 0.001, CFI = 0.72, RMSEA = 0.17] and found that the hypothesized six-factor model fit the data better than the one-factor model [χ²(27) = 700.41, P < 0.001].

Not quite sure how this was done; too little information is given. Did they use item-level modeling? It sort of sounds like it. Since the data aren’t given, one cannot confirm this or do other item-level modeling. For instance, if I were to analyze it, I would probably have the items of the competitiveness and performance scales load on a common latent factor (r = .39), as well as the items from the exploration and learning scales on their own latent factor, and maybe try openness too (r’s .23, .30, .17).
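One piece that can be checked from the reported numbers alone is the nested-model comparison: the χ²(27) = 700.41 is just the difference between the two models’ chi-squares and degrees of freedom. A quick check:

```python
from scipy.stats import chi2

# Chi-square difference test between the nested one-factor and
# six-factor models, using the fit statistics quoted above.
chisq_diff = 1315.5 - 615.09  # = 700.41
df_diff = 202 - 175           # = 27

p_value = chi2.sf(chisq_diff, df_diff)
print(f"chi2({df_diff}) = {chisq_diff:.2f}, p = {p_value:.2e}")
```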

Also of note in their correlations: openness is correlated with being in academia vs. non-academia (r = .22), so there is some selection going on there, and not just on general intelligence.


Scientific genius is associated with abilities in the fine arts

A study finds scientific genius (measured in various ways) is associated with abilities in the fine arts. The abstract of the study is:

Various investigators have proposed that “scientific geniuses” are polymaths. To test this hypothesis, autobiographies, biographies, and obituary notices of Nobel Prize winners in the sciences, members of the Royal Society, and the U.S. National Academy of Sciences were read and adult arts and crafts avocations tabulated. Data were compared with a 1936 avocation survey of Sigma Xi members and a 1982 survey of arts avocations among the U.S. public. Nobel laureates were significantly more likely to engage in arts and crafts avocations than Royal Society and National Academy of Sciences members, who were in turn significantly more likely than Sigma Xi members and the U.S. public. Scientists and their biographers often commented on the utility of their avocations as stimuli for their science. The utility of arts and crafts training for scientists may have important public policy and educational implications in light of the marginalization of these subjects in most curricula.

Full citation: Root-Bernstein, Robert, et al. “Arts foster scientific success: Avocations of Nobel, National Academy, Royal Society, and Sigma Xi members.” Journal of the Psychology of Science and Technology 1 (2008): 51-63. Non-gated download link.

This should be of interest to followers of this blog. Here’s some of the data:

[Figure 1: rates of arts and crafts avocations among Nobel laureates and comparison groups]

As can be seen, Nobel winners were much, much more likely to have artistic interests than members of the general public. By all means, read the paper yourself; it is only 13 pages. The authors have collected anecdotes from various scientific geniuses that illustrate their love for both the arts and the sciences.


1905: Annus Mirabilis – Brownian Motion

This is the second in a series of posts that will cover the outcome of the 4 fundamental papers published by Albert Einstein in 1905, the so-called “Annus Mirabilis”, or miracle year. This article was originally published at the sent2null blog and is reposted here courtesy of David Saintloth. The remaining 2 posts in the series are to follow.

 

In the second of this series of posts covering the groundbreaking advances made by Albert Einstein, we will discuss the incredible phenomenon of Brownian motion. It may seem that this phenomenon didn’t have the revolutionary muscle of Einstein’s other discoveries in his great year, but that is an illusion. We need to understand what was known at the time about the subatomic world.
Basically nothing.

There was much conjecture about what the world might be made of. Amazingly, through the work of the alchemists, humans had gained considerable blind facility at creating new molecules from a very scant understanding of how elements could be mixed in measure to induce various reactions, but little was really known about what exactly matter was made of.

Of course, going back to the Greeks, an idea of what matter was made of was offered by smart people like Democritus, who stated:

“The more any indivisible exceeds, the heavier it is.”

Well, that settles the matter, doesn’t it? Not really. The conception of atoms that the ancients had was a bit different from that put forward by modern thinkers, but the general idea of small spherical elements interacting in large numbers to constitute the macroscopic materials they make up is clear. The problem was that no one was able to *prove* that this was so. Even Newton used the conception only so far as it was useful in letting him create measures for describing his ideas on optics, which didn’t rely on any real understanding of light being made up of particles (or, as he called them, “corpuscles”).

A bit later, the Roman poet Lucretius wrote this incredibly prescient statement: “Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways… their dancing is an actual indication of underlying movements of matter that are hidden from our sight… It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible.”


However, this is incorrect as a mechanism: dust particles have their chaotic motions controlled more by air currents than by the bombardment of individual atoms.

Nearly 2,000 years later, J. J. Thomson added some solidity to the idea of atoms by harnessing electrons, which we know today are parts of atoms and the constituent particles of electrical current flows. He won the Nobel Prize in 1906 for his work on the conduction of electricity by gases, which included measuring the charge-to-mass ratio of cathode rays from their deflection by electric fields.

“Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the “plum pudding” model—the electrons were embedded in the positive charge like plums in a plum pudding (although in Thomson’s model they were not stationary, but orbiting rapidly).”

However, note that he didn’t win that prize until after Einstein’s miracle year. In many ways, Brownian motion wasn’t just about determining that atoms existed; it was pretty much agreed that they did. But formalizing how their masses varied, and how that could be inferred from group dynamics, was wide open. Thus the real power revealed by Einstein’s theory is summarized by this passage in the Brownian motion article at Wikipedia:


“But Einstein’s predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein’s theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory’s account of the second law of thermodynamics as being an essentially statistical law.”

So the power of Einstein’s theory was that it used thermodynamic means to infer atomic presence and attributes such as mass. So what?

Thermodynamic analysis allowed Einstein’s theory to refine the methods by which chemistry could measure the size of molecules of various types.


“This result enables the experimental determination of Avogadro’s number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces.”
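Concretely, Einstein’s result ties a microscope-observable quantity to Avogadro’s number. The mean squared displacement of a suspended particle grows linearly with time:

$$\langle x^2 \rangle = 2Dt, \qquad D = \frac{RT}{N_A \cdot 6\pi\eta r}$$

So by measuring how far particles of known radius r wander in a fluid of known viscosity η at temperature T, one can solve for N_A; this is essentially what Perrin did in 1909.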

This is a *huge* result, as it allowed molecular chemistry to proceed at a pace it had never achieved before these methods were applied to determine the precise quantities and proportions of components needed to create new molecules. It would be at least another 20 years before the full truth of atoms and their chemically important subatomic constituents was revealed, but explaining Brownian motion took chemistry from a mostly guesswork science to one of precision. The 1920s, ’30s, and ’40s stand as testament to the revolution enabled by understanding at the molecular level what atoms were doing and how they could be combined.

Companies like DuPont, Bayer, BASF, and Dow Chemical should ring a bell: much of their innovation in the ’30s and ’40s that fueled the war efforts on both sides of the planet was driven by artificial molecules, made possible by the more refined chemical fidelity that came from fully understanding the interactions of atoms. Materials from nylon to polyurethane to polyester exist because of this innovation; considering that you are likely wearing clothes containing one of these substances as you read this, it stands as testament to how far-reaching Einstein’s theory was.

Links:
http://en.wikipedia.org/wiki/Brownian_motion
http://www.caimateriali.org/index.php?id=32
http://en.wikipedia.org/wiki/On_the_Nature_of_Things
http://en.wikipedia.org/wiki/Democritus
http://en.wikipedia.org/wiki/JJ_thomson
http://en.wikipedia.org/wiki/Avogrado%27s_number
http://en.wikipedia.org/wiki/Polyurethane
http://en.wikipedia.org/wiki/Nylon


Ada 2012 Tutorial #2

Ada 2012 Tutorial
Parsing (and Streams)

Ada Lovelace, the namesake of the Ada programming language, considered the world’s first computer programmer

We left off the previous tutorial at parsing input from a user or a file, so we’re going to address that today. First, however, I need to introduce Streams.

Streams are a method for reading or writing any object to any medium, and thus they are doubly generalized. This also means that you are bound by the most restrictive set of operations common to all media. As an example, you cannot provide position control in a general manner, because not all transmission modes are random-access (like receiving a radio signal), and not all streams are bi-directional (like a light sensor).

In the informal parlance we’ve adopted, we can just say that all types have stream attributes, accessed with 'Read and 'Write, because all elementary types have them and the compiler knows how to compose compound types from elementary types; so you don’t normally have to keep track of the elements in a compound type. (You do have to keep track of them if you’re writing both Read and Write so that they are functionally, rather than perfectly, inverse operations; this is not a deficiency, but a consequence of implementing a protocol.)

So let’s see how to do it.
Continue reading


Ada 2012 Tutorial #1

Ada 2012 Tutorial

Ada Lovelace, the namesake of the Ada programming language, considered the world’s first computer programmer

    Welcome to the tutorial! I will be making some assumptions which are fairly safe: first, that you are unfamiliar with the Ada language; second, that you have at least some interest in discovering what it is about; third, that you have some programming experience; and last, that you have an Ada compiler. (There’s a free one available from AdaCore here, and GCC has one as well.)
    Ada is probably different from the programming languages you are likely to be familiar with; this is a result of Ada’s design goals, two of which are safety and maintainability. The first means that Ada tries to do a lot of checking up front, at compile time when possible, which reduces the time spent debugging at the cost of the compiler rejecting erroneous source. That can be frustrating at times, but it is better than spending three days tracking down a bug. This leads us to the second difference: Ada was designed to be highly maintainable, even across large teams, which is evident in its package system.
    To introduce the language I will use a small and simple (read as ‘toy’) LISP-like interpreter. To begin with, we need to realize that LISP runs on a loop of read-input, evaluate, and print.

Continue reading


1905: Annus Mirabilis – Photoelectric effect

This is the first in a series of posts that will cover the outcome of the 4 fundamental papers published by Albert Einstein in 1905, the so-called “Annus Mirabilis”, or miracle year. This article was originally published at the sent2null blog and is reposted here courtesy of David Saintloth. The remaining 3 posts in the series are to follow.

 

1905 was a great year for physics – in this year a 26-year-old patent examiner in Bern, Switzerland published 4 fundamental physics papers in 4 disparate areas of the field. The topics included special relativity, the relationship between energy and matter, Brownian motion, and the subject of this post, the photoelectric effect.

Next to his paper on Brownian motion, Einstein’s paper on the photoelectric effect was probably the most practical: it answered a long-standing problem that had stood as an embarrassment to the wave theory of light. That theory was the legacy of James Clerk Maxwell and his fundamental equations of electromagnetism: by describing the energy of propagating fields as continuous waves, Maxwell was able, astonishingly, to explain the riddle of the relationship between electricity and magnetism in clear mathematical terms. He was also able to show that light itself must be an electromagnetic wave, since all such waves propagate at the speed of light (c), roughly 186,000 miles per second.

[Figure: 3D animation of an electromagnetic wave]
Continue reading
