Personality correlates of breadth vs. depth of research scholarship

Cross-post from my blog.

An interesting study has been published.

This is relevant to the study of polymathy, which of course involves making contributions across multiple academic areas. The authors’ own abstract is actually not very good, so here is mine: they sent a personality questionnaire to two random samples of scientists (diabetes researchers). This field was chosen because it is large and old, thus providing lots of researchers to analyze. They sent out a couple of thousand of these questionnaires and received 748 and 478 useful answers. They then hired another company to provide bibliometric information about the researchers. To measure depth vs. breadth, they used the keywords associated with the articles: more distinct keywords means more breadth.
They used this information, together with other measures and the personality measures, in four regression models:
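
As a toy illustration of that measure (the keyword lists here are invented, not from the paper), breadth is simply the count of distinct keywords across a researcher's articles:

```python
# Toy sketch of the breadth measure: count distinct keywords across
# each researcher's articles. The keyword lists are invented examples.
articles = {
    "researcher_a": [["insulin", "beta cells"], ["insulin", "glucose"]],
    "researcher_b": [["insulin", "epidemiology", "diet", "exercise"]],
}

breadth = {
    name: len({kw for kw_list in kw_lists for kw in kw_list})
    for name, kw_lists in articles.items()
}
print(breadth)  # {'researcher_a': 3, 'researcher_b': 4} -- more keywords, more breadth
```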

[Tables 1 and 2, plus supplementary Tables S2 and S3: the four regression models]

The difference between the sets of regression models is the use of total publications vs. centrality as a control. These variables correlate at .52, so unsurprisingly the choice made little difference.

They also report the full correlation matrix:

[Table S1: the full correlation matrix]

Of note in the results: their measures of depth and breadth correlated strongly (r = .59), which makes things more difficult. Preferably, one would want a single dimension to measure these along, not two highly positively correlated dimensions. The authors claimed to deal with this, but didn’t:

The two dependent variables, depth and breadth, were correlated positively (r = 0.59), and therefore we analyzed them separately (in each case, controlling for the other) rather than using the same predictive model. Discriminant validity is supported by roughly 65% of variance unshared. At the same time, sharing 35% variance renders the statistical tests somewhat conservative, making the many significant and distinguishing relationships particularly noteworthy.

Openness (five-factor model) correlated positively with both depth and breadth, perhaps just because these are themselves correlated; thus it seems preferable to control for the other depth/breadth measure when modeling either one. In any case, O seems to be related to creative output in these data. Conscientiousness had negligible betas, perhaps because the models control for centrality/total publications, through which the effect of C is likely to be mediated. They apparently did not use the other scales of the FFM inventory, or at least they give the impression they didn’t. Maybe they did and didn’t report them because of near-zero results (publication bias).
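
Since the two outcomes share variance, one can partial the other outcome out of each Openness correlation once the correlation matrix is in hand. A minimal sketch, using the reported depth-breadth correlation of .59 (the two Openness correlations below are made-up placeholders to illustrate the formula, not the paper's values):

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_depth_breadth = 0.59   # reported in the paper

# Placeholders for illustration -- substitute the values from Table S1.
r_open_breadth = 0.25
r_open_depth = 0.20

# Openness with breadth, partialling out depth.
print(partial_r(r_open_breadth, r_open_depth, r_depth_breadth))
```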

Their four other personality variables correlated in the expected directions: exploration and learning goal orientation with breadth, and performance goal orientation and competitiveness with depth.

Since the correlation matrix is published, one can do path and factor analysis on the summary data, and even re-derive regressions among the published variables, but anything beyond that (interactions, transformations, item-level models) requires case-level data. Perhaps the authors will supply it (they generally won’t).
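
For instance, here is a minimal sketch of pulling first-factor loadings straight out of a published correlation matrix via eigendecomposition (the 3×3 matrix below is a placeholder; substitute the Table S1 values):

```python
import numpy as np

# Placeholder correlation matrix -- substitute the published Table S1 values.
R = np.array([
    [1.00, 0.59, 0.25],
    [0.59, 1.00, 0.20],
    [0.25, 0.20, 1.00],
])

# Eigendecomposition of the correlation matrix (principal-components style).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each variable on the first factor.
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
print(loadings)
```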

The reporting of results in the main article is lacking. They report test statistics without sample sizes or proper effect sizes (d or r, or RR, or something similar), a big no-no:

Study 1. In a simple test of scientists’ appraisals of deep, specialized studies vs. broader studies that span multiple domains, we created brief hypothetical descriptions of two studies (Fig. 1; see details in Supporting Information). Counterbalancing the sequence of the descriptions in a sample separate from our primary (Study 2) sample, we found that these scientists considered the broader study to be riskier (means = 4.61 vs. 3.15; t = 12.94, P < 0.001), a less significant opportunity (5.17 vs. 5.83; t = 6.13, P < 0.001), and of lower potential importance (5.35 vs. 5.72; t = 3.47, P < 0.001). They reported being less likely to pursue the broader project (on a 100% probability scale, 59.9 vs. 73.5; t = 14.45, P < 0.001). Forced to choose, 64% chose the deep project and 33% (t = 30.12, P < 0.001) chose the broad project (3% were missing). These results support the assumptions underlying our Study 2 predictions, that the perceived risk/return trade-off generally favors choosing depth over breadth.

Since they report neither the SDs nor the sample size, one cannot calculate r or d from their data. With the sample size one could recover an effect size from the t-values (d = t·√(1/n₁ + 1/n₂) for independent groups, or d_z = t/√n for a paired design), but without it one is stuck. One can of course calculate ratios of their mean values, but I’m not sure this would be a meaningful statistic (the response scales are not ratio scales, maybe not even interval scales).
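
For reference, if the sample size were reported, the conversion from t would be mechanical. A sketch under the assumption of a paired design, with a made-up n (the paper gives neither the design details nor the n):

```python
import math

def d_from_paired_t(t, n):
    """Cohen's d_z for a paired-samples t-test: d_z = t / sqrt(n)."""
    return t / math.sqrt(n)

def d_from_independent_t(t, n1, n2):
    """Cohen's d for an independent-samples t-test."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# t-values from the quoted passage; n = 400 is a made-up placeholder.
for t in (12.94, 6.13, 3.47, 14.45):
    print(round(d_from_paired_t(t, n=400), 2))
```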

Their model-fitting comparison is pretty weak, since they only tried their preferred model against an implausible straw-man model:

Study 2. We conducted confirmatory factor analysis to assess the adequacy of the measurement component of the proposed model and to evaluate the model relative to alternative models (21). A six-factor model, in which items measuring our six self-reported dispositional variables loaded on separate correlated factors, had a significant χ² test [χ²(175) = 615.09, P < 0.001], and exhibited good fit [comparative fit index (CFI) = 0.90, root mean square error of approximation (RMSEA) = 0.07]. Moreover, the six-factor model’s standardized loadings were strong and significant, ranging from 0.50 to 0.93 (all P < 0.01). We compared the hypothesized measurement model to a one-factor model (22) in which all of the items loaded on a common factor [χ²(202) = 1315.5, P < 0.001, CFI = 0.72, RMSEA = 0.17] and found that the hypothesized six-factor model fit the data better than the one-factor model [χ²(27) = 700.41, P < 0.001].

Not quite sure how this was done; too little information is given. Did they use item-level modeling? It sounds like it. Since the data aren’t given, one cannot confirm this, or do other item-level modeling. For instance, if I were to analyze it, I would probably have the items of their competitiveness and performance scales load on a common latent factor (r = .39), likewise the items from the exploration and learning scales on their own latent factor, and maybe try openness too (r’s = .23, .30, .17).
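
The χ² difference test they report can at least be reproduced from the quoted statistics alone:

```python
from scipy.stats import chi2

# Fit statistics quoted above: six-factor vs. one-factor model.
chi2_six, df_six = 615.09, 175
chi2_one, df_one = 1315.5, 202

delta_chi2 = chi2_one - chi2_six   # 700.41
delta_df = df_one - df_six         # 27
p = chi2.sf(delta_chi2, delta_df)  # survival function = 1 - cdf

print(delta_chi2, delta_df, p)
```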

Also of note in their correlations: openness correlates with being in academia vs. outside it (r = .22), so there is some selection into academia going on there, not just on general intelligence.


Polymathy and Reddit

Interdisciplinarity is growing. The internet has made it easy for people with diverse interests to research and work on many projects unrelated to their job or primary study. Reddit, the popular link-site / discussion board, is no exception. Reddit is a great tool for people with diverse interests because it makes it easier to follow developments in many different fields without belonging to the relevant social circles.

I’m familiar with at least two subreddits: /r/interdisciplinary and /r/polymath. The first is the larger and perhaps has the better content. Better yet, we have control over it, so we can attempt to link it up with the polymath project if there is some occasion to do so.


Scientific genius is associated with abilities in the fine arts

A study finds scientific genius (measured in various ways) is associated with abilities in the fine arts. The abstract of the study is:

Various investigators have proposed that “scientific geniuses” are polymaths. To test this hypothesis, autobiographies, biographies, and obituary notices of Nobel Prize winners in the sciences, members of the Royal Society, and the U.S. National Academy of Sciences were read and adult arts and crafts avocations tabulated. Data were compared with a 1936 avocation survey of Sigma Xi members and a 1982 survey of arts avocations among the U.S. public. Nobel laureates were significantly more likely to engage in arts and crafts avocations than Royal Society and National Academy of Sciences members, who were in turn significantly more likely than Sigma Xi members and the U.S. public. Scientists and their biographers often commented on the utility of their avocations as stimuli for their science. The utility of arts and crafts training for scientists may have important public policy and educational implications in light of the marginalization of these subjects in most curricula.

Full citation: Root-Bernstein, Robert, et al. “Arts foster scientific success: Avocations of Nobel, National Academy, Royal Society, and Sigma Xi members.” Journal of the Psychology of Science and Technology 1 (2008): 51-63.

This should have the interest of followers of this blog. Here’s some of the data:

[Figure 1: rates of arts and crafts avocations among Nobel laureates, Royal Society and National Academy members, Sigma Xi members, and the U.S. public]

As can be seen, Nobel winners were much, much more likely to have artistic interests than members of the general public. By all means, read the paper yourself. It is only 13 pages. The authors have spent some time collecting anecdotes from various scientific geniuses that illustrate their love for the arts and science.


Things to Know When Purchasing Laboratory Chemicals Online


For schools and laboratories, purchasing chemical supplies online can be an efficient and easy way to maintain laboratory stocks. Prices can be compared, and online purchasing often represents significant savings, helping manage small research budgets. However, it is important to know how to safely order chemicals online, as there are some pitfalls that can occur with shipping and handling of chemicals that will not only erase any potential cost savings or convenience, but can create unforeseen regulatory headaches, environmental consequences, potential injury, and legal liability.

Your First Stop: Material Safety Data Sheet (MSDS)

When considering the potential hazards of chemicals being ordered, it is important to refer to a chemical database and/or the material safety data sheet (MSDS). Although there are general safety rules, each chemical has particular traits and properties that make it unique, requiring individualized handling procedures. In general, chemical hazards are grouped into four categories: flammability, corrosiveness, toxicity, and reactivity. These characteristics will dictate how each chemical is packaged, shipped, received, and stored. Compressed gas, for example, must be clearly labeled, have a valve protection mechanism in place, and be transported in an upright position. Corrosive chemicals must be packaged in special containers. Oxidizing chemicals may react with other substances to combust more easily, so they need to be shipped and stored separately from flammable materials. Laboratories must have the capability to handle, manage and store chemicals when they are received, so as to prevent deterioration and minimize worker exposure to hazardous materials. Special storage may be required, or the timing of delivery may need to be closely monitored.

Bulk Purchase: Not Always a Bargain

One important consideration in acquiring a chemical is to be aware of its life cycle. The reactive compounds in certain chemicals may decompose before they can be used. Laboratories may be stuck with large quantities of a chemical that was obtained cheaply but becomes a liability when the extra amounts are not needed. For example, one research project received a donated 55-gallon container of experimental toluene. Only a small quantity was used, but the remainder could not be disposed of through commercial incinerators in bulk form. Thousands of dollars in disposal costs were incurred, turning the donation into a liability. It is good practice, therefore, to order only the quantities required for the short term, rather than making a bulk purchase deal for larger quantities that may end up being unusable. Waste management in general must be considered, particularly when using unstable materials with a short shelf life.

Look for Applicable Regulations

Specific occupational health and safety regulations apply to many chemicals ordered online, and shipping and receiving of explosive, reactive, flammable, and highly toxic chemicals must be done in accordance with local regulatory procedures. Many schools and laboratories have additional specific guidelines that must be followed. Special consideration should also be given to nanotechnology and nanomaterials. Research is ongoing in this emerging field, but many of the hazards are not yet well known. These substances react differently and may be more easily dispersed, so laboratories need to consult the most up-to-date research available when handling nanomaterials and follow applicable safety recommendations.

Because there are so many different types of chemical compounds, each with its own particular trait, referring to the MSDS label or a chemical database is one of the most important ways to ensure that your laboratory can safely order and manage the chemicals purchased online. It is also good practice for organizations and school laboratories to purchase online supplies from approved distributors who are reputable and knowledgeable in the handling and shipping of chemicals.

Alan Schuster is a recently retired high school chemistry teacher. He’s passionate about all things science and tech and loves blogging about both any time he gets the chance.

 



1905: Annus Mirabilis – Brownian Motion

This is the second in a series of posts covering the four fundamental papers published by Albert Einstein in 1905, the so-called “Annus Mirabilis”, or miracle year. This article was originally published at the sent2null blog and is reposted here courtesy of David Saintloth. The remaining two posts in the series are to follow.

 

In this second post of the series covering the groundbreaking advances made by Albert Einstein, we will discuss the incredible phenomenon of Brownian motion. It may seem that this phenomenon didn’t have the revolutionary muscle of the other discoveries of Einstein’s great year, but that is an illusion. First, we need to understand what was known about the subatomic world at this time.

Basically nothing.

There was much conjecture about what the world was made of. Amazingly, through the work of the alchemists, humans gained considerable blind facility with creating new molecules from a very scant understanding of how elements could be mixed in measure to induce various reactions, but little was really known about what exactly matter was made up of.

Of course, going back to the Greeks, ideas about what matter was made of were offered by smart people like Democritus, who stated:

“The more any indivisible exceeds, the heavier it is.”

Well, that settles the matter, doesn’t it? Not really. The conception of atoms the ancients had was a bit different from that put forward by modern thinkers, but the general idea of spherical elements interacting in large numbers to constitute macroscopic materials is clear. The problem was that no one was able to *prove* that this was so. Even Newton used the conception only so far as it was useful in letting him create measures for describing his optics, without relying on any real understanding of light being made up of particles (or, as he called them, “corpuscles”).

A bit later, the Roman Lucretius wrote this incredibly prescient passage:

“Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways… their dancing is an actual indication of underlying movements of matter that are hidden from our sight… It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible.”


However, this is not quite right: the chaotic motion of dust particles is controlled more by air currents than by the bombardment of individual atoms.

Nearly 2,000 years later, J.J. Thomson added some solidity to the idea of atoms by harnessing electrons, which we know today are constituents of atoms and the carrier particles of electrical current. He won the Nobel Prize in 1906 for his investigations on the conduction of electricity by gases, in which he measured the ratio by which cathode rays could be deflected using electric and magnetic fields (the charge-to-mass ratio of the electron).

Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the “plum pudding” model—the electrons were embedded in the positive charge like plums in a plum pudding (although in Thomson’s model they were not stationary, but orbiting rapidly).

Note, however, that he didn’t win that prize until after Einstein’s miracle year. In many ways, Brownian motion wasn’t just about determining that atoms existed; it was pretty much agreed that they did. Rather, formalizing how their masses varied, and how that could be inferred from group dynamics, was wide open. Thus the real power revealed by Einstein’s theory is summarized by this passage from the Wikipedia article on Brownian motion:


But Einstein’s predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein’s theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory’s account of the second law of thermodynamics as being an essentially statistical law.

So the power of Einstein’s theory was that it used thermodynamic means to infer atomic presence and attributes such as mass. So what?

Thermodynamic analysis allowed Einstein’s theory to refine the methods by which chemistry could measure the size of molecules of various types.


This result enables the experimental determination of Avogadro’s number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces.

This is a *huge* result, as it allowed molecular chemistry to proceed at a pace it had never achieved before, by providing precise measures of the components and proportions needed to create new molecules. It would be at least another twenty years before the full picture of atoms and their chemically important subatomic constituents was revealed, but explaining Brownian motion took chemistry from a largely guesswork science to one of precision. The 1920s, 30s, and 40s stand as testament to the revolution enabled by understanding at a molecular level what atoms were doing and how they could be combined.
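
To make the logic concrete, here is a minimal sketch of the Perrin-style calculation using Einstein's relation ⟨x²⟩ = 2Dt with D = RT/(6πηa·N_A). The numerical inputs are illustrative placeholders, not Perrin's actual measurements:

```python
import math

# Illustrative inputs (placeholders, not Perrin's actual data).
R = 8.314        # gas constant, J/(mol*K)
T = 293.0        # temperature, K
eta = 1.0e-3     # viscosity of water, Pa*s
a = 0.5e-6       # suspended-particle radius, m
t = 60.0         # observation time, s
msd = 5.2e-11    # observed mean squared displacement <x^2>, m^2

# Einstein: <x^2> = 2 D t, with D = R T / (6 pi eta a N_A).
D = msd / (2 * t)
N_A = R * T / (6 * math.pi * eta * a * D)
print(f"D = {D:.2e} m^2/s, N_A = {N_A:.2e} per mol")  # roughly 6e23
```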

Companies like DuPont, Bayer, BASF, and Dow Chemical should ring a bell: many of their innovations in the 1930s and 40s that fueled the war efforts on both sides of the planet rested on artificial molecules made possible by the more refined chemical fidelity that came from fully understanding the interactions of atoms. Nylon, polyurethane, and polyester all exist because of this; considering that you are likely wearing clothes containing one of these substances as you read this, that stands as testament to how far-reaching Einstein’s theory was.

Links:
http://en.wikipedia.org/wiki/Brownian_motion
http://www.caimateriali.org/index.php?id=32
http://en.wikipedia.org/wiki/On_the_Nature_of_Things
http://en.wikipedia.org/wiki/Democritus
http://en.wikipedia.org/wiki/JJ_thomson
http://en.wikipedia.org/wiki/Avogrado%27s_number
http://en.wikipedia.org/wiki/Polyurethane
http://en.wikipedia.org/wiki/Nylon



Ada 2012 Tutorial #2

Ada 2012 Tutorial
Parsing (and Streams)

Ada Lovelace, the namesake of the Ada programming language, considered the world’s first computer programmer

We left off the previous tutorial at parsing input from a user or a file, so we’re going to address that today. First, however, I need to introduce Streams.

Streams are a method to read or write any object to any medium, and thus they are doubly generalized. This also means that you are bounded by the most restrictive set of operations common to all mediums. As an example, you cannot provide position control in a general manner because not all transmission modes are random-access (like receiving a radio-signal), and not all streams are bi-directional (like a light-sensor).

In the informal parlance we’ve adopted, we can just say that all types have stream attributes, accessed with 'Read and 'Write, because all elementary types have them and the compiler knows how to compose compound types from elementary types; so you don’t normally have to keep track of the elements in a compound type. (You do have to keep track of them if you’re writing a Read and Write pair that must be functionally, rather than perfectly, inverse operations; this is not a deficiency, but a consequence of implementing a protocol.)

So let’s see how to do it.
Continue reading


Polymaths, freedom of information, and copyright – why we need copyright reform to more effectively increase the number of polymaths

Emil Kirkegaard, board member of Pirate Party Denmark

Introduction

Polymaths are people with a deep knowledge of multiple academic fields, and often various other interests as well, especially artistic ones, but sometimes even things like tropical exploration. Here I will focus on acquiring deep knowledge of academic fields, and on why copyright reform is necessary to increase the number of polymaths in the world.

Learning method
What is the fastest way to learn about some field of study? There are a few methods of learning: 1) listening to speeches/lectures/podcasts and the like, 2) reading, 3) figuring things out oneself. The last method will not work well for any established academic field: it takes too long to work out all the things other people have already worked out, if indeed it can be done at all, and many experiments are not possible to do oneself. But it can work well for a very recent field, a field that isn’t in development at all, or a field where it is very easy to work things out oneself (gather and analyze data). Data mining from the internet is a very easy way to find out many things without having to spend money, though usually it is faster to find someone else who has already done the analysis. In any case, programming ability is surely a very valuable skill for a polymath.
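
As an example of what that kind of self-directed data mining can look like in practice, a few lines of Python suffice to pull a public dataset and summarize it (the URL below is a hypothetical placeholder):

```python
import pandas as pd

# Hypothetical URL -- substitute any public dataset of interest.
url = "https://example.com/some_public_dataset.csv"

df = pd.read_csv(url)   # fetch the data straight off the web
print(df.head())        # eyeball the first few rows
print(df.describe())    # quick numeric summary of every column
```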

For most fields, however, this leaves either listening in some form, or reading. I have recently discussed these at greater length, so I will just summarize my findings here. Reading is by far the best choice. Not only can one read faster than one can listen, the written language is also of greater complexity, which allows for more information acquired per word, and hence per unit of time. Listening to live lectures is probably the most common way of learning by listening; it is the standard at universities. Usually these lectures last too long for one to concentrate throughout, and if one misses something, it is not possible to go back and have it repeated. It is also not possible to skip ahead if one has already learned whatever the speaker is talking about. Listening to recorded (= non-live) speech is better in both of these ways, but it is still much slower than reading. Khan Academy is probably the best way to learn things like math and physics by listening to recorded, short-length lectures. It also has built-in tests with instant feedback, and a helpful community. See also the book Salman Khan recently wrote about it.

If one seriously wants to be a polymath, one will need to learn at speeds much, much faster than the speeds people usually learn at, even very clever people (≥2 sd above the mean). This means lots and lots of self-study and self-directed learning, mostly in the form of reading, but not limited to it. There are probably some things that are faster and easier to learn by having them explained in speech. Having a knowledgeable tutor surely helps one make a good choice of what to read. When I started studying philosophy, I spent hundreds of hours on internet discussion forums, and through them I acquired quite a few friends who were knowledgeable about philosophy. They helped me choose good books and texts to read, which increased the speed of my learning.

Finally, there is one more way of listening that I didn’t mention: one-to-one tutoring. It is very fast compared to regular classroom learning, usually yielding about a two standard deviation improvement. But this method is unavailable to almost everybody, and so not worth discussing at length. Individual tutoring can be written or verbal or some mix, so it doesn’t fall under precisely one of the categories mentioned before.

How to start learning about a new field
Continue reading


Ada 2012 Tutorial #1

Ada 2012 Tutorial

Ada Lovelace, the namesake of the Ada programming language, considered the world’s first computer programmer

Welcome to the tutorial! I will be making some assumptions which are fairly safe: first, that you are unfamiliar with the Ada language; second, that you have at least some interest in discovering what it is about; third, that you have some programming experience; and last, that you have an Ada compiler. (There’s a free one available from AdaCore here, and GCC has one as well.)
Ada is probably different from the programming languages you are likely to be familiar with; this is a result of Ada’s design goals, two of which are safety and maintainability. The first means that Ada tries to do a lot of checking up front, at compile time if possible, which reduces the time spent debugging at the cost of the compiler rejecting erroneous source. That can be frustrating at times, but it is better than having to spend three days tracking down a bug. This leads us to the second goal: Ada was designed to be highly maintainable, even across large teams, which is evident in its package system.
    To introduce the language I will use a small and simple (read as ‘toy’) LISP-like interpreter. To begin with, we need to realize that LISP runs on a loop of read-input, evaluate, and print.

Continue reading


Meaning and Happiness

Which is a more important pursuit in life: meaning or happiness?


Viktor Frankl, a prominent Jewish psychiatrist and neurologist in Vienna, was transported to a Nazi concentration camp in September 1942. When the camp was liberated three years later, he was his family’s sole survivor. Dr. Frankl used his experiences to write his bestselling book Man’s Search for Meaning in 1946, which he completed in only nine days. Frankl, whom the Nazis used to treat suicidal prisoners in the camps, concluded that the difference between those who lived and died came down to one thing: meaning.

People who find a meaningful purpose to their suffering are far more resilient than those who fail to do so.

“Everything can be taken from a man but one thing, the last of the human freedoms — to choose one’s attitude in any given set of circumstances, to choose one’s own way,” Frankl wrote in Man’s Search for Meaning. Two suicidal inmates whom Frankl encountered in the concentration camps profoundly shaped his observations and conclusions. One patient had a child in a foreign country. The second was a scientist with unfinished work and writing to complete. Both had something beyond their present circumstances waiting for them to finish and fulfill.

Frankl wrote, “This uniqueness and singleness which distinguishes each individual and gives a meaning to his existence has a bearing on creative work as much as it does on human love. When the impossibility of replacing a person is realized, it allows the responsibility which a man has for his existence and its continuance to appear in all its magnitude. A man who becomes conscious of the responsibility he bears toward a human being who affectionately waits for him, or to an unfinished work, will never be able to throw away his life. He knows the ‘why’ for his existence, and will be able to bear almost any ‘how.'”
Continue reading


Startup Idea Checklist

The unfortunate part of being an idea person in entrepreneurship is the tendency to accumulate more ideas than you have time to implement, and consequently, the need to discard most of them. But there are certain defining characteristics of ideas which can determine their success at a given stage in the business lifecycle. It turns out that many of these characteristics are possible to define in a fairly simple checklist. Before you chase an idea, try running it through this checklist and see if it matches what you’re looking for at your current stage of business and financial/risk situation. For instance, if you’re working on your first venture (and not teaming up with a more experienced entrepreneur, which you should do if you can), you might want to avoid scenarios which require large outside capital investments, as the terms offered are generally not as favorable to first-time entrepreneurs.

1. Does the startup serve an immediate need?

1a. To whom?

1b. What need?

1c. If existing solutions meet the need, how can you better meet the need?

2. Can the startup scale to address a broader need in the future?

3. Are the user and the customer the same person? i.e. Does the person paying also receive the benefit?

4. Does more than one “type” of customer need to join simultaneously?

5. Where does the money come from?

6. Can it be bootstrapped? Are initial expenses necessary to acquire revenue?

7. Can you start it off yourself? If not, where will you find the necessary co-founders?

 

While this is not the be-all and end-all of a startup’s viability, it is a useful tool for narrowing down the possibilities before you make the mistake of embarking on an idea that isn’t suitable.
