DanAllan.com

Communicating Science to Nonscientists: Ideas Learned From Jeremy Nathans

Jeremy Nathans, a neuroscientist and exceptionally talented speaker, spoke to small group of scientists, including me.

Teaching by example, he gave four talks aimed at four different (imagined) audiences: the president of the University, a kindergarten class, a high school class, and a college class.

I came to the talk having some opinions on this topic: I gave about 8000 science presentations at the Corning Museum of Glass to varied audiences, and I’ve been organizing outreach activities for Grades 6-12 for many years. But I learned a lot from Dr. Nathans. Here are ideas I particuarly liked.

Know your customer: nonscientists come in all shapes and sizes.

Use a variety of metaphors. For example, in disease, use the analogy to auto mechanics. If you know nothing about how the car works, your ability to fix it is extremely limited.

Tell a campfire story. If you can capture their attention, they start wondering, “What are you doing? And why?”

Simplify the math to the point where your listener could re-explain it.

Provide a “gut feeling” (a reference point) for any numbers you use.

Explain scientific notation if you use it! (“The little number is the number of zeros.”)

Inspire with pictoral analogies.

If you asking for money, don’t forget to mention why it will “take five years” and why it hasn’t already been done by someone else.

Resist the temption to present the tiny little weed that you are working on. Talk about what is exciting in the field.

The point: Science is a method. Reinforce that.

The above are gleaned from my notes on Dr. Nathans’ talk, which is now six month past.

Concise Curve Fitting in Python

When I need to fit a function to some data, Python can seem cumbersome. I’m not sure I could win a race against someone working in Igor or even Excel.

pyMC has received a lot of attention, but for traditional LM least-squared fitting, most users use scipy, which lacks some modern amenities. I need a tool that provides succinct syntax for straightforward tasks, handles data with missing values (a la pandas), and returns results in a form that I can easily plot.

Matt Newville’s lmfit project is a big step forward from scipy. Like many graphical data analysis programs, it can set bounds on individual fit parameters or hold them fixed.

Inspired by some ideas by @vnmanoharan in this discussion, I wrote a fresh interface to lmfit that addresses all the needs listed above. It has been merged into the development version of lmfit. A demo of the key features follows.

The Model class is a flexible, concise curve fitter. I will illustrate fitting example data to an exponential decay.

In [1]:
def decay(t, N, tau):
    return N*np.exp(-t/tau)

The parameters are in no particular order. We'll need some example data. I will use N=7 and tau=3, and I'll add a little noise.

In [2]:
t = np.linspace(0, 5, num=1000)
data = decay(t, 7, 3) + np.random.randn(*t.shape)

Simplest Usage

In [3]:
from lmfit import Model

model = Model(decay, independent_vars=['t'])
result = model.fit(data, t=t, N=10, tau=1)

The Model infers the parameter names by inspecting the arguments of the function, decay. Then I passed the independent variable, t, and initial guesses for each parameter. A residual function is automatically defined, and a least-squared regression is performed.

We can immediately see the best-fit values

In [4]:
result.values
Out[4]:
{'N': 6.8332246334656945, 'tau': 3.0502578166074512}

and easily pass those to the original model function for plotting:

In [5]:
plot(t, data)  # data
plot(t, decay(t=t, **result.values))  # best-fit model
Out[5]:
[<matplotlib.lines.Line2D at 0xb9a28cc>]

We can review the best-fit Parameters in more detail.

In [6]:
result.params
Out[6]:
Parameters([('tau', <Parameter 'tau', value=3.0502578166074512 +/- 0.0675, bounds=[-inf:inf]>), ('N', <Parameter 'N', value=6.8332246334656945 +/- 0.0869, bounds=[-inf:inf]>)])

More information about the fit is stored in the result,which is an lmfit.Mimimizer object.

Specifying Bounds and Holding Parameters Constant

Above, the Model class implicitly builds Parameter objects from keyword arguments of fit that match the argments of decay. You can build the Parameter objects explicity; the following is equivalent.

In [7]:
from lmfit import Parameter

result = model.fit(data, t=t, 
                   N=Parameter(value=10), 
                   tau=Parameter(value=1))
result.params
Out[7]:
Parameters([('tau', <Parameter 'tau', value=3.0502578166074512 +/- 0.0675, bounds=[-inf:inf]>), ('N', <Parameter 'N', value=6.8332246334656945 +/- 0.0869, bounds=[-inf:inf]>)])

By building Parameter objects explicitly, you can specify bounds (min, max) and set parameters constant (vary=False).

In [8]:
result = model.fit(data, t=t, 
                   N=Parameter(value=7, vary=False), 
                   tau=Parameter(value=1, min=0))
result.params
Out[8]:
Parameters([('tau', <Parameter 'tau', value=2.9550822200975864 +/- 0.0417, bounds=[0:inf]>), ('N', <Parameter 'N', value=7 (fixed), bounds=[-inf:inf]>)])

Defining Parameters in Advance

Passing parameters to fit can become unwieldly. As an alternative, you can extract the parameters from model like so, set them individually, and pass them to fit.

In [9]:
params = model.params()
In [10]:
params['N'].value = 10  # initial guess
params['tau'].value = 1
params['tau'].min = 0
In [11]:
result = model.fit(data, params, t=t)
result.params
Out[11]:
Parameters([('tau', <Parameter 'tau', value=3.0502578132121547 +/- 0.0675, bounds=[0:inf]>), ('N', <Parameter 'N', value=6.8332246370863503 +/- 0.0869, bounds=[-inf:inf]>)])

Keyword arguments override params, resetting value and all other properties (min, max, vary).

In [12]:
result = model.fit(data, params, t=t, tau=1)
result.params
Out[12]:
Parameters([('tau', <Parameter 'tau', value=3.0502578166074512 +/- 0.0675, bounds=[-inf:inf]>), ('N', <Parameter 'N', value=6.8332246334656945 +/- 0.0869, bounds=[-inf:inf]>)])

The input parameters are not modified by fit. They can be reused, retaining the same initial value. If you want to use the result of one fit as the initial guess for the next, simply pass params=result.params.

A Helpful Exception

All this implicit magic makes it very easy for the user to neglect to set a parameter. The fit function checks for this and raises a helpful exception.

In [13]:
result = model.fit(data, t=t, tau=1)  # N unspecified
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-6d8fedbef3f8> in <module>()
----> 1 result = model.fit(data, t=t, tau=1)  # N unspecified

/home/dallan/lmfit-py/lmfit/model.pyc in fit(self, data, params, sigma, **kwargs)
    191             raise ValueError("Assign each parameter an initial value by " +
    192                              "passing Parameters or keyword arguments to " +
--> 193                              "fit().")
    194 
    195         # Handle null/missing values.

ValueError: Assign each parameter an initial value by passing Parameters or keyword arguments to fit().

An extra parameter that cannot be matched to the model function will throw a UserWarning, but it will not raise, leaving open the possibility of unforeseen extensions calling for some parameters.

Weighted Fits

Use the sigma argument to perform a weighted fit. If you prefer to think of the fit in term of weights, sigma=1/weights.

In [14]:
weights = np.arange(len(data))
result = model.fit(data, params, t=t, sigma=1./weights)
result.params
Out[14]:
Parameters([('tau', <Parameter 'tau', value=3.096728970589659 +/- 0.113, bounds=[0:inf]>), ('N', <Parameter 'N', value=6.7514922300319453 +/- 0.256, bounds=[-inf:inf]>)])

Handling Missing Data

By default, attemping to fit data that includes a NaN, which conventionally indicates a "missing" observation, raises a lengthy exception. You can choose to drop (i.e., skip over) missing values instead.

In [15]:
data_with_holes = data.copy()
data_with_holes[[5, 500, 700]] = np.nan  # Replace arbitrary values with NaN.

model = Model(decay, independent_vars=['t'], missing='drop')
result = model.fit(data_with_holes, params, t=t)
result.params
Out[15]:
Parameters([('tau', <Parameter 'tau', value=3.0547114484523323 +/- 0.0677, bounds=[0:inf]>), ('N', <Parameter 'N', value=6.8308291273906265 +/- 0.087, bounds=[-inf:inf]>)])

If you don't want to ignore missing values, you can set the model to raise proactively, checking for missing values before attempting the fit.

In [16]:
model = Model(decay, independent_vars=['t'], missing='raise')
result = model.fit(data_with_holes, params, t=t)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-788e0b6b627f> in <module>()
      1 model = Model(decay, independent_vars=['t'], missing='raise')
----> 2 result = model.fit(data_with_holes, params, t=t)

/home/dallan/lmfit-py/lmfit/model.pyc in fit(self, data, params, sigma, **kwargs)
    196         mask = None
    197         if self.missing != 'none':
--> 198             mask = self._handle_missing(data)  # This can raise.
    199             if mask is not None:
    200                 data = data[mask]

/home/dallan/lmfit-py/lmfit/model.pyc in _handle_missing(self, data)
    117         if self.missing == 'raise':
    118             if np.any(isnull(data)):
--> 119                 raise ValueError("Data contains a null value.")
    120         elif self.missing == 'drop':
    121             mask = ~isnull(data)

ValueError: Data contains a null value.

The default setting is missing='none', which does not check for NaNs. This interface is consistent with the statsmodels project.

Null-chekcing relies on pandas.isnull if it is available. If pandas cannot be imported, it silently falls back on numpy.isnan.

Data Alignment

Imagine a collection of time series data with different lengths. It would be convenient to define one sufficiently long array t and use it for each time series, regardless of length. The pandas provides tools for aligning indexed data. And, unlike most wrappers to scipy.leastsq, Model can handle pandas objects out of the box, using its data alignment features.

Here I take just a slice of the data and fit it to the full t. It is automatically aligned to the correct section of t using Series' index.

In [17]:
from pandas import Series

model = Model(decay, independent_vars=['t'])
truncated_data = Series(data)[200:800]  # data points 200-800
t = Series(t)  # all 1000 points
result = model.fit(truncated_data, params, t=t)
result.params
Out[17]:
Parameters([('tau', <Parameter 'tau', value=3.2221825353028226 +/- 0.159, bounds=[0:inf]>), ('N', <Parameter 'N', value=6.5296051307920768 +/- 0.221, bounds=[-inf:inf]>)])

Data with missing entries and an unequal length still aligns properly.

In [18]:
model = Model(decay, independent_vars=['t'], missing='drop')
truncated_data_with_holes = Series(data_with_holes)[200:800]
result = model.fit(truncated_data_with_holes, params, t=t)
result.params
Out[18]:
Parameters([('tau', <Parameter 'tau', value=3.2397946733749583 +/- 0.16, bounds=[0:inf]>), ('N', <Parameter 'N', value=6.5107676500014584 +/- 0.219, bounds=[-inf:inf]>)])

Shelf Life

In his essay On Smarm, Tom Socca quotes Malcolm Gladwell:

Negative stuff is interesting the first time, but you’ll never re-read a negative article. You’ll re-read a positive one. Part of the reason that my books have had a long shelf life is that they’re optimistic, and optimism permits that kind of longevity.

Socca responds:

One curious fact about this long view is that it’s quite untrue. I can’t recall ever, unless compelled by duty, rereading a Malcolm Gladwell article. What I have reread is Mencken on the Scopes Trial, Hunter Thompson on Richard Nixon, and Dorothy Parker on most things—to say nothing of Orwell on poverty and Du Bois on racism, or David Foster Wallace on the existential horror of a leisure cruise. This belief that oblivion awaits the naysayers and the snarkers shouldn’t survive a glance at the bookshelf.

I reread difficult books for better understanding; I reread beloved books as comfort food; I reread in search of certain half-forgotten turns of phrase. I do agree that Gladwell’s stunts lose their punch after the first reading, after their absurd premise is explained.

I’ll follow Socca’s unoptimistic reading list. The first entry, Mencken coverage of the Scopes Trial, is in the public domain. I rendered the plain text archive into a more readable format using Markdown.

Who Made My Pants?

Pore over this report on the extent to which apparel companies prevent and address modern slavery through their supply chain.

Of the mainstrain companies, Hanes and Timberland come out well. Express, Aeropostale, Fruit of the Loom, and of course Walmart are among the losers.

This summarizing chart is on Page 4 of the report. If it disappears from Internet, here’s a local copy.

H/T Mother Jones

Hearing Hydrogen

The spectral lines characteristic of hydrogen are spaced according to the Rydberg formula, $\displaystyle\frac{1}{\lambda} = R\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)$.

Spectral Lines of Hydrogen

The wavelengths $\lambda$ given by the formula can be given as frequencies $\displaystyle f = \frac{c}{\lambda} = R\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)$ which can be rescaled into musical frequencies, which we will play. This has been done before, but I will give more attention to the science and musical perception and less attention to the programming.

Play a pure tone

In [1]:
from IPython.display import Audio
from numpy import sin, pi
In [2]:
amplitude = 2**13
rate = 41000  # Hz
duration = 2.5  # seconds

time = np.linspace(0, duration, num=rate*duration)

def tone(freq):
    return amplitude*sin(2*pi*freq*time)

As a test drive, let's just play a single pitch, Concert A.

In [3]:
A = 440  # frequency of Concert A
Audio(tone(A), rate=rate)
Out[3]:

Play the spectral lines

An "audible" Rydberg formula in Python:

In [4]:
scaling = 4*A  # rescale the frequencies into an audible range
def freq(n1, n2):
    return scaling*(1./n1**2 - 1./n2**2)

Generate a spectrum of frequencies for $n_1 = 1, 2, 3$, corresponding to the yellow, black, and maroon lines in the illustration above. Verify that the lowest and highest frequencies are within the audible range of 20-20,000 Hz.

In [5]:
series = [1, 2, 3] #  Lyman, Balmer, Paschen series
spectrum = [freq(n1, n2) for n1 in series for n2 in range(n1 + 1, 9)]
min(spectrum), max(spectrum)

tones = [tone(f) for f in spectrum]
composite_tone = np.sum(tones, axis=0)

Listen.

In [6]:
Audio(composite_tone, rate=rate)
Out[6]:

The sound is eerie and disssonant, but more musical than one might expect. Why?

Most natural sounds, especially musical ones, consist of several frequencies (i.e., pitches) related to each other by the harmonic series: $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}$, etc. Our brains usually group tones related by the harmonic series together, interpreting them as part of the same sound. Further, we perceive the differences between these frequencies as beating, a pulsing sensation that is particularly obvious when two tones are almost but not quite in unison.

Each tone in the sound above corresponds exactly to one of beating patterns. The Rydberg formula takes the difference between two fractions from the harmonic series. (To be specific, the fractions are from the series $\frac{1}{2}, \frac{1}{4}, \frac{1}{9}$ …, a subset of the harmonic series.)

Although the spectrum of hydrogen is unrelated to music or acoustics, it happens to follow a pattern that also occurs is musical sound, and so it makes more sense to our ears than random tones or noise.

Winter Is Coming

I coauthored an astrophysics paper, Winter is Coming, submitted to the physics arxiv on April 1, in which the irregular seasons of Game of Thrones are explained in astrophysical terms. Veselin Kostov did all the hard science; I am responsible for the plots and some of the writing.

We were Nerd Famous for a day. A roundup:

The Audacity of Dispair

What else but the title of a blog by David Simon, former Baltimore Sun reporter?

The blog is old news — his first post was in April — but I only just discovered it and caught up. Two of my favorites:

Turkey Time

Ways to estimate turkey cooking times:

  • USDA guidelines
  • the simple rule “18 minutes per pound”
  • (using weight in pounds)

The “18 minutes” rule works for some weights, but it doesn’t scale right for large turkeys. A wider range of accuracy is achieved by the simple formula, suggested by the late physicist and SLAC director Pief Panofsky. All of these assume the oven is set to 325 F.

The exponent in Panofsky’s formula comes from the ratio of the turkey’s surface area, through which heat flows, to its volume. The 1.5 is empirical, fitting the mathematical curve to data from actual cooked turkeys.

Just spraying a turkey is all.

With a little more trouble, we can solve the problem without referring to actual cooking times, using only the basic material properties of turkey. We can use our solution to estimate cooking times at other temperatures, such as smoking a turkey on a 225 F grill.

What if we imagine that the turkey is a round ball of cold meat sitting in hot air? Simplifying its shape and ignoring the details of the cooking process, we have a straightforward physics problem. The properties we need (density , conductivity , and diffusivity ) were measured and published by the Canadian Food Research and Development Center.

How big should this imagined ball of turkey be? We could try mashing the turkey’s whole mass into one solid ball. But since that approach ignores the bones, which conduct heat faster than meat, we should expect it to overestimate the cooking time. (And it does.) Instead, we can try including only the meat, which comprises roughly half the mass. For a 6- to 22-pound turkey, we’ll be modeling a 13- to 21-cm meat ball. In comparison to a turkey breast, the thickest part of the meat, that seems about right.

Our turkey ball will start at 50 F, surrounded by 325 F air in the oven. I will solve the heat equation to compute when the center of the ball reaches 170 F, safe to eat.

A fine fit like that seems too good to be true. Perhaps our approximations balanced each other by chance. Now, extend the model to learn something new: If we set the surrounding air to 225 F, as on charcoal grill, the equation predicts longer cooking times that agree with experience.

The real physics of turkey cooking is explained in a nonmathematical post by Modernist Cuisine.

Mathematical Appendix

Parameters:

  • raw turkey temperature
  • oven temperature
  • thermal diffusivity
  • thermal conductivity
  • heat transfer coefficient of free air
  • density

The temperature at the center of the turkey ball after time in the oven measured a distance from the center is given by

where are given by the roots of the equation and must be computed numerically.

We are mainly interested in the temperature at the center, the last part to cook. This is a slightly simpler expression.

When , the turkey is done.