The wonderful life of protocols

Artwork by Mark Palfreyman

It’s not just organisms that evolve.

One of the most important lessons for young scientists to learn is that everything in a scientific protocol is there for a reason. Every chemical and every buffer in every solution, every incubation, every wash, every antibody, and on and on and on is there by design, and when you’re mastering a new technique or assay, it’s essential to ask why. Don’t stop asking until you understand the logic.

But while sometimes the reason for inclusion is scientific – [for example: why are phosphate buffers so popular? Because they have a very low (almost zero) temperature coefficient, and thereby maintain their pH from 0°C upwards; Tris, conversely, does not] – sometimes it’s an accident of history. A contingency.

A crucial skill for young scientists, and what marks the gradual transition from novice to expert, is learning to get a sense for which is which. To see what has been systematically optimised, and what has been added and kept on simply because it works. The choices that have been made are always rational, but the original reasons for retention don’t necessarily still apply. The QWERTY keyboard layout was developed to prevent key jams on mechanical typewriters, but is superfluous in a digital setup. Yet it remains, and will continue to do so.

The reason for those contingencies is that protocols evolve, and consequently, historical artefacts can become fixed. This is a genuine example of evolution in action, just as our crappy knees betray our formerly simian gait and cetaceans bear the unmistakeable anatomical signatures of a former life on land, well fitting Darwin’s maxim that evolution is “prodigal in variety, but niggard in innovation”.

Consider this protocol for the purification of chicken brush border myosin – the 40 min incubation at room temperature with gentle shaking is in fact the trip by car from the slaughterhouse back to the laboratory. Is it necessary? Well, at the time it would have been difficult to avoid, so in it stays.

Citi & Kendrick-Jones, 1986.

Why do electron microscopists use cacodylate buffer? Because it is an arsenical that didn’t go off at room temperature in the days before refrigerators became a standard feature in labs.

Why is thiamine pyrophosphate (TPP) used in histochemistry? Because it was the cheapest substrate for nucleoside diphosphatase in the Sigma catalogue at the time the protocol was developed.

Why don’t buffers for cell-free assays use NaCl? NaCl may be a near-ubiquitous salt in biological buffers, but chloride ions profoundly inhibit protein synthesis in vitro (presumably because life evolved under low-chloride conditions), and consequently acetate, glutamate, or phosphate salts are used instead. Scientists should arguably be using potassium acetate in their buffers in place of NaCl.

Conversely, why is glutaraldehyde the preferred aldehyde for cell fixation? The choice dates back to David Sabatini’s work in the early 1960s at Yale. At the time, the default fixatives for electron microscopy were osmium tetroxide and, perhaps strange to modern eyes, potassium permanganate – but neither was optimal, because tissue sections became very brittle. Sabatini’s planned work at the Rockefeller with George Palade was initially delayed by a lack of space, so on his arrival he went to Yale for a few months. There, he tested a series of aldehydes to see which gave the best result. Glutaraldehyde came out on top. Its adoption might be serendipitous, but it’s no historical contingency.

Sabatini et al., 1963.

Throughput is possibly a factor in the retention of contingencies: when protocols are slow, it becomes harder to carry out systematic optimisation. This might be why electron microscopy is so replete with such anecdotes. But nearly all protocols – and especially in this hypercompetitive, time-compressed age – obey the maxim “If it ain’t broke, don’t fix it”. Here science once more resembles the febrile world of Formula 1 racing car design. Given limited time to make adjustments, most teams simply copy what pre-eminent aerodynamicist Adrian Newey is doing rather than systematically optimising each component.

The concept of contingency in evolution was best popularised by Stephen Jay Gould in his book “Wonderful Life”. There, Gould eloquently hypothesised that the various weird wonders populating the Cambrian explosion might have been equally successful as modern lineages if given the chance, and that current taxa have persisted by mere historical accident.

Though the idea of contingency in biological evolution has been largely abandoned in the wake of the stem group hypothesis and the concept of convergent evolution, Gould’s notions do in fact apply very neatly to scientific protocols. What has persisted is often an accident of history, and more interchangeable than we might expect. Gould’s worms are what create the wiggle room in the universe of protocols.

Consequently, when young scientists are using a buffer, assay, or technique for the first time, they should always be asking which parts are there for scientific reasons, and which are the equivalent of one of Gould’s weird wonders. Being able to discriminate between the parts that have been systematically optimised and the ones that have simply been retained is not just a glimpse into methodological evolution, but also a means of customising and optimising things for the application at hand.

 

Acknowledgement:
This posting suggested by and co-authored with Graham Warren. Thanks to Jake Kendrick-Jones for the myosins anecdote; Alex Novikoff (d. 1987) was the source for the TPP insight.
