
CART Newsletters

Fall 2023 Content


EM Embraces 3D Volume (vEM)

by Chad Galloway, PhD

Returning from the Microscopy & Microanalysis meeting in Minneapolis this past July, one theme was abundantly clear in the field of electron microscopy: three-dimensional volume EM (vEM), which relies on serial-section collection, is fast becoming a routine imaging modality, challenging 2D imaging, in which single thin-section views are used to document structural changes to cells and organelles. Increasingly, these studies combine multiple modalities, most prominently correlative light and electron microscopy (CLEM). These volume techniques are not novel or even newly developed: CLEM emerged in the early 1990s, and serial-sectioning EM dates back to the early 1950s, soon after scientists first developed methods for embedding biological specimens for transmission electron microscopy. It was, however, painstakingly slow to perform, as electron micrographs on film negatives had to be printed on photo paper in the darkroom before the laborious tracing of organelles for three-dimensional representations could even begin.

What is driving this change to vEM? Advances in technology, instrumentation, reagents, and methodologies have been continuously evolving, complemented by the coordination of scientists in the vEM community [1], democratizing the technique. Much of this progress is taking place in software for automated segmentation and reconstruction of the data. The vEM community defines vEM as imaging that uses transmission or scanning electron microscopy to allow three-dimensional investigation of cell and tissue ultrastructure, up to millimeters in volume, with nanometer resolution. The technique encompasses several methodologies distinguished by how the sections are produced: focused ion beam scanning electron microscopy (FIB-SEM), in which the block face is shaved away by a gallium or plasma beam; serial block-face SEM (SBF-SEM), in which an ultramicrotome housed inside the SEM itself sections the block; and array tomography, in which individual sections are cut on an ultramicrotome and collected serially on slides, tape, and/or silicon wafers. In the EMR we have completed and published a study using the last of these methodologies, interrogating the invasion of canaliculi in an S. aureus bone-infection model using tape-collected sections on the ATUMtome and image collection in backscatter mode in an SEM [3]. Improvements in backscatter detection and optimization of tissue preparation to enhance contrast allow the acquisition of images that are, ultrastructurally, almost indistinguishable from routine transmission electron microscopy. Array tomography has the added advantage that the sections can be re-interrogated for multiple regions of interest (ROIs), a benefit that is accentuated when doing CLEM. Recent development of fixation-resistant fluorescent proteins and protected probes, normally quenched by crosslinking aldehydes and osmium tetroxide, allows fluorescence imaging post-embedment, streamlining the targeting of cells and structures of interest [4]. A routine request from customers at the EMR is to target only those rare cells that were transfected in a mixed-cell population of a tissue; these vEM advancements will increasingly make the task of finding that "needle in a haystack" easier.

As a methodology, vEM is becoming an essential tool in neuroscience, where the routine 2D view of a 70 nm thin section generated by ultramicrotomy has become inadequate to describe synaptic structure and connectivity. At the intracellular level, vEM is now the preferred tool of mitochondrial researchers, where the restriction to two-dimensional analysis can lead to misinterpretation of mitochondrial shape and size descriptors. Beyond volumetric observations of mitochondria, changes in the Golgi (in a protein-processing defect, for example) or the ER (in the unfolded protein response) are better observed and described in three dimensions. These reconstructions also better capture inter-organelle contacts, critical sites for cross-talk in response to stimuli that are often underappreciated in standard 2D electron microscopy.

The prospect of this electron microscopy renaissance toward three-dimensional visualization is exciting to us in the EMR. The journal Nature agrees, naming vEM one of seven technologies to watch in 2023 [5]. We are familiar with the techniques and technologies and plan to move toward acquisition of the necessary instrumentation. If you have a project that would benefit from CLEM and/or vEM, we invite you to reach out for further discussion.

1. https://www.volumeem.org/#/

2. https://doi.org/10.1111/boc.201600024

3. https://doi.org/10.1002/jor.24968

4. https://doi.org/10.1016/j.cbpa.2023.102369

5. https://doi.org/10.1038/d41586-023-00178-y

 

 

MSRL Interactive Data Analysis

by Kyle Swovick, PhD

INTRODUCTION

Over the past few years, the proteomics field has made huge strides in virtually every aspect: sample prep, data acquisition, data processing, and statistical analysis. These advancements, while enabling amazing science and uncovering new biology, also result in ever-increasing data file sizes. Receiving an Excel spreadsheet with protein expression levels, log2 fold changes, and p-values for nearly 10,000 proteins can be daunting for the uninitiated researcher who is not comfortable working in coding environments.

At the MSRL, as discussed in the previous newsletter, we have revamped our acquisition techniques, resulting in almost a 100% increase in quantifiable proteins. This has dramatically increased the size of the reports we send and, with it, the challenge for researchers analyzing their own data. To address this, we have spent the last year building a tool that allows researchers with no coding experience to visualize and interact with their own data.

DATA DELIVERABLES

Previous Format

Long-time users of the MSRL will be well aware of the usual format of the data reports we send: Excel files in which each row is a protein and the columns contain:

• Protein identifiers (Gene Name, Uniprot ID, Protein Name)

• Number of peptides ID’d per sample

• Protein abundance for each sample

• If there were biological groups or conditions:

– median abundance for each group

– log2 fold change between the requested comparisons

– p-value from a Student's t-test between groups (a sketch of how these per-protein statistics can be computed is shown below)
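For readers curious how numbers like these are typically derived, the following is a minimal sketch, using base R only, that computes group medians, log2 fold changes, and Student's t-test p-values from a protein-by-sample abundance matrix. The object names (abund, groups) are assumptions for the example, not the MSRL's actual pipeline or report format.

  # Minimal sketch (base R): per-protein group medians, log2 fold change, and
  # Student's t-test p-value. Assumes `abund` is a numeric matrix of log2-transformed
  # abundances (proteins in rows, samples in columns) and `groups` is a factor of
  # length ncol(abund) with two levels, e.g. "dKO" and "SCR". Illustrative only.
  group_stats <- function(abund, groups) {
    g <- levels(groups)
    a <- abund[, groups == g[1], drop = FALSE]   # samples in the first group
    b <- abund[, groups == g[2], drop = FALSE]   # samples in the second group
    med_a <- apply(a, 1, median, na.rm = TRUE)
    med_b <- apply(b, 1, median, na.rm = TRUE)
    data.frame(
      protein  = rownames(abund),
      median_1 = med_a,
      median_2 = med_b,
      log2_fc  = med_a - med_b,                  # difference of medians on the log2 scale
      p_value  = vapply(seq_len(nrow(abund)),
                        function(i) t.test(a[i, ], b[i, ])$p.value,
                        numeric(1))
    )
  }

  # Hypothetical usage (three replicates per group):
  # stats <- group_stats(abund, factor(c("dKO", "dKO", "dKO", "SCR", "SCR", "SCR")))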

Updated Format

From now on, for any project, MSRL users will still receive the same Excel file. Additionally, if there are any group comparisons, the user will now also receive a zipped folder that will contain an HTML document with several interactive figures (shown below) to help with understanding their data quality and differential expression. For example, by hovering over a dot in a volcano plot, you will see what that protein is along with the log2 fold change and p-value.

 

Included Figures

 

DATA QUALITY METRICS

To quickly assess the quality of the data, we use several different figures (a simplified sketch of how figures like these can be generated follows the list):

  1. A heatmap showing the correlation between samples, which also hierarchically clusters the samples. In an ideal world, all samples within the same group should cluster together, as shown here by the horizontal bar (Figure 1).

 


Figure 1. Correlation-based Hierarchical Clustering

 

2. Distributions of the CVs of protein abundances within a group (a lower CV indicates less variation within a group; Figure 2). We also include the CVs when the abundances are measured across all groups; in nearly every experiment, these should be higher than the group-specific CVs.


Figure 2. Distribution of Protein Abundance CVs

 

3. Distributions of protein abundances for each sample. Ideally, these distributions should be relatively similar to each other, especially for samples within the same group.


Figure 3. Distribution of Protein Abundances
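For researchers who want a starting point of their own, here is a simplified, non-interactive sketch of figures in the spirit of Figures 1-3, using only base R. The matrix abund (proteins in rows, samples in columns, raw positive abundances) and the vector groups are assumptions for the example; the code behind the MSRL's actual report may differ.

  # Simplified, non-interactive approximations of Figures 1-3 (base R).
  # Assumes `abund` is a proteins-by-samples matrix of positive abundances and
  # `groups` is a character vector labelling each sample's biological group.

  # Figure 1-style: sample-sample correlation with hierarchical clustering
  corr <- cor(abund, use = "pairwise.complete.obs")
  heatmap(corr, symm = TRUE, main = "Sample correlation")

  # Figure 2-style: per-protein CVs within each group, plus across all groups
  cv <- function(x) 100 * sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
  cv_by_group <- sapply(unique(groups), function(g)
    apply(abund[, groups == g, drop = FALSE], 1, cv))
  cv_by_group <- cbind(cv_by_group, All = apply(abund, 1, cv))
  boxplot(cv_by_group, ylab = "CV (%)", main = "Protein abundance CVs")

  # Figure 3-style: abundance distribution for each sample
  boxplot(log2(abund), las = 2, ylab = "log2 abundance",
          main = "Protein abundances per sample")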

 

Volcano Plot

To see which proteins are differentially expressed between conditions, volcano plots are commonly used. These plot the -log10 of the p-value against the log2 fold change in protein expression. In Figure 4 we have an example plot with a cut-off of a log2 fold change greater than 1 or less than -1 (equivalent to a 2-fold change in expression) and a p-value of 0.05; a handful of proteins are either expressed higher in dKO (highlighted in dark blue) or higher in SCR (highlighted in red). Additionally, if you hover over any dot, you can see exactly which protein it corresponds to.


Figure 4. Volcano Plot of dKO vs SCR.
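If you would like to reproduce an interactive plot like Figure 4 yourself, the sketch below builds a hover-enabled volcano plot with the plotly R package. The data frame res and its columns (protein, log2_fc, p_value) are illustrative assumptions, the direction of the fold change depends on how the dKO vs SCR comparison was defined, and this is not the MSRL's own report code.

  # Sketch of an interactive volcano plot with hover labels (plotly for R).
  # Assumes `res` has columns protein, log2_fc, and p_value (illustrative names).
  library(plotly)

  res$neg_log10_p <- -log10(res$p_value)
  res$status <- ifelse(res$p_value < 0.05 & res$log2_fc >  1, "Higher in dKO",
                ifelse(res$p_value < 0.05 & res$log2_fc < -1, "Higher in SCR",
                       "Not significant"))   # direction assumes dKO-over-SCR fold change

  plot_ly(res,
          x = ~log2_fc, y = ~neg_log10_p, color = ~status,
          text = ~paste0(protein,
                         "<br>log2 FC: ", signif(log2_fc, 3),
                         "<br>p-value: ", signif(p_value, 3)),
          hoverinfo = "text",
          type = "scatter", mode = "markers")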

 

BENEFITS TO INTERACTIVE HTML FILE

We believe that introducing these figures into our deliverables provides several benefits for users, including:

• Reducing the time needed for researchers to analyze their data

• Figures researchers can include in presentations and papers (note: in Figure 4 there is a camera icon you can click to export a .png of the figure)

• In Figure 1, there are two grey arrows with “Code” next to them. Clicking those arrows expands the document to reveal the code used to generate the figures (Figure 5). This allows researchers who are familiar with coding (specifically R) to recreate these figures and tweak settings to better suit their own needs.


Figure 5. Expandable Code Block.

 

FUTURE PLANS AND UPGRADES

Making a solid foundation that we can build upon was of great importance to us. As novel analyses, techniques, and visualization methods come along, we can implement them within this framework. In the short term, for example, we are planning to add several more figures, including Gene Ontology (GO) network analyses and protein-family heatmaps.

 

FCR (Flow Core Resource) Beer Dip

by Meghann O'Brien


INGREDIENTS (yields 4 cups)

  • 2 (8 oz) packages cream cheese, softened
  • 3 tablespoons ranch dressing mix or 1 package ranch dressing mix
  • 2 cups shredded sharp cheddar cheese
  • 2 green onions, chopped
  • ~ ½ cup beer 
  • 1 jalapeno, chopped

DIRECTIONS

  1. In a bowl, combine cream cheese and ranch dressing mix
  2. Stir in cheese, green onions, jalapeno 
  3. Add the beer until you reach your desired consistency
  4. Cover and refrigerate overnight
  5. Serve with pretzels or crackers

 

"Way Too Much Work" Short-Rib Chili

By Kyle Swovick


Nothing really fends off those cold and damp WNY November days and keeps the soul warm like chili. And this chili, slightly modified from The Food Lab by J. Kenji Lopez-Alt, while a lot of work, may honestly be one of the best you’ve ever had: I’ve brought a native Texan near tears, my friends will not let me attend our yearly trip if I don’t bring it with me, and it’s almost made a vegetarian rethink her choices.

While yes, there are A LOT of ingredients and steps, I think they are all worthwhile. But if you don’t have the time before heading out to Orchard Park, or simply don’t want to, I’ve included a few shortcuts that get you roughly 75% of the way to the final product. Also, short rib is silly expensive right now (I recommend getting some at the Asia Food Market on Brighton-Henrietta Town Line Rd), so substituting another highly marbled cut with lots of connective tissue, like chuck, is a really good option. It is also important to note that this dish gets BETTER as it sits, so it might be best to prepare the beans and meat on Friday night, cook on Saturday, and then bring the finished dish with you to the tailgate, where you’ll just need to warm it up.

INGREDIENTS

• 5 lbs bone-in short rib or 3 lbs boneless short rib, trimmed of excess fat. Optionally, use 3 lbs boneless chuck.

• Salt and black pepper

• 2 tbsp vegetable or canola oil

• 1 large yellow onion, finely diced

• 1 jalapeno or 2 serrano peppers, finely chopped

• 4 cloves garlic, minced

• 1 tbsp dried oregano

• 1 cup Chile Paste (instructions below). Optionally, 1/2 cup chili powder.

– 6 ancho, pasilla, or mulato chiles, seeded and torn into 1-inch pieces

– 3 New Mexico red, California costeno, or choricero chiles, seeded and torn into rough 1-inch pieces

– 2 cascabel, arbol, or pequin chiles, seeded and torn in half

– 2 cups chicken stock

• 4 cups chicken stock (preferably homemade)

• 1 pack gelatin (optional; use if using store-bought stock)

• 1/2 cup beer (preferably Labatt Blue or Genny R&W)

• 1/2 cup coffee

• 4 anchovy filets, mashed into a paste with the back of a fork

• 1 tsp Marmite (optional)

• 1 tbsp soy sauce

• 2 tbsp tomato paste

• 2 tbsp cumin seeds, toasted and ground

• 2 tsp coriander seeds, toasted and ground

• 1 tbsp unsweetened cocoa powder

• 3 tbsp instant cornmeal

• 2 bay leaves

• Kidney beans. Preferably 1lb dried, soaked in salted water at room temperature for at least 8 hrs. Optionally 2.5 lbs canned kidney beans, drained.

• 1 28 oz can crushed tomatoes

• 1/4 cup apple cider vinegar

• 1/4 cup whiskey (optional)

• 2 tbsp hot sauce

• 2 tbsp dark brown sugar

• Garnishes as desired

MAKING THE CHILI PASTE

Substituting the standard chili powder with homemade chile paste is really what brings this to a new level, so I highly recommend doing it. Not only does it improve the texture, because you’re not adding a ton of chili powder, but you can fine-tune the mix to include whatever chiles you want, making the flavor and spice level unique to your kitchen. Also, you can make one large batch, portion the paste into ice cube trays, and, once frozen, store the cubes in bags for up to a year. That way, whenever you make a dish that calls for chili powder, you can sub 2 tablespoons of this paste for every 1 tablespoon of powder.

STEPS

1. Toast the chiles in a Dutch oven or stock pot over medium heat, stirring frequently, until slightly darkened, with an intense toasty aroma, 2 to 5 minutes.

2. Add the chicken stock and simmer until the chiles have softened, 5 to 8 minutes.

3. Transfer the liquid and chiles to a blender and blend, starting on low speed and gradually increasing the speed to high, scraping down the sides as necessary, until a completely smooth puree is formed, about 2 minutes. Add water if the mixture is too thick to blend. Let cool.

MAKING THE CHILI

The Night Before Cooking (completely optional):

• If using dried kidney beans: Add the beans to enough salted room temperature water to cover by several inches (the beans will soak up the water and expand overnight).

• Pat the beef dry and season all over with plenty of salt (think about how a sidewalk looks after it’s been snowing for 15 minutes). Place on a wire rack over a baking sheet in the fridge and let sit.

Cooking Day

1. Season the beef on all sides with pepper and salt (if not salted overnight). Heat the oil in a large Dutch oven over medium-high heat until smoking. Add half of the meat and brown well on all sides, reducing the heat if the fat begins to smoke excessively or the meat begins to burn (depending on the size of your pot, you may need more than two batches; it is important not to crowd the pot, to ensure good browning).

2. Transfer to a plate and repeat step 1 with the remaining meat.

3. Reduce the heat to medium-low, add the onion, and cook, scraping up the browned bits from the bottom of the pan with a wooden spoon and then stirring frequently, until softened but not browned (6-8 minutes).

4. Optional: if using store-bought stock, pour it into a dish, sprinkle the packet of gelatin over top, and let it bloom. (This only improves the mouthfeel of the final dish and will not affect the flavor, so feel free to skip it.)

5. Add the fresh chile, garlic, and oregano and cook, stirring, until fragrant (~ 1 minute).

6. Add the chile paste and cook, stirring and scraping constantly, until it leaves a coating on the bottom of the pot (2-4 minutes).

7. Add the chicken stock and scrape any browned bits from the bottom of the pot.

8. Add the anchovies, beer, coffee, soy sauce, tomato paste, ground spices, cornmeal, and Marmite if using. Whisk to combine and keep warm over low heat.

9. Adjust an oven rack to the lower-middle position and preheat the oven to 225F.

10. Remove the meat from the bones and reserve the bones (if using bone-in short-rib). Chop all the meat into rough 1/4-1/2 inch pieces.

11. Add any accumulated juices from the cutting board to the Dutch oven, then add the chopped beef (and bones, if you have them) and the bay leaves to the chili.

12. Bring to a simmer, cover, and place in the oven for 1 hr.

13. If using dried beans: Drain the beans and transfer to a pot and cover with water by 1 inch. Season with salt and bring to a boil over high heat then reduce to a simmer and cook until the beans are nearly tender (about 45 minutes). Drain.

14. Remove the chili from the oven and add the tomatoes, vinegar, and beans.

15. Return to the oven with the lid slightly ajar and cook until the beans and beef are tender and the stock is rich and slightly thickened (1.5-2 hrs longer). Add water if necessary to keep the beans and meat mostly submerged (a little bit poking out is OK).

16. Remove the bay leaves and bones. Add the whiskey, hot sauce, and brown sugar and stir to combine. Season to taste with salt, pepper, and vinegar.

17. Let sit overnight in an airtight container.

Game Day

• Warm the chili over low heat, stirring occasionally, and serve with garnishes as desired.

 

 

Spring 2023 Content


Our Flow Cytometry Resource Offers High-Dimensional Analysis

by Jim Java

Flow cytometry experiments often conclude with the production of FCS computer files containing investigators' raw results. CART's Flow Cytometry Resource (FCR) can now provide standardized, reproducible analyses of FCS files from flow experiments.

After a flow cytometry experiment, it's not uncommon for investigators to import their FCS files into software such as FlowJo or FCS Express, and then proceed with an analysis "by hand", which can be time-consuming and somewhat subjective. In the interest of saving time and limiting subjectivity, the FCR data analysis team has developed a soup-to-nuts analysis pipeline for the R programming environment: we call it "flowpipe", and it can semi-automatically handle most analytical tasks from pre-processing/pre-gating to phenotype clustering to differential-expression modeling.

The results of a flowpipe run include UMAP visualizations, spreadsheets summarizing the phenotype clusters, per-cluster FCS files, and a detailed summary report of sample or group differences. Although we developed flowpipe to be usable by researchers savvy in R programming (and we're glad to help you set it up!), we recommend that you request a flowpipe run as part of your FCR scheduling, so that our data-analysis team can manage the process.
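To give a feel for what those stages involve, here is a rough, generic sketch of a single-sample workflow (read an FCS file, transform, cluster, embed with UMAP) using the Bioconductor flowCore package and the CRAN uwot package. It is not flowpipe's API; the file name, marker channels, arcsinh cofactor, and cluster count are all illustrative assumptions.

  # Generic illustration of the stages flowpipe automates; NOT flowpipe's API.
  library(flowCore)   # Bioconductor: FCS file I/O
  library(uwot)       # CRAN: UMAP embedding

  ff   <- read.FCS("sample1.fcs", transformation = FALSE)  # hypothetical file name
  expr <- exprs(ff)[, c("CD3", "CD4", "CD8")]              # illustrative marker channels
  expr <- asinh(expr / 150)                                # arcsinh transform; cofactor is illustrative

  clusters <- kmeans(expr, centers = 20)$cluster           # crude stand-in for phenotype clustering
  emb      <- umap(expr, n_neighbors = 15, min_dist = 0.1) # 2-D embedding for visualization

  plot(emb, col = clusters, pch = 16, cex = 0.3,
       xlab = "UMAP 1", ylab = "UMAP 2")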

The length of a flowpipe analysis depends on the number and size of the FCS files provided to the software, but a typical run takes a few hours. Our software has aggregated a number of common techniques and algorithms (well-represented in the peer-reviewed literature) into a flexible parallel-processing framework that's meant to reduce investigators' analytical workload; so, contact us if you'd like more information about sending your FCS files through the flowpipe pipeline.

As part of the flowpipe analysis process, we ask investigators to provide "metadata" relevant to their flow-cytometry experiment: that is, for example, whether samples are cases or controls; a list of pre-defined phenotype gates for drilling down to interesting cell subsets; and patient data that can be incorporated into the differential-expression models. For more information, please check out the flowpipe GitHub repository or contact Jim Java. We can also provide statistical analyses outside the scope of the pipeline—inquiries are welcome!


Table 1. Typical DIA protein coverage for common sample types handled by the MSRL.

Our Mass Spectrometry Resource Laboratory Overhauls Proteomic Data Acquisition

by Kyle Swovick, PhD

Over the past two years, the MSRL has been undergoing an overhaul of our proteomic data acquisition methods. 

For decades, proteomic data has been primarily collected through data-dependent acquisition (DDA). In this method, the mass spectrometer isolates and fragments a single peptide at a time for identification, repeating this process throughout the entire gradient. Recently, a method termed data-independent acquisition (DIA) has been introduced that promises vast improvements over DDA proteomics.

When performing DIA experiments, the mass spectrometer isolates and fragments every peptide within a predefined mass range. Performing fragmentation and identification this way, in theory, offers several benefits, including greater coverage and fewer missing values. These gains come primarily from eliminating the stochastic precursor selection of DDA: DDA experiments have historically been biased toward high-intensity peptides, since those have a greater propensity to be chosen for fragmentation, leaving many lower-intensity peptides ignored.

DIA experiments, however, alleviate this problem: a peptide will be fragmented regardless of its intensity, leading to more possible fragment ions that can be used for identification. When the instrumentation improvements offered by DIA are paired with recent advances in neural-network and machine-learning software, the results are truly extraordinary.

Using these cutting-edge techniques, the MSRL saw nearly 100% and 50% increases in quantified proteins for tissue and cell culture samples, respectively (Figure 1A). The new DIA pipeline also achieved a 25% improvement in data completeness (Figure 1B).

Combined, these improvements can translate into as much as a 150% increase in usable protein measurements when looking for differentially expressed proteins (roughly a doubling of quantified proteins multiplied by 25% better completeness: 2 × 1.25 = 2.5×, or a 150% increase). If you are curious about what kind of coverage DIA can yield for your specific biological matrix, Table 1 includes many of the common sample types the MSRL handles. And if you are intrigued by what DIA can offer your own research, you can reach out to the MSRL with any questions.

Stay tuned for the next installment where the MSRL will talk about the improvements they’ve made to their data analysis pipeline to help researchers delve into and interact with their data.


Experiments in the Kitchen: Beer in the Sheath Tank

by Steven Polter

Whether or not some of us want to admit it, we all still play pretend in some way or another. I, myself, enjoy pretending I am a brewmaster. I’ve been a homebrewer for years, and many of my associates, including my brewing partner, will tell you that my relationship with brewing flirts with the line between hobby and habit. In 2022 I had the pleasure of being asked to play brewmaster for CART and provide a few kegs of beer for a retreat last July. Recently I was asked to don the mask once again to share and discuss one of those recipes, so I chose Supercrisp 570, an American Kolsch.

Perfect for the spring days just ahead of us, this beer beckons the reawakening from winter’s dim but cozy torpor. Bright straw yellow and exuberant, Supercrisp 570 positively pops with a floral nose and lemon-citrus flavor backed by pleasant, bready malt. This brew was designed to shine no matter when or where you drink it! 

Ingredients (For a 5-gallon batch)

  • 12 lb. 2-Row Brewer’s Malt (milled)
  • 3 oz. Lemondrop Hops (T-95 pellets, 2 oz. for the boil and 1 oz. for dry hop)
  • 1 pouch WLP 810 San Francisco Lager Yeast 
    https://www.whitelabs.com/yeast-single?id=220&type=YEAST&style_type=2
  • Water (~7.5 gal. total)

Brewing notes

  • Set yeast and hops aside to come to room temperature during the process
  • Step mash* with 4 gal. H2O. USE LOW HEAT AND STIR CONSTANTLY WHEN RAISING THE MASH TEMPERATURE TO THE NEXT STEP! THERE WILL BE NO SCORCHING!
    • Heat H2O to 135F* and add milled malt. Stir to mix well. Rest 20 minutes at 125 F
    • Raise temp and rest 30 minutes at 140 F
    • Raise temp and rest 30 minutes at 150 F
  • Mash out and set sweet wort aside
  • Sparge with 3.5 gal. H2O at 170 F for 10 minutes. Recirculate/vorlauf until the wort runs clear after the 10 minute rest, then sparge out into your kettle containing the sweet wort from the first run
  • Boil 60 minutes. Add hops as follows:
    • 1 oz Lemondrop 60 minutes (this notation means the hops spend the listed amount of time in the boiling beer, in this case these hops are added just after the wort begins to boil)
    • 1 oz Lemondrop 30 minutes
  • When boil is complete, cool wort to ~70 F
  • With clean hands and using a clean, sanitized funnel, transfer the wort to a clean and sanitized fermentation vessel. Take a sample of wort at this point for testing of specific gravity. Place a foil cap over the mouth of the vessel after the wort is transferred while the yeast is readied for pitching
  • Again with clean hands and using clean/sanitized scissors, cut the yeast pouch carefully over the open mouth of the fermentation vessel and gently, carefully pitch the yeast into the wort
  • Replace the foil cap over the mouth of the vessel and CAREFULLY shake the vessel vigorously for 30 seconds to 1 minute (this serves to oxygenate the wort which is crucial for initial yeast health/activity as well as mix things up nicely)
  • Ready a clean, sanitized airlock and stopper assembly and quickly peel back the foil cap and place the stopper/airlock combo firmly into the mouth of the vessel
  • Label your vessel (I speak from experience) with the name, date, and original gravity of the wort
  • Give your vessel a gentle slap on the side and take a moment to feel accomplished, maybe crack a beer
  • Consider covering your fermentation vessel with an old t-shirt or whatever else will help keep light out of it. Seriously, being a fungus, yeast is not in love with bright light or direct sunlight. Definitely not bright, direct sunlight, which will also zap your tasty, hard-earned flavor compounds!

*A step mash is a technique that entails resting the mash at increasing temperature steps to maximize sugar extraction and provide a greater breadth of sugar types in the wort, as well as leaving some non-fermentable sugar, which provides a pleasant, bready sweetness in the finished beer. When heating the water for the first step in the mash, be sure to overshoot by about 10 degrees F, as the thermal mass of the grains added to the water (in these relative volumes) will pull the mash temperature down by about 10 degrees after mixing.

Fermentation

  • Ferment at ~60 F (basement/cellar temperature is prime for this!) for 3 weeks
  • Transfer to secondary fermentation and add 1 oz Lemondrop. Secondary for 19 to 21 days, still at ~60 F
  • Transfer to keg and pressurize to begin carbonation. If possible, place the keg into a temperature controlled lagering chamber and step the temperature down by 2-3 degrees F each day until it reaches 35-36 F. During this process, pressurize the keg each day (to somewhere around 25 PSI) to slowly carbonate the beer while you cold-condition it. If there is no access to a temperature controlled lagering chamber the keg can be pressurized and cold-conditioned in a regular old refrigerator or kegerator without temperature control. The beer will still be good!
  • Cold-condition and carbonate in this way until desired carbonation level is reached. Cold-conditioning can be continued after carbonation level is reached, up to 3-4 weeks. Periodically draw small amounts of beer from the keg to pull out any sediment that has crashed to the bottom of the keg, and to taste, of course!
  • Serve and enjoy!

If you are not experienced in brewing and have questions for any reason, feel free to reach out to me (Steven Polter) and I will happily discuss, clarify, and provide additional information!