Did you know that you are probably a human knockout for ~20 genes?

A recent article in Nature News in Focus (30 October 2014; Vol. 514, p. 548) reports on the American Society of Human Genetics meeting in San Diego, California. Daniel MacArthur lamented our continuing reliance on mice and rats, with too little of today’s focus on humans. By sequencing the protein-coding regions of ~90,000 people, his team has found ~150,000 naturally knocked-out genes. Amazingly, he estimates that every person is a knockout for at least one copy of ~200 genes, and for both copies of ~20 genes.

If you are worried that your research relies too heavily on mouse models, or you are fed up with the off-target effects of siRNAs and CRISPR/Cas9, then perhaps you should consider making your own knockouts or knock-ins from human induced pluripotent stem cells. BAC-mediated homologous recombination is becoming a powerful tool for engineering human embryonic stem cells (hESCs), for example for gene targeting [1-4]. The sizes of the homology arms play a key role in success: they have to be considerably longer than those used for modifying murine ESCs. For human cells, investigators have successfully used constructs with a ~4.5 kb short homology arm and an 8-10 kb long homology arm [1-3].
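
For the computationally inclined, here is a minimal sketch of pulling homology arms of these sizes out of a BAC sequence with Biopython. The file name, insertion coordinate and exact arm lengths are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch (not from the cited papers): slice long and short
# homology arms out of a BAC clone sequence around a chosen edit site.
from Bio import SeqIO

bac = SeqIO.read("my_bac.fasta", "fasta")  # hypothetical BAC clone sequence
insertion_site = 95_000                    # assumed 0-based position of the edit

long_arm_len = 9_000    # 8-10 kb long homology arm, as in refs [1-3]
short_arm_len = 4_500   # ~4.5 kb short homology arm, as in refs [1-3]

long_arm = bac.seq[insertion_site - long_arm_len:insertion_site]
short_arm = bac.seq[insertion_site:insertion_site + short_arm_len]
print(f"long arm: {len(long_arm)} bp, short arm: {len(short_arm)} bp")
```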

Gene Bridges offers powerful, state-of-the-art Red/ET recombineering technology to help you prepare such targeting constructs with long homology arms. Check out the Cambio website for Gene Bridges BAC modification products!

Generation of a targeting construct for recombination in hESCs using Gene Bridges Red/ET kits:

[Figure: Gene Bridges Red/ET recombineering workflow graphic]

References:

[1] Genetic correction of Huntington’s disease phenotypes in induced pluripotent stem cells; An MC et al., Cell Stem Cell, 2012, 11, 253-63*

[2] NKX2-5(eGFP/w) hESCs for isolation of human cardiac progenitors and cardiomyocytes; Elliott DA et al., Nat Methods, 2011, 8, 1037-40*

[3] Modeling disease in human ESCs using an efficient BAC-based homologous recombination system; Song H et al., Cell Stem Cell, 2010, 6, 80-89

[4] Genetic modification of human embryonic stem cells for derivation of target cells; Giudice A and Trounson A, Cell Stem Cell, 2008, 2, 422-33*

*Papers 1, 2 and 4 use Gene Bridges technology

Artificial sweeteners and gut flora – a wake-up call?

If you have ever wondered what all that sugar-free gum and cola does to your body, then you might be interested in a paper by Suez et al. that appeared in Nature this week.
The new report provides compelling evidence that consumption of non-caloric artificial sweeteners alters the gut microbiota in mice and gives rise to glucose intolerance, independently of a high-fat diet. The report also describes how many of these observations hold true in humans. Suez et al. reasoned that since diet modulates the gut microbiota, and alterations in gut flora exert profound effects on host physiology and metabolism, it would be interesting to compare the microbiota of a group fed artificial sweeteners with that of a control group. To do this, they isolated faecal DNA from murine and human stool samples using the MO BIO PowerSoil kit and sequenced it on the Illumina platform, both by 16S ribosomal RNA gene profiling and by shotgun metagenomics.
The authors point out that artificial sweeteners were introduced into our diets in the hope of reducing caloric intake and normalising blood glucose levels while keeping our ‘sweet tooth’ at bay. They go on to say that, together with other major shifts in our diets, the increase in artificial sweetener consumption coincides with the dramatic rise in the obesity and diabetes epidemics. Suez et al. conclude their paper with some food for thought (or is thought for food the better expression?): “Our findings suggest that non-caloric artificial sweeteners may have directly contributed to enhancing the exact epidemic that they themselves were intended to fight”.
You can find more info on MO BIO PowerSoil kits by clicking here.
If you want to read the Nature paper for yourself then just click on the link below:
Artificial sweeteners induce glucose intolerance by altering the gut microbiota.
Suez J, Korem T, Zeevi D, Zilberman-Schapira G, Thaiss CA, Maza O, Israeli D, Zmora N, Gilad S, Weinberger A, Kuperman Y, Harmelin A, Kolodkin-Gal I, Shapiro H, Halpern Z, Segal E, Elinav E.
Nature. 2014 Oct 9;514(7521):181-6. doi: 10.1038/nature13793. Epub 2014 Sep 17.

Microbiome News – ‘Are you having fish and chips tonight?’

Last month, Schulz et al. from the Technical University of Munich published a study in Nature showing that a high-fat diet dysregulates the host microbiome and promotes carcinogenesis in mice. Introducing a high-fat diet to mice bearing an oncogenic form of the small GTPase Ras (K-rasG12Dint) promoted tumour progression in the small intestine of this genetically susceptible strain. Interestingly, this was independent of obesity. The high-fat diet, in combination with the mutation in the small G-protein, resulted in dysbiosis of the gut flora, and the authors propose a mechanism involving decreased Paneth-cell-mediated antimicrobial host defence and reduced dendritic cell recruitment and function in gut-associated lymphoid tissues.

Next, the investigators crossed mice bearing the K-Ras mutation with MyD88 knockout mice. MyD88 is an inflammatory signalling adaptor protein whose functions include activating signalling when Toll-like receptors detect microbial products. Blocking inflammatory signals through MyD88 deficiency prevented the combined effects of the K-Ras mutation and the high-fat diet on tumourigenesis, consistent with dysbiosis of the microbiota being a causal factor. Strikingly, the disease was transmissible: transferring faeces from K-Ras mutant mice on a high-fat diet to K-Ras mutant mice on a normal diet was sufficient to promote tumourigenesis. Antibiotic treatment also reversed the effects of the high-fat diet. So, unfortunately, we have yet another worry before we tuck into our next cod and chips!

The good news is that if you are involved in microbiome research, we offer kits with tried and tested performance for purifying DNA and RNA from faecal samples, soil, and water. The MO BIO kits deliver high-purity DNA/RNA with PCR-inhibitor removal and are compatible with high-throughput sequencing, qRT-PCR, and other downstream techniques.

The MO BIO PowerFecal® and PowerSoil® DNA Isolation Kits have been used in the Human Microbiome Project and are a great way to get going. If you need to scale up your number of isolations, we offer the PowerSoil® kit in 96-well silica spin plate and magnetic bead-based formats. These kits are functionally equivalent to the PowerFecal® kit (both can be used on stool with equal success), so we recommend the PowerSoil® kit if you plan to scale up in the future.

For RNA we recommend the PowerMicrobiome™ RNA kit. If you omit the on-column DNase step, you can use it to purify RNA and DNA at the same time. We also offer a PowerMag® Microbiome RNA/DNA kit specifically for high-throughput purification of RNA and DNA using magnetic bead technology.

The full reference for the paper is below:

High-fat-diet-mediated dysbiosis promotes intestinal carcinogenesis independently of obesity.

Schulz MD, Atay C, Heringer J, Romrig FK, Schwitalla S, Aydin B, Ziegler PK, Varga J, Reindl W, Pommerenke C, Salinas-Riester G, Böck A, Alpert C, Blaut M, Polson SC, Brandl L, Kirchner T, Greten FR, Polson SW, Arkan MC.

Nature. 2014 Aug 31. doi: 10.1038/nature13398. [Epub ahead of print]

Thank you and goodnight

It transpires that my previous blog was the last of the series. At the risk of sounding a bit ‘music awards’, I just have some important thank yous (well, only two, but they are important). Firstly, to Cambio, who have been absolutely great to me over the last year or so. As I transitioned from PhD student to research fellow, the support offered to me by the company was second to none. I will no doubt catch up with the guys over a beer or two at many conferences to come. Secondly (and lastly), to all the people who have read my blogs, tweeted them, re-tweeted them, and shared them on Facebook. I hope some of my rambles were of practical use and, if nothing else, entertaining. Science is facing tough times, with funding cuts and arguably more PhD graduates than ever before.

Science is great. We do it because we love it. But it’s competitive, and if you’re not moving forward you’re moving backward. There are highs and lows. At times, major lows. We might not agree with the system, but it’s the system we have to work in. Through everything, have faith in yourself, work hard, and enjoy life.

If you ever see me at a conference, please introduce yourself; the best thing about science is meeting awesome people!

I wish you all the best of luck with everything.

Thank you all again,

Chris

What is ‘Omics’? Part 2

Metabolomics

MetaboLomics is often confused with metaboNomics, probably because they mean exactly the same thing. The two terms were coined around the same time, so pick your favourite and run with it. Metabolomics is probably the more common, and is usually used in connection with liquid chromatography-mass spectrometry (LCMS) experiments, whereas metabonomics is usually used in NMR-based studies. The technology has existed for some time but has gained significant momentum in recent years, helped by higher-resolution mass spectrometers. If you believe everything Wikipedia tells you (who doesn’t?), metabolomics is the study of all metabolites in a biological cell, tissue, organ or organism, these being the end products of cellular processes. So essentially it is the study of the small molecules, typically less than 1,500 Da, that cells produce.

There are broadly two ways to obtain metabolomics data: NMR and mass spectrometry (MS). NMR has the longer history, but MS-based approaches are increasingly common and will be the focus here. Any single technique introduces bias, so it is advisable to use both where possible, though this is not always practical and is typically not a requirement for publication. The potential applications of metabolomics are expansive: it has been employed in single/co-culture, plant and crop, biofluid, and organ-based experiments (not all at the same time, of course). Like the boom in gut microbiome studies across all aspects of disease, I expect metabolomics to follow suit. In my primary research I have employed LCMS-based metabolomics to explore the metabolites in stool, in the hope of better understanding cellular processes in the gut and developing potential biomarkers to aid disease diagnosis. This type of work is not to be undertaken lightly and requires extensive optimisation. The analysis of the resulting data is also hugely complex and demands many user and computational hours. Following initial analysis it is important to return to the samples and carry out targeted MSn analysis on the compounds of interest; for absolute quantification and identification, a standard of the compound should be run at a range of known concentrations, as sketched below. As the technology grows, so too will the methods, with groups dedicating years to developing techniques from which everyone can benefit. One example is passing a sample through both a C18 column and a HILIC column before injection into the MS, separating the hydrophobic and hydrophilic compounds respectively.
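
As a concrete illustration of that last quantification point, here is a minimal sketch of a calibration curve assuming only NumPy; the concentrations and peak areas are invented for the example.

```python
# Minimal sketch of a calibration curve for absolute quantification:
# run an authentic standard at several known concentrations, fit peak
# area against concentration, then back-calculate the sample.
# All numbers below are invented for illustration.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])                   # standard concentrations (uM)
peak_area = np.array([2.1e4, 9.8e4, 2.0e5, 1.01e6, 1.98e6])   # integrated LCMS peak areas

slope, intercept = np.polyfit(conc, peak_area, 1)   # simple linear fit
r2 = np.corrcoef(conc, peak_area)[0, 1] ** 2        # check linearity before trusting it

sample_area = 4.6e5                                 # peak area measured in the sample
sample_conc = (sample_area - intercept) / slope
print(f"R^2 = {r2:.4f}, estimated concentration = {sample_conc:.2f} uM")
```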

Proteomics

Proteomics, the study of all the proteins in a given sample, can be seen as the link between genomics and metabolomics. The assumption is that an increase or decrease in protein abundance reflects potentially important up- or down-regulation in response to a variable, for example disease or nutrient availability. There is an overwhelming number of methods and techniques for proteomics, and the choice should be dictated by the primary research question. Unlike in genomics, gel-based methods still provide good resolution and have featured commonly in publications over the last decade. Two-dimensional (2D) gels separate proteins based on charge and size, with the proteins visualised as spots; this is semi-quantitative since, all things being even, the intensity of a spot reflects the relative abundance of the protein. SDS-PAGE gels are one-dimensional, with proteins separated electrophoretically according to size. Identification of spots or bands is usually necessary and involves excision from the gel and digestion into smaller peptide fragments. For multiple reasons, which won’t be discussed here, gel-based methods can be circumvented altogether by digesting samples in solution into peptides and processing them directly by liquid chromatography-mass spectrometry (LCMS). Following acquisition of MS/MS data, peptides can be identified using search engines such as Mascot, which match the MS/MS fingerprints of peptides against genome-derived sequence databases to infer protein identity and cellular function (a flavour of the theoretical side is sketched below). This overview is just the tip of the proteomics iceberg, and I encourage anyone thinking of embarking on proteomics to understand the relative merits of all the available methods.
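
To make the database-matching idea less abstract, here is a minimal sketch of an in-silico tryptic digest with monoisotopic peptide masses, the sort of theoretical fingerprint a search engine compares spectra against. The sequence is a toy example; this is not Mascot’s actual scoring.

```python
# Minimal sketch: digest a protein with trypsin (cleave after K/R,
# but not before P) and compute monoisotopic peptide masses.
import re

# Standard monoisotopic amino acid residue masses (Da)
MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056  # mass of H2O added to each free peptide

def tryptic_digest(protein: str) -> list[str]:
    # split after K or R unless the next residue is P
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

seq = "MKWVTFISLLFLFSSAYSRGV"  # toy sequence for illustration
for pep in tryptic_digest(seq):
    mass = sum(MASS[aa] for aa in pep) + WATER
    print(f"{pep:>20s}  {mass:10.4f} Da")
```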

As systems biology is increasingly applied to complex research questions, the field of omics will continue to expand and evolve. As these technologies become more accessible, with the capacity to generate huge amounts of data, it is important that researchers understand them and implement them accordingly. Undertaking any omics experiment is NOT a quick way of amassing data, not when done properly anyway. To quote a leading metabolomics expert from a recent proteomics methods forum: “If you are thinking about doing metabolomics, don’t”. I wouldn’t go that far; my advice would be: “If you are thinking about doing any omics, do so with a hypothesis in mind, appreciate the need to develop and optimise methods, and understand that data analysis is hugely complex and time-consuming”.

Let me know in the comments section how you perceive ‘omics’, how you are getting on in your research, and if you have any questions I might be able to address.

What is ‘Omics’? Part 1

Given the huge rise of ‘omics’ technologies in recent years, there is a good chance you have at least come across the term, even if you don’t understand what it encompasses. Love it or hate it, omics is essentially a term used to denote the study of the entirety of something. It is typically high-throughput, and a vast amount of data can be generated in a relatively short time frame. Omics technologies typically take the suffix ‘-ome’: microbiome, metabolome, proteome and transcriptome, to name but a few from an ever-expanding list. There are many reasons these technologies are attracting such interest from a vast array of scientific fields: they can complement just about any study, providing data on everything from DNA/RNA content and expression levels, through protein regulation, to the functional compounds (termed metabolites) that cells produce. In this blog I will give a flavour of a few of these technologies, which will hopefully provide a foundation for understanding their principles and potential.

Metagenomics

Metagenomics involves DNA- and RNA-based studies. There is a huge array of potential studies in this field of next-generation sequencing (NGS), including 16S rRNA/ITS profiling, whole-genome sequencing, and whole-transcriptome sequencing. I hasten to note that I use ‘profiling’ to describe the use of universal microbial genes (e.g. 16S rRNA for bacteria), and to clarify that this is not technically metagenomics, which is a shotgun approach and does not aim to sequence a single gene. Metagenomics will thus provide more information than simply who is there, and is not limited to specific domains, but a huge amount of data is needed. This means sequencing runs cannot be multiplexed to the same extent as in gene-specific profiling, raising the cost substantially. While the cost of sequencing continues to fall, one approach might be to use, for example, 16S rRNA profiling on all samples and deep metagenomic sequencing on the most informative 10% (one way to triage samples is sketched below). The important and often difficult enrichment of samples must also be considered when carrying out metagenomics; NGS profiling avoids this through the use of specific primers, although this amplicon-based approach remains subject to inherent PCR bias.

I find a lot of published NGS profiling data that I am sceptical about. This relates primarily to over-simplifying highly complex data, such as knowing that each patient differs hugely but amalgamating the data into pie charts that mask the variability in an attempt to show a simplified figure. The other end of the spectrum involves figures that are hugely complex and offer little information, or require readers to invest too much time in what feels like deciphering the Da Vinci Code, often the result of following an R script or similar. I am myself guilty of this over-elaborate figure production, but have refrained from taking such figures to publication.
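
For what that triage might look like in practice, here is a minimal sketch that ranks samples by the Shannon diversity of their 16S OTU counts and flags the top 10% for shotgun sequencing. The OTU table is invented, and diversity is only one possible definition of ‘most informative’.

```python
# Minimal sketch: profile everything by 16S, then pick the most
# diverse tenth of samples for deep metagenomic sequencing.
import math

otu_counts = {                       # sample -> OTU read counts (hypothetical)
    "infant_01": [900, 50, 30, 20],
    "infant_02": [250, 250, 250, 250],
    "infant_03": [990, 5, 3, 2],
    "infant_04": [400, 300, 200, 100],
}

def shannon(counts: list[int]) -> float:
    # Shannon diversity index: -sum(p * ln p) over observed OTUs
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

ranked = sorted(otu_counts, key=lambda s: shannon(otu_counts[s]), reverse=True)
n_deep = max(1, len(ranked) // 10)   # top 10%, at least one sample
print("for shotgun sequencing:", ranked[:n_deep])
```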

A couple of important notes about metagenomics before you plough in head first, only to be so snowed under by data that you won’t see the light for many a year. While acquiring the data is arguably easier than ever, the opposite is probably true of processing it. Handling NGS data really needs a bioinformatician; trust me, I have spent too many sleepless nights teaching myself the methods of processing and analysis. Briefly, while on the subject of handling the data, a couple of good pipelines exist. I like pipelines, though I am almost ashamed to say so, as any hardcore bioinformaticians out there will probably have head in hands at the thought. Still, I appreciate them for what they are: a tool allowing easy access to a range of commands to take raw data through to publication-quality figures. The two most common pipelines are Mothur, which can be run on all operating systems (even Windows, which for the newbie is a bonus), and QIIME, which can be run on Mac and Linux (and via VirtualBox on Windows). Both have hugely useful online tutorials and workflows, as well as dedicated support forums.

Another point to consider is the still relatively short read lengths in NGS profiling. These mean that analysis can typically only go to genus level (see the sketch below), which from a microbial ecology point of view is not ideal given the huge variation in function between strains of the same species. To illustrate: E. coli O157 (nasty) and E. coli K-12 (lab strain) have the same 16S rRNA gene sequence. It is therefore important not to forget the power of traditional techniques, such as culturing an organism. I like to use 16S profiling to guide my research and understand more about the ecology of preterm disease, but I also take this information further. For example, guided by my NGS data, I have cultured E. coli, which dominated samples from both healthy and diseased infants, from preterm stool. Doing so has revealed some interesting results: every baby harbours a unique strain, but the E. coli cultured from diseased babies are comparable in some respects, such as sharing the same antibiotic sensitivities. In typical circular fashion, the next phase may involve whole-genome sequencing of the isolates to detect relevant genes shared between the E. coli of diseased babies. Other omics technologies are also being applied to this complex research question, going beyond simply which bacteria are present to explore mechanisms and functional potential. This work will involve metabolomics and proteomics, and these technologies will be discussed in Part 2.
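
And for the genus-level point, here is a minimal sketch of the standard end point of 16S profiling: collapsing an OTU count table to genus level and converting to per-sample relative abundances with pandas. The counts are hypothetical; pipelines like Mothur and QIIME produce equivalent summaries.

```python
# Minimal sketch: merge OTUs sharing a genus assignment, then express
# each genus as a fraction of reads per sample.
import pandas as pd

otus = pd.DataFrame({                 # hypothetical OTU table
    "genus":     ["Escherichia", "Escherichia", "Bifidobacterium", "Klebsiella"],
    "infant_01": [520, 130, 300, 50],
    "infant_02": [40, 10, 900, 50],
})

genus_counts = otus.groupby("genus").sum()    # collapse OTUs to genus level
rel_abund = genus_counts / genus_counts.sum() # per-sample relative abundance
print(rel_abund.round(3))
```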

I hope this has given you a basic description of metagenomics. There is a wealth of information available on the internet, and please leave additional comments or related questions in the discussion. If you are reading this and thinking “there is no chance my PI will invest in omics” – with the current downward trajectory of sequencing costs, I like to think that metagenomics of huge cohorts will become achievable, even for the little guys.

The DOs and DON’Ts of science poster presentations

When done well, a scientific poster can be a hugely informative and useful research dissemination tool. However, despite the wealth of poster advice available on the internet and in books, poster presentations are typically substandard. Here I present a few DOs and DON’Ts of scientific poster presentations, from poster design to presentation. These are my personal views based on the four years I have attended conferences; I am sure there are many more suggestions, which I encourage as comments.

DO

  • Look at the poster guidelines for the specific conference.  It is a good idea to re-use posters if possible, but often poster boards are specific sizes so check any existing posters are suitable
  • Include a concise title and use the largest font size for it (for A1 I like a minimum font size of 60)
  • Have clear subheadings on each section (Introduction, Methodology, etc)
  • Keep text as minimal as possible.  It is easy to include lots of information, what requires much greater skill is condensing the appropriate information into a succinct section
  • Where possible, use images in place of text.  A picture says a thousand words
  • Ensure the font size of all text (including text embedded in images!) is large enough to be read at an appropriate distance, such as 6 ft. (for A1 I like a minimum text size of 28)
  • Leave plenty of white space rather than overloading the poster
  • Use colour
  • Be consistent with the use of punctuation, especially full stops at the end of bullet points (I would say they are not needed)
  • Include your contact details, especially an email address
  • Tweet details of your poster and its presentation date and time with the conference hashtag
  • Look engaging when standing next to your poster
  • If interest in your poster is low, talk to other presenters around you and invite them to ask you about your research 
  • Ask interested parties about their work.  Sometimes it is not clear in passing how relevant a poster is to research interests, but once you have a chat you can often find parallels and who knows, maybe even collaborations!

 

DON’T

  • Leave designing the poster until the last minute
  • Include an abstract on the poster (unless specifically requested).  This will be available through the abstract book and takes up space unnecessarily
  • Use lots of references.  Again, this can take up considerable space.  I would say 10 max, but ideally closer to 5
  • Play on your phone while standing next to your poster.  Facebook can wait
  • Talk negatively about your results
  • Leave your poster unattended for any length of time during your session.  It’s very annoying to go to a poster specifically when the presenter should be there, only for them never to show up