This is a quick and dirty (i.e. no adaptor/quality trimming) assessment of all of our Crassostrea virginica bisulfite sequencing efforts to date, in order to get a rough idea of methylation mapping efficiency, per this GitHub issue. Ran Bismark on Mox on a series of subsets of the reads:
- 100k
- 500k
- 1M
- 2M
- 5M
- 10M
This was run as both paired-end and single-end: the paired-end analysis didn't actually produce a mapping efficiency (though it does report alignment percentages), while the single-end analysis did yield mapping efficiencies. The single-end analysis used just the R1 reads.
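For the paired-end runs, a mapping efficiency can still be derived from the Bismark paired-end reports as unique alignments divided by sequence pairs analysed. A minimal sketch, assuming the standard *_PE_report.txt files are present in each subset directory (the filename pattern and report wording are assumptions based on default Bismark output):
# Hedged sketch: derive a mapping efficiency for each paired-end subset.
# Assumes standard Bismark PE report files named *_PE_report.txt.
for report in subset_*/*_PE_report.txt
do
  awk -F':' '
    /Sequence pairs analysed in total/ {total = $2}
    /paired-end alignments with a unique best hit/ {unique = $2}
    END {
      if (total > 0)
        printf "%s\tMapping efficiency:\t%.1f%%\n", FILENAME, 100 * unique / total
    }' "${report}"
done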
SBATCH scripts are linked and pasted below:
Paired end script:
#!/bin/bash
## Job Name
#SBATCH --job-name=bismark
## Allocation Definition
#SBATCH --account=coenv
#SBATCH --partition=coenv
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (days-hours:minutes:seconds format)
#SBATCH --time=10-00:00:00
## Memory per node
#SBATCH --mem=120G
##turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=samwhite@uw.edu
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/scrubbed/samwhite/outputs/20190222_cvirginica_pe_bismark
# Load Python Mox module for Python module availability
module load intel-python3_2017
# Document programs in PATH (primarily for program version ID)
date >> system_path.log
echo "" >> system_path.log
echo "System PATH for $SLURM_JOB_ID" >> system_path.log
echo "" >> system_path.log
printf "%0.s-" {1..10} >> system_path.log
echo ${PATH} | tr : \\n >> system_path.log
# Directories and programs
wd=$(pwd)
bismark_dir="/gscratch/srlab/programs/Bismark-0.19.0"
bowtie2_dir="/gscratch/srlab/programs/bowtie2-2.3.4.1-linux-x86_64/"
samtools="/gscratch/srlab/programs/samtools-1.9/samtools"
reads_dir="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/"
## genomes
genome="/gscratch/srlab/sam/data/C_virginica/genomes/"
## An array of subsets of reads to use in bismark:
## 100k, 500k, 1M, 2M, 5M, 10M
subset_array=(100000 500000 1000000 2000000 5000000 10000000)
## FastQ Files
R1="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_all_pe_R1.fastq.gz"
R2="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_all_pe_R2.fastq.gz"
## FastQ files lists
R1_list="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_pe_all_R1.list"
R2_list="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_pe_all_R2.list"
# Check for existence of previous concatenation
# If they exist, delete them
for file in ${R1} ${R1_list} ${R2} ${R2_list}
do
if [ -e ${file} ]; then
rm ${file}
fi
done
# Concatenate R1 reads and generate lists of FastQs
for fastq in ${reads_dir}*R1*.gz
do
echo ${fastq} >> ${R1_list}
cat ${fastq} >> ${R1}
done
# Concatenate R2 reads and generate lists of FastQs
for fastq in ${reads_dir}*R2*.gz
do
echo ${fastq} >> ${R2_list}
cat ${fastq} >> ${R2}
done
# Run bismark using bisulfite-converted genome
for subset in ${subset_array[@]}
do
mkdir subset_${subset}
cd subset_${subset}
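# bismark options used below:
#   --score_min L,0,-0.6 relaxes the minimum alignment score threshold
#     (Bismark's default is L,0,-0.2)
#   -u limits alignment to the first N read pairs (the subset size)
#   -p sets the number of Bowtie2 threads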
${bismark_dir}/bismark \
--path_to_bowtie ${bowtie2_dir} \
--genome ${genome} \
--score_min L,0,-0.6 \
-u ${subset} \
-p 28 \
-1 ${R1} \
-2 ${R2} \
2> subset_${subset}_summary.txt
# Deduplicate bam files
${bismark_dir}/deduplicate_bismark \
--bam \
--paired \
*.bam
# Methylation extraction
${bismark_dir}/bismark_methylation_extractor \
--bedgraph \
--counts \
--scaffolds \
--remove_spaces \
--multicore 28 \
--buffer_size 75% \
*deduplicated.bam
# Bismark processing report
${bismark_dir}/bismark2report
#Bismark summary report
${bismark_dir}/bismark2summary
# Sort files for methylkit and IGV
find *deduplicated.bam \
| xargs basename -s .bam \
| xargs -I{} ${samtools} \
sort \
--threads 28 \
{}.bam \
-o {}.sorted.bam
# Index sorted files for IGV
# The "-@ 56" below specifies number of CPU threads to use.
find *.sorted.bam \
| xargs basename -s .sorted.bam \
| xargs -I{} ${samtools} \
index -@ 28 \
{}.sorted.bam
cd ${wd}
done
Single end script:
#!/bin/bash
## Job Name
#SBATCH --job-name=bismark
## Allocation Definition
#SBATCH --account=coenv
#SBATCH --partition=coenv
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (days-hours:minutes:seconds format)
#SBATCH --time=10-00:00:00
## Memory per node
#SBATCH --mem=120G
##turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=samwhite@uw.edu
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/scrubbed/samwhite/outputs/20190222_cvirginica_se_bismark
# Load Python Mox module for Python module availability
module load intel-python3_2017
# Document programs in PATH (primarily for program version ID)
date >> system_path.log
echo "" >> system_path.log
echo "System PATH for $SLURM_JOB_ID" >> system_path.log
echo "" >> system_path.log
printf "%0.s-" {1..10} >> system_path.log
echo ${PATH} | tr : \\n >> system_path.log
# Directories and programs
wd=$(pwd)
bismark_dir="/gscratch/srlab/programs/Bismark-0.19.0"
bowtie2_dir="/gscratch/srlab/programs/bowtie2-2.3.4.1-linux-x86_64/"
samtools="/gscratch/srlab/programs/samtools-1.9/samtools"
reads_dir="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/"
## genomes
genome="/gscratch/srlab/sam/data/C_virginica/genomes/"
## An array of subsets of reads to use in bismark:
## 100k, 500k, 1M, 2M, 5M, 10M
subset_array=(100000 500000 1000000 2000000 5000000 10000000)
## FastQ Files
se_reads="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_all_R1.fastq.gz"
## FastQ files lists
R1_list="/gscratch/scrubbed/samwhite/data/C_virginica/BSseq/cvir_bsseq_all_R1.list"
# Check for existence of previous concatenation
# If they exist, delete them
for file in ${se_reads} ${R1_list}
do
if [ -e ${file} ]; then
rm ${file}
fi
done
# Concatenate R1 reads and generate lists of FastQs
for fastq in ${reads_dir}*R1*.gz
do
echo ${fastq} >> ${R1_list}
cat ${fastq} >> ${se_reads}
done
# Run bismark using bisulfite-converted genome
for subset in ${subset_array[@]}
do
mkdir subset_${subset}
cd subset_${subset}
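# bismark options used below:
#   --score_min L,0,-0.6 relaxes the minimum alignment score threshold
#     (Bismark's default is L,0,-0.2)
#   -u limits alignment to the first N reads (the subset size)
#   -p sets the number of Bowtie2 threads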
${bismark_dir}/bismark \
--path_to_bowtie ${bowtie2_dir} \
--genome ${genome} \
--score_min L,0,-0.6 \
-u ${subset} \
-p 28 \
${se_reads} \
2> subset_${subset}_summary.txt
# Deduplicate bam files
${bismark_dir}/deduplicate_bismark \
--bam \
--single \
*.bam
# Methylation extraction
${bismark_dir}/bismark_methylation_extractor \
--bedgraph \
--counts \
--scaffolds \
--remove_spaces \
--multicore 28 \
--buffer_size 75% \
*deduplicated.bam
# Bismark processing report
${bismark_dir}/bismark2report
#Bismark summary report
${bismark_dir}/bismark2summary
# Sort files for methylkit and IGV
find *deduplicated.bam \
| xargs basename -s .bam \
| xargs -I{} ${samtools} \
sort \
--threads 28 \
{}.bam \
-o {}.sorted.bam
# Index sorted files for IGV
# The "-@ 56" below specifies number of CPU threads to use.
find *.sorted.bam \
| xargs basename -s .sorted.bam \
| xargs -I{} ${samtools} \
index -@ 28 \
{}.sorted.bam
cd ${wd}
done
RESULTS
Output folder:
Bedgraphs for each subset:
- There is a bedgraph for each single-end subset. I will not link them all. Here’s an example for the 10,000,000 subset:
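For orientation, the Bismark bedGraph output has four columns: scaffold, start, end, and percent methylation at that position. A quick way to peek at one (the filename below is illustrative, not the actual output name):
# Hedged sketch: inspect the first few lines of a Bismark bedGraph.
# The filename is illustrative; actual names follow the input FastQ/BAM name.
zcat subset_10000000/cvir_bsseq_all_R1.bedGraph.gz | head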
Code used to pull mapping efficiencies:
find ./*_cvirginica_* -name "subset*.txt" -print0 | xargs -0 grep "Mapping efficiency"
./20190222_cvirginica_se_bismark/subset_500000/subset_500000_summary.txt:Mapping efficiency: 18.6%
./20190222_cvirginica_se_bismark/subset_10000000/subset_10000000_summary.txt:Mapping efficiency: 16.1%
./20190222_cvirginica_se_bismark/subset_1000000/subset_1000000_summary.txt:Mapping efficiency: 19.2%
./20190222_cvirginica_se_bismark/subset_5000000/subset_5000000_summary.txt:Mapping efficiency: 18.6%
./20190222_cvirginica_se_bismark/subset_100000/subset_100000_summary.txt:Mapping efficiency: 18.5%
./20190222_cvirginica_se_bismark/subset_2000000/subset_2000000_summary.txt:Mapping efficiency: 19.5%
Single-end mapping efficiencies:
Reads Subset | Mapping Efficiency (%) |
---|---|
100000 | 18.5 |
500000 | 18.6 |
1000000 | 19.2 |
2000000 | 19.5 |
5000000 | 18.6 |
10000000 | 16.1 |
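The CpG methylation percentages in the table below were taken from the Bismark reports for each subset. They can be pulled with a grep along the same lines as above; a minimal sketch, assuming the values are in the per-subset report files (the filename pattern is an assumption):
# Hedged sketch: pull CpG methylation percentages from the Bismark reports.
find ./*_cvirginica_* -name "*report.txt" -print0 \
| xargs -0 grep "C methylated in CpG context"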
Read Type (PE or SE) | Reads Subset | C methylated in CpG context (%) |
---|---|---|
pe | 100000 | 73.3 |
pe | 500000 | 73.6 |
pe | 1000000 | 73.0 |
pe | 2000000 | 72.5 |
pe | 5000000 | 68.9 |
pe | 10000000 | 60.6 |
se | 100000 | 75.7 |
se | 500000 | 75.7 |
se | 1000000 | 75.5 |
se | 2000000 | 75.5 |
se | 5000000 | 72.0 |
se | 10000000 | 75.4 |