Print the first five characters of each line.
perl -wl -e '@f=<>; for $i (0 .. $#f) {chomp $f[$i]; $s=substr($f[$i],0,5); print $s}'
I cannot help but quote this great excuse for not doing BQSR: I do not have a definitive dbSNP for my parasites.
Rocketknight wrote:
http://seqanswers.com/forums/showthread.php?t=19069
- You should definitely not run base quality score recalibration without a dbSNP reference. BQSR works as follows:
- 1) Run through all mapped reads looking for reads mismatching the reference genome at a position not listed in dbSNP. (GATK assumes that mismatches that occur in dbSNP are real variations that are being sequenced correctly, and mismatches that aren’t in dbSNP are sequencing errors. This is a decent approximation for statistics like these.)
- 2) Compute statistics on where these mismatches occur (i.e. do they occur near the ends of reads, do they occur in certain dinucleotide pairs, do they occur on bases with low quality scores, etc.)
- 3) Using the statistics gathered above, the quality scores for all bases in your reads are rewritten with new empirical quality scores. To give an example, take the set of all CT dinucleotides with quality 20 at read position 10. A quality score of 20 indicates a 1/100 chance of error. But let’s say these dinucleotides actually only mismatch the reference at a rate of 1/500. This would mean their ’empirical’ score is a higher value than the quality score reported by the machine, and so the quality score for those bases is overwritten with a higher score (27, in this example).
- Basically, this process serves to eliminate systemic biases in quality score assignment from sequencers. It’s quite helpful, but it’s absolutely dependent on having an accurate database of polymorphisms for your organism. If you don’t have that database, GATK can’t tell the difference between polymorphisms and sequencing errors, and so it’ll assume all mismatches are sequencing errors, which will cause it to assign incredibly low quality scores to all your bases. This will ruin all downstream analysis. Don’t do it!
I will illustrate how to use it, if I can …
usage: freebayes [OPTION] ... [BAM FILE] ...

Bayesian small polymorphism discovery.

parameters:
-h --help  Prints this help dialog.

input and output:
-b --bam FILE  Add FILE to the set of BAM files to be analyzed.
-c --stdin  Read BAM input on stdin.
-v --vcf FILE  Output VCF-format results to FILE.
-f --fasta-reference FILE  Use FILE as the reference sequence for analysis. An index file (FILE.fai) will be created if none exists. If neither --targets nor --region are specified, FreeBayes will analyze every position in this reference.
-t --targets FILE  Limit analysis to targets listed in the BED-format FILE.
-r --region <chrom>:<start_position>..<end_position>  Limit analysis to the specified region, 0-base coordinates, end_position not included (same as BED format).
-s --samples FILE  Limit analysis to samples listed (one per line) in the FILE. By default FreeBayes will analyze all samples in its input BAM files.
--populations FILE  Each line of FILE should list a sample and a population which it is part of. The population-based bayesian inference model will then be partitioned on the basis of the populations.
-A --cnv-map FILE  Read a copy number map from the BED file FILE, which has the format: reference sequence, start, end, sample name, copy number ... for each region in each sample which does not have the default copy number as set by --ploidy.
-L --trace FILE  Output an algorithmic trace to FILE.
--failed-alleles FILE  Write a BED file of the analyzed positions which do not pass --pvar to FILE.
-@ --variant-input VCF  Use variants reported in VCF file as input to the algorithm. Variants in this file will be treated as putative variants even if there is not enough support in the data to pass input filters.
-l --only-use-input-alleles  Only provide variant calls and genotype likelihoods for sites and alleles which are provided in the VCF input, and provide output in the VCF for all input alleles, not just those which have support in the data.
--haplotype-basis-alleles VCF  When specified, only variant alleles provided in this input VCF will be used for the construction of complex or haplotype alleles.

reporting:
-P --pvar N  Report sites if the probability that there is a polymorphism at the site is greater than N. default: 0.0001
-_ --show-reference-repeats  Calculate and show information about reference repeats in the VCF output.

population model:
-T --theta N  The expected mutation rate or pairwise nucleotide diversity among the population under analysis. This serves as the single parameter to the Ewens Sampling Formula prior model. default: 0.001
-p --ploidy N  Sets the default ploidy for the analysis to N. default: 2
-J --pooled  Assume that samples result from pooled sequencing. When using this flag, set --ploidy to the number of alleles in each sample.

reference allele:
-Z --use-reference-allele  This flag includes the reference allele in the analysis as if it is another sample from the same population.
-H --diploid-reference  If using the reference sequence as a sample (-Z), treat it as diploid. default: false (reference is haploid)
--reference-quality MQ,BQ  Assign mapping quality of MQ to the reference allele at each site and base quality of BQ. default: 100,60

allele scope:
-I --no-snps  Ignore SNP alleles.
-i --no-indels  Ignore insertion and deletion alleles.
-X --no-mnps  Ignore multi-nucleotide polymorphisms, MNPs.
-u --no-complex  Ignore complex events (composites of other classes).
-n --use-best-n-alleles N  Evaluate only the best N SNP alleles, ranked by sum of supporting quality scores. (Set to 0 to use all; default: all)
-E --max-complex-gap N  Allow complex alleles with contiguous embedded matches of up to this length.

indel realignment:
-O --left-align-indels  Left-realign and merge gaps embedded in reads. default: false

input filters:
-4 --use-duplicate-reads  Include duplicate-marked alignments in the analysis. default: exclude duplicates
-m --min-mapping-quality Q  Exclude alignments from analysis if they have a mapping quality less than Q. default: 30
-q --min-base-quality Q  Exclude alleles from analysis if their supporting base quality is less than Q. default: 20
-R --min-supporting-quality MQ,BQ  In order to consider an alternate allele, at least one supporting alignment must have mapping quality MQ, and one supporting allele must have base quality BQ. default: 0,0, unset
-Q --mismatch-base-quality-threshold Q  Count mismatches toward --read-mismatch-limit if the base quality of the mismatch is >= Q. default: 10
-U --read-mismatch-limit N  Exclude reads with more than N mismatches where each mismatch has base quality >= mismatch-base-quality-threshold. default: ~unbounded
-z --read-max-mismatch-fraction N  Exclude reads with more than N [0,1] fraction of mismatches where each mismatch has base quality >= mismatch-base-quality-threshold. default: 1.0
-$ --read-snp-limit N  Exclude reads with more than N base mismatches, ignoring gaps with quality >= mismatch-base-quality-threshold. default: ~unbounded
-e --read-indel-limit N  Exclude reads with more than N separate gaps. default: ~unbounded
-0 --no-filters  Do not use any input base and mapping quality filters. Equivalent to -m 0 -q 0 -R 0 -S 0
-x --indel-exclusion-window  Ignore portions of alignments this many bases from a putative insertion or deletion allele. default: 0
-F --min-alternate-fraction N  Require at least this fraction of observations supporting an alternate allele within a single individual in order to evaluate the position. default: 0.0
-C --min-alternate-count N  Require at least this count of observations supporting an alternate allele within a single individual in order to evaluate the position. default: 1
-3 --min-alternate-qsum N  Require at least this sum of quality of observations supporting an alternate allele within a single individual in order to evaluate the position. default: 0
-G --min-alternate-total N  Require at least this count of observations supporting an alternate allele within the total population in order to use the allele in analysis. default: 1
-! --min-coverage N  Require at least this coverage to process a site. default: 0

bayesian priors:
-Y --no-ewens-priors  Turns off the Ewens' Sampling Formula component of the priors.
-k --no-population-priors  Equivalent to --pooled --no-ewens-priors
-w --hwe-priors  Use the probability of the combination arising under HWE given the allele frequency as estimated by observation frequency.

observation prior expectations:
-V --binomial-obs-priors  Incorporate expectations about observations into the priors. Uses read placement probability, strand balance probability, and read position (5'-3') probability.
-a --allele-balance-priors  Use aggregate probability of observation balance between alleles as a component of the priors. Best for observations with minimal inherent reference bias.

algorithmic features:
-M --site-selection-max-iterations N  Uses a hill-climbing algorithm to search posterior space for N iterations to determine if the site should be evaluated. Set to 0 to prevent use of this algorithm for site selection, and to a low integer for improved site selection at a slight performance penalty. default: 5.
-B --genotyping-max-iterations N  Iterate no more than N times during genotyping step. default: 25.
--genotyping-max-banddepth N  Integrate no deeper than the Nth best genotype by likelihood when genotyping. default: 6.
-W --posterior-integration-limits N,M  Integrate all genotype combinations in our posterior space which include no more than N samples with their Mth best data likelihood. default: 1,3.
-K --no-permute  Do not scale prior probability of genotype combination given allele frequency by the number of permutations of included genotypes.
-N --exclude-unobserved-genotypes  Skip sample genotypings for which the sample has no supporting reads.
-S --genotype-variant-threshold N  Limit posterior integration to samples where the second-best genotype likelihood is no more than log(N) from the highest genotype likelihood for the sample. default: ~unbounded
-j --use-mapping-quality  Use mapping quality of alleles when calculating data likelihoods.
-D --read-dependence-factor N  Incorporate non-independence of reads by scaling successive observations by this factor during data likelihood calculations. default: 0.9
-= --no-marginals  Do not calculate the marginal probability of genotypes. Saves time and improves scaling performance in large populations.

debugging:
-d --debug  Print debugging output.
-dd  Print more verbose debugging output (requires "make DEBUG")

author: Erik Garrison, Marth Lab, Boston College, 2010, 2011
date: 2012-04-27
version: 0.9.5
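For what it is worth, here is a minimal invocation built only from the flags above; the file names are hypothetical and the defaults shown in the help (e.g. -m 30, -q 20) still apply:

```shell
freebayes -f ref.fasta -p 2 -b aln.bam -v out.vcf
```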
This deletes lines that start with > or SQ, or that contain //.
perl -i -ne 'next if m,^\>|^SQ|\/\/,; print' *nof.embl
I am just starting to use GATK: it has become much better in the past few years, with great examples and explanations. Many tools may be great, but most of them do not provide support. (Maintaining programs is no fun.)
java -Xmx2g -jar ~/hi1/GenomeAnalysisTK.jar
---------------------------------------------------------------------------------
The Genome Analysis Toolkit (GATK) v1.5-32-g2761da9, Compiled 2012/04/23 16:48:19
Copyright (c) 2010 The Broad Institute
Please view our documentation at http://www.broadinstitute.org/gsa/wiki
For support, please view our support site at http://getsatisfaction.com/gsa
---------------------------------------------------------------------------------
usage: java -jar GenomeAnalysisTK.jar -T <analysis_type> [-args <arg_file>] [-I <input_file>] [-rbs <read_buffer_size>] [-et <phone_home>] [-K <gatk_key>] [-rf <read_filter>] [-L <intervals>] [-XL <excludeIntervals>] [-isr <interval_set_rule>] [-im <interval_merging>] [-R <reference_sequence>] [-ndrs] [-dt <downsampling_type>] [-dfrac <downsample_to_fraction>] [-dcov <downsample_to_coverage>] [-baq <baq>] [-baqGOP <baqGapOpenPenalty>] [-PF <performanceLog>] [-OQ] [-BQSR <BQSR>] [-DBQ <defaultBaseQualities>] [-S <validation_strictness>] [-U <unsafe>] [-nt <num_threads>] [-bfh <num_bam_file_handles>] [-rgbl <read_group_black_list>] [-ped <pedigree>] [-pedString <pedigreeString>] [-pedValidationType <pedigreeValidationType>] [-l <logging_level>] [-log <log_to_file>] [-h]

-T,--analysis_type  Type of analysis to run
-args,--arg_file  Reads arguments from the specified file
-I,--input_file  SAM or BAM file(s)
-rbs,--read_buffer_size  Number of reads per SAM file to buffer in memory
-et,--phone_home  What kind of GATK run report should we generate? STANDARD is the default, can be NO_ET so nothing is posted to the run repository. Please see broadinstitute.org/gsa/wiki/index.php/Phone_home for details. (NO_ET|STANDARD|STDOUT)
-K,--gatk_key  GATK Key file. Required if running with -et NO_ET. Please see broadinstitute.org/gsa/wiki/index.php/Phone_home for details.
-rf,--read_filter  Specify filtration criteria to apply to each read individually
-L,--intervals  One or more genomic intervals over which to operate. Can be explicitly specified on the command line or in a file (including a rod file)
-XL,--excludeIntervals  One or more genomic intervals to exclude from processing. Can be explicitly specified on the command line or in a file (including a rod file)
-isr,--interval_set_rule  Indicates the set merging approach the interval parser should use to combine the various -L or -XL inputs (UNION|INTERSECTION)
-im,--interval_merging  Indicates the interval merging rule we should use for abutting intervals (ALL|OVERLAPPING_ONLY)
-R,--reference_sequence  Reference sequence file
-ndrs,--nonDeterministicRandomSeed  Makes the GATK behave non deterministically, that is, the random numbers generated will be different in every run
-dt,--downsampling_type  Type of reads downsampling to employ at a given locus. Reads will be selected randomly to be removed from the pile based on the method described here (NONE|ALL_READS|BY_SAMPLE)
-dfrac,--downsample_to_fraction  Fraction [0.0-1.0] of reads to downsample to
-dcov,--downsample_to_coverage  Coverage [integer] to downsample to at any given locus; note that downsampled reads are randomly selected from all possible reads at a locus
-baq,--baq  Type of BAQ calculation to apply in the engine (OFF|CALCULATE_AS_NECESSARY|RECALCULATE)
-baqGOP,--baqGapOpenPenalty  BAQ gap open penalty (Phred Scaled). Default value is 40. 30 is perhaps better for whole genome call sets
-PF,--performanceLog  If provided, a GATK runtime performance log will be written to this file
-OQ,--useOriginalQualities  If set, use the original base quality scores from the OQ tag when present instead of the standard scores
-BQSR,--BQSR  Filename for the input covariates table recalibration .csv file which enables on-the-fly base quality score recalibration
-DBQ,--defaultBaseQualities  If reads are missing some or all base quality scores, this value will be used for all base quality scores
-S,--validation_strictness  How strict should we be with validation (STRICT|LENIENT|SILENT)
-U,--unsafe  If set, enables unsafe operations: nothing will be checked at runtime. For expert users only who know what they are doing. We do not support usage of this argument. (ALLOW_UNINDEXED_BAM|ALLOW_UNSET_BAM_SORT_ORDER|NO_READ_ORDER_VERIFICATION|ALLOW_SEQ_DICT_INCOMPATIBILITY|ALL)
-nt,--num_threads  How many threads should be allocated to running this analysis
-bfh,--num_bam_file_handles  The total number of BAM file handles to keep open simultaneously
-rgbl,--read_group_black_list  Filters out read groups matching <TAG>:<STRING> or a .txt file containing the filter strings one per line.
-ped,--pedigree  Pedigree files for samples
-pedString,--pedigreeString  Pedigree string for samples
-pedValidationType,--pedigreeValidationType  How strict should we be in validating the pedigree information? (STRICT|SILENT)
-l,--logging_level  Set the minimum level of logging, i.e. setting INFO gets you INFO up to FATAL, setting ERROR gets you ERROR and FATAL level logging.
-log,--log_to_file  Set the logging location
-h,--help  Generate this help message

alignment: Analyses used to validate the correctness and performance of the BWA Java bindings.
  Align - Aligns reads to a given reference using Heng Li's BWA aligner, presenting the resulting alignments in SAM or BAM format.
  AlignmentValidation - Validates consistency of the aligner interface by taking reads already aligned by BWA in a BAM file, stripping them of their alignment data, realigning them, and making sure one of the best resulting realignments matches the original alignment from the input file.
  CountBestAlignments - Counts the number of best alignments as presented by BWA and outputs a histogram of number of placements vs.
annotator:
  VariantAnnotator - Annotates variant calls with context information.
beagle:
  BeagleOutputToVCF - Takes files produced by Beagle imputation engine and creates a vcf with modified annotations.
  ProduceBeagleInput - Converts the input VCF into a format accepted by the Beagle imputation/analysis program.
  VariantsToBeagleUnphased - Produces an input file to Beagle imputation engine, listing unphased, hard-called genotypes for a single sample in input variant file.
coverage:
  CallableLoci - Emits a data file containing information about callable, uncallable, poorly mapped, and other parts of the genome
  CompareCallableLoci - Test routine for new VariantContext object
  DepthOfCoverage - Toolbox for assessing sequence coverage by a wide array of metrics, partitioned by sample, read group, or library
  GCContentByInterval - Walks along reference and calculates the GC content for each interval.
diagnostics:
  ErrorRatePerCycle - Computes the read error rate per position in read (in the original 5'->3' orientation that the read had coming off the machine). Emits a GATKReport containing readgroup, cycle, mismatches, counts, qual, and error rate for each read group in the input BAMs, FOR ONLY THE FIRST OF PAIR READS.
  ReadGroupProperties - Emits a GATKReport containing read group, sample, library, platform, center, sequencing data, paired end status, simple read type name (e.g.
  ReadLengthDistribution - Outputs the read lengths of all the reads in a file.
diffengine:
  DiffObjects - A generic engine for comparing tree-structured objects
examples:
  CoverageBySample - Computes the coverage per sample.
  GATKPaperGenotyper - A simple Bayesian genotyper that outputs a text based call format.
fasta:
  FastaAlternateReferenceMaker - Generates an alternative reference sequence over the specified interval.
  FastaReferenceMaker - Renders a new reference in FASTA format consisting of only those loci provided in the input data set.
  FastaStats - Calculates basic statistics about the reference sequence itself
filters:
  VariantFiltration - Filters variant calls using a number of user-selectable, parameterizable criteria.
genotyper:
  UGBoundAF - Created by IntelliJ IDEA.
  UnifiedGenotyper - A variant caller which unifies the approaches of several disparate callers -- works for single-sample and multi-sample data.
indels:
  IndelRealigner - Performs local realignment of reads based on misalignments due to the presence of indels.
  LeftAlignIndels - Left-aligns indels from reads in a bam file.
  RealignerTargetCreator - Emits intervals for the Local Indel Realigner to target for realignment.
  SomaticIndelDetector - Tool for calling indels in Tumor-Normal paired sample mode; this tool supports single-sample mode as well, but this latter functionality is now superseded by UnifiedGenotyper.
phasing:
  PhaseByTransmission - Computes the most likely genotype combination and phases trios and parent/child pairs
  ReadBackedPhasing - Walks along all variant ROD loci, caching a user-defined window of VariantContext sites, and then finishes phasing them when they go out of range (using upstream and downstream reads).
qc:
  CountIntervals - Counts the number of contiguous regions the walker traverses over.
  CountLoci - Walks over the input data set, calculating the total number of covered loci for diagnostic purposes.
  CountMales - Walks over the input data set, calculating the number of reads seen for diagnostic purposes.
  CountReads - Walks over the input data set, calculating the number of reads seen for diagnostic purposes.
  CountRODs - Prints out counts of the number of reference ordered data objects encountered.
  CountRODsByRef - Prints out counts of the number of reference ordered data objects encountered.
  CycleQuality - Walks over the input data set, calculating the number of reads seen for diagnostic purposes.
  ErrorThrowing - A walker that simply throws errors.
  PrintLocusContext - At each locus in the input data set, prints the reference base, genomic location, and all aligning reads in a compact but human-readable form.
  QCRef - Prints out counts of the number of reference ordered data objects encountered.
  ReadClippingStats - Walks over the input reads, printing out statistics about the read length, number of clipping events, and length of the clipping to the output stream.
  ReadValidation - Checks all reads passed through the system to ensure that the same read is not passed to the walker multiple consecutive times.
  RodSystemValidation - A walker for validating (in the style of validating pile-up) the ROD system.
  ValidatingPileup - At every locus in the input set, compares the pileup data (reference base, aligned base from each overlapping read, and quality score) to the reference pileup data generated by samtools.
recalibration:
  CountCovariates - First pass of the base quality score recalibration -- generates a recalibration table based on various user-specified covariates (such as reported quality score, cycle, and dinucleotide).
  TableRecalibration - Second pass of the base quality score recalibration -- uses the table generated by CountCovariates to update the base quality scores of the input bam file using a sequential table calculation, making the base quality scores more accurately reflect the actual quality of the bases as measured by reference mismatch rate.
targets:
  DiagnoseTargets - Short one line description of the walker.
validation:
  GenotypeAndValidate - Genotypes a dataset and validates the calls of another dataset using the Unified Genotyper.
  ValidationAmplicons - Creates FASTA sequences for use in Sequenom or PCR utilities for site amplification and subsequent validation
validationsiteselector:
  ValidationSiteSelector - Randomly selects VCF records according to specified options.
varianteval:
  VariantEval - General-purpose tool for variant evaluation (% in dbSNP, genotype concordance, Ti/Tv ratios, and a lot more)
variantrecalibration:
  ApplyRecalibration - Applies cuts to the input vcf file (by adding filter lines) to achieve the desired novel truth sensitivity levels which were specified during VariantRecalibration
  VariantRecalibrator - Create a Gaussian mixture model by looking at the annotations values over a high quality subset of the input call set and then evaluate all input variants.
variantutils:
  CombineVariants - Combines VCF records from different sources.
  FilterLiftedVariants - Filters a lifted-over VCF file for ref bases that have been changed.
  LeftAlignVariants - Left-aligns indels from a variants file.
  LiftoverVariants - Lifts a VCF file over from one build to another.
  RandomlySplitVariants - Takes a VCF file, randomly splits variants into two different sets, and outputs 2 new VCFs with the results.
  SelectVariants - Selects variants from a VCF source.
  ValidateVariants - Strictly validates a variants file.
  VariantsToPed - Yet another VCF to Ped converter.
  VariantsToTable - Emits specific fields from a VCF file to a tab-delimited table
  VariantsToVCF - Converts variants from other file formats to VCF format.
  VariantValidationAssessor - Annotates a validation (from Sequenom for example) VCF with QC metrics (HW-equilibrium, % failed probes)
walkers: Runs the desired analysis on the smallest meaningful subset of the dataset.
  ClipReads - This tool provides simple, powerful read clipping capabilities to remove low quality strings of bases, sections of reads, and reads containing user-provided sequences.
  FindReadsWithNames - Renders, in SAM/BAM format, all reads from the input data set in the order in which they appear in the input file.
  FlagStat - A reimplementation of the 'samtools flagstat' subcommand in the GATK.
  Pileup - Prints the alignment in the pileup format.
  PrintReads - Renders, in SAM/BAM format, all reads from the input data set in the order in which they appear in the input file.
  PrintRODs - Prints out all of the RODs in the input data set.
  SplitSamFile - Divides the input data set into separate BAM files, one for each sample in the input data set.
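To connect this to the BQSR discussion above: the two recalibration walkers are run back to back. A sketch from my memory of the GATK 1.x documentation (file names hypothetical; double-check the flags with -T CountCovariates --help before copying):

```shell
java -Xmx2g -jar GenomeAnalysisTK.jar -T CountCovariates -R ref.fasta -I aln.bam \
    -knownSites dbsnp.vcf -cov ReadGroupCovariate -cov QualityScoreCovariate \
    -cov CycleCovariate -cov DinucCovariate -recalFile recal.csv
java -Xmx2g -jar GenomeAnalysisTK.jar -T TableRecalibration -R ref.fasta -I aln.bam \
    -recalFile recal.csv -o recal.bam
```

And per the forum quote above, skip this entirely if you have no dbSNP for your organism.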
This SNP calling tool might be good, but it is hard to get started with.
http://compbio.bccrc.ca/software/snvmix/
Its documentation is horrible, with no practical explanations.
This might work for beginners who are just starting to learn English. But unfortunately, it is just a fake English school whose quality is dismal. I do not think they have proper oversight by skilled people. Their web postings are full of English errors: better than those of many Japanese, but horrible for something claiming to be an English school. It might be a cheap alternative to more expensive schools in real Anglophone countries, but people should know its limits. Or maybe they are just showing us that the perfect is the enemy of the good: it is OK to suck, and you can overcome the mental barriers and inhibitions that keep you from speaking and using English. But do not go there if you have higher standards and motivation.
Transform a FASTA file with one continuous long line into a more amenable FASTA with 100 bp lines.
bigfasta2short.pl
#!/usr/bin/perl
use strict;
use warnings;

# usage: perl bigfasta2short.pl long.fasta > short.fasta
my ($in) = @ARGV;
open my $F1, '<', $in or die "cannot open $in: $!";

my ($name, $seq);
while (my $s = <$F1>) {
    if ($s =~ /^>/) {
        print_record($name, $seq) if $seq;   # flush the previous record
        $name = $s;                          # header line, newline still attached
        $seq  = '';
        next;
    }
    chomp $s;
    $seq .= uc $s;
}
print_record($name, $seq) if $seq;           # flush the last record
close $F1;

sub print_record {
    my ($name, $seq) = @_;
    print $name;
    # print the sequence in 100 bp chunks
    for (my $k = 0; $k < length $seq; $k += 100) {
        print substr($seq, $k, 100), "\n";
    }
}
Transform an EMBL file with bases into a FASTA: strip the annotation from all the EMBL files in a directory.
usage: perl embl2fasta.pl
#!/usr/bin/perl
use strict;

my @A = `find . -maxdepth 1 -name '*embl'`;
print "@A\n";
for my $i (@A) {
    chomp $i;                                # strip the newline left by find
    open F1, $i or die "cannot open $i: $!";
    my $out = $i;
    $out =~ s/embl/fasta/g;
    open R1, ">$out" or die "cannot write $out: $!";
    my ($id) = ($i =~ m/.*\.(\d*)\..*/);     # take the numeric part of the file name as the id
    print R1 ">$id\n";
    while (my $s = <F1>) {
        next if ($s =~ /^[A-Z]|\/\//);       # skip annotation lines and the // terminator
        $s =~ s/[0-9]|\s+//g;                # strip base numbering and whitespace
        print R1 "$s\n";
    }
    close F1;
    close R1;
}
Check online journal advertisements. I found my first postdoc in Nature, though I have yet to write a paper for it.
Check the web pages of universities or institutes for job openings. It is the easiest way for them to find candidates. Very often they already have some candidates in mind, and the job interviews can be just a formal process they must go through.
Job opening information through people you know is the most reliable source of information, and very often you do not even have to go through an interview process before you get accepted.
Scholarships: if you do not want to be bound to a given country, do not take a scholarship. Many Japanese scholarships expect you to come back to Japan. Also, you probably have to write the application in Japanese. And often they require that you have not lived in a foreign country for more than a certain period. So if you studied in another country and do not feel like writing Japanese applications, you can just focus on getting an opening like everyone else.
Surprisingly, you will be accepted as long as you have sufficient qualifications, but it is very important to understand the task you are expected to do.
There are many good groups, but getting into these groups is tough. And there are also many groups you do not want to get involved with.
perl shuffleGZ.pl 1.fastq.gz 2.fastq.gz mate.fastq

#!/usr/bin/perl
# Interleave two gzipped FASTQ files, one four-line read record at a time.
my ($filenameA, $filenameB, $filenameOut) = @ARGV;
open my $FILEA, "gzip -dc $filenameA |" or die "cannot read $filenameA: $!";
open my $FILEB, "gzip -dc $filenameB |" or die "cannot read $filenameB: $!";
open my $OUTFILE, ">", $filenameOut or die "cannot write $filenameOut: $!";
while (my $line = <$FILEA>) {
    print $OUTFILE $line;                       # first line of the read from file A
    print $OUTFILE scalar <$FILEA> for 1 .. 3;  # remaining three lines of the A read
    print $OUTFILE scalar <$FILEB> for 1 .. 4;  # the whole mate read from file B
}
It might be time to move on to another platform. BUT this just turned out to be my misconception…