At 100GB per whole genome, storage and computation are persistent challenges. Storing and processing the data so that any piece can be accessed on demand is one challenge; we have built multiple mechanisms for this, including MapReduce-based stacks that scale as more genomes are added. Engineering algorithms to run fast is another. Typical processing of the raw data takes tens of hours on a single machine, and faster turnaround is usually achieved only with special-purpose hardware or larger clusters. We have worked extensively on extending single machines with graphics processing units (GPUs) to achieve end-to-end processing in just a few hours.
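To make the MapReduce pattern concrete, here is a minimal, hypothetical sketch of the style of computation such a stack parallelizes: a map phase emits key-value pairs per record, and a reduce phase aggregates them by key. The record format and field names below are illustrative assumptions, not the actual pipeline described above.

```python
from collections import defaultdict

# Hypothetical example: counting aligned reads per chromosome in the
# MapReduce style. In a real stack, map tasks run in parallel across
# shards of the input and the reduce phase aggregates their output.

def map_read(record):
    """Map phase: emit a (chromosome, 1) pair for each aligned read."""
    chrom, _pos = record  # record layout is an assumption for illustration
    return (chrom, 1)

def reduce_counts(pairs):
    """Reduce phase: sum the emitted counts, grouped by chromosome."""
    totals = defaultdict(int)
    for chrom, count in pairs:
        totals[chrom] += count
    return dict(totals)

reads = [("chr1", 100), ("chr2", 205), ("chr1", 390), ("chrX", 12)]
coverage = reduce_counts(map_read(r) for r in reads)
print(coverage)  # {'chr1': 2, 'chr2': 1, 'chrX': 1}
```

Because the map function is applied independently to each record and the reduce is a simple associative aggregation, the same computation scales out naturally as more genomes are added to the store.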