The Titan supercomputer here at ORNL was recently unveiled as the world's fastest (for now), which led to several of us talking to various news outlets such as the Washington Post, BBC, NPR and CNN. Certainly, we "apps guys" congratulate the hardware folks on the big iron they have managed to design and construct, and as we move towards the exascale there are exciting possibilities as to what one could theoretically do with such machines. But, now more than ever, it has become clear that the promise of these supercomputers will remain unrealized without concerted and sustained methodological and software development. As far as applications in the biological and soft matter sciences are concerned, little emphasis has been put on these aspects. Although, thanks largely to our own unfunded efforts, we are just about able to scale on Titan, doing better won't happen without new methods being developed by teams of theoreticians over several years. It makes little sense to me to invest billions of dollars over a decade or so in these machines but virtually nothing in what is needed to make them work well, does it?