It's a good question. Over the past several years we've heard more and more about 'Big Data'. Historians have, for decades, written macro- or micro-historical works. The idea is that if 'Big Data' is bigger than macro-history, a kind of peta-history on a scale completely overwhelming for an individual scholar, then we need something smaller than the micro to make dealing with such data feasible. We need to conceive of the smallest historical datum that is usable by a wide range of scholars, one that would also arm individual scholars with the means to chip away at Big Data sources around the topics and data they're interested in. We know historical scholarship is a collective enterprise; Big Data can easily overwhelm a single scholar, but NanoHistory responds by arming historical scholars en masse to assess, critique, and mash up the individual records and data of which Big Data and Open Sources consist.