From time to time I need to write ETL programs to move data from one data source (sometimes SQL Server) to another data source (often SQL Server). If things are going well, this is usually because I'm migrating data from a legacy app to one of our newer applications. Performance tends to be very high on my list of requirements: ensuring a quick turnaround time between making a change and seeing the result is very important in these types of apps.
Although these applications are typically .NET console apps created to ultimately run one time, the reality is that they can turn into pretty complex programs, and I feel they must be written as "production" code. I don't typically create unit tests for them, but that hasn't always been out of the question.
With that said, though, I don't want to use "heavy" tooling like NHibernate or Entity Framework to extract and load data. When I first started writing these programs I used ADO.NET, but I quickly fell victim to loads of duplication from copying and pasting. When .NET micro ORMs started to become more popular, they seemed like a perfect fit, and for some time I felt they were.
Switching to a micro ORM helped the writing and reading of my code significantly, which allowed me to focus on other aspects of these programs. But performance bottlenecks from querying and inserting data became more and more frustrating. I found myself writing caches for my query results, which helped but often felt clumsy and inconsistent. Caching my query results did speed up my queries, but it then became very apparent that I needed to start bulk inserting data, because my inserts were taking way too long.
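The two fixes described above are simple enough to sketch. Here's a minimal version of the idea — the class and method names are hypothetical, not from any library: memoize query results keyed by the SQL text, and push inserts through SqlBulkCopy instead of issuing one INSERT per row.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical helper illustrating the two fixes: caching query
// results and bulk inserting. This is a sketch, not CacheRepository's API.
public class EtlHelpers
{
    // Query results cached by SQL text, so repeated lookups
    // hit memory instead of the database.
    private readonly Dictionary<string, object> _queryCache =
        new Dictionary<string, object>();
    private readonly string _connectionString;

    public EtlHelpers(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<T> CachedQuery<T>(
        string sql, Func<IDbConnection, IEnumerable<T>> runQuery)
    {
        object cached;
        if (!_queryCache.TryGetValue(sql, out cached))
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                connection.Open();
                // Materialize once so the cached copy is stable.
                cached = new List<T>(runQuery(connection));
            }
            _queryCache[sql] = cached;
        }
        return (IEnumerable<T>)cached;
    }

    // Bulk insert a DataTable in one round trip instead of
    // one INSERT statement per row.
    public void BulkInsert(DataTable rows, string destinationTable)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = destinationTable;
                bulkCopy.WriteToServer(rows);
            }
        }
    }
}
```

The clumsiness I mention comes from scattering ad hoc versions of this across every migration step, which is exactly what pushed me toward a reusable tool.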
That's when I created CacheRepository. The tool is essentially just a wrapper on top of the micro ORM Dapper .NET and its extension Dapper-Extensions. The truth is, though, I could have written this using just about any micro ORM.
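For readers who haven't used Dapper, its appeal is that a query is a one-line extension method on an open connection; a wrapper like mine mostly adds caching and bulk writes around calls like the one below. This is a minimal sketch assuming the Dapper NuGet package is referenced; the table and class names are made up for illustration.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper; // provides the Query<T> extension method

// A hypothetical record shape for a legacy source table.
public class LegacyCustomer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class DapperExample
{
    public static List<LegacyCustomer> LoadCustomers(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper materializes result rows straight into POCOs.
            return connection
                .Query<LegacyCustomer>("SELECT Id, Name FROM Customers")
                .ToList();
        }
    }
}
```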
I've tried to stress usability as much as possible, so go try it out.