Our experimental analysis demonstrates the performance benefits of employing software combining to achieve recoverable synchronization and to design highly efficient fundamental recoverable data structures, such as stacks and queues. A second technique we present addresses more complex data structures, such as linked lists and tree-like structures that implement sets. Our experimental analysis also introduces a new way of assessing the cost of persistence instructions: rather than simply counting them, it separates them into categories based on their impact on performance. The analysis reveals that understanding the actual persistence cost of an algorithm on machines with real NVMM is more complicated than previously thought and requires a thorough evaluation, since the impact of different persistence instructions on performance can vary greatly.
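To make the notion of a persistence instruction concrete, the sketch below (an illustrative example of ours, not code from the evaluated algorithms) shows the two kinds of instructions such analyses typically count on x86 machines with NVMM: a persistent write-back (CLWB, via the _mm_clwb intrinsic) that flushes a cache line toward persistent memory, and an ordering fence (SFENCE); the names pwb, pfence and persist_store are hypothetical.

```c
#include <immintrin.h>  /* _mm_clwb, _mm_sfence (compile with -mclwb on x86-64) */
#include <stdint.h>

/* Persistent write-back: asks the CPU to write the cache line holding
 * *addr back toward NVMM, without necessarily evicting it. */
static inline void pwb(const void *addr) {
    _mm_clwb((void *)addr);
}

/* Persistence fence: orders preceding write-backs before subsequent
 * stores, so earlier flushed data reaches the persistence domain first. */
static inline void pfence(void) {
    _mm_sfence();
}

/* Illustrative persistent update of a single 64-bit field. */
static inline void persist_store(uint64_t *field, uint64_t value) {
    *field = value;   /* ordinary store to the NVMM-mapped location */
    pwb(field);       /* flush the containing cache line */
    pfence();         /* order the flush before later stores */
}
```

The point of categorizing such instructions rather than counting them is that, for instance, a pwb/pfence pair sitting on an operation's critical path would typically cost far more than one whose latency can be overlapped with other useful work.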