The Trouble with Explicit Loading and Saving
2011-02-13
Much of today’s users’ experience with common software is still shaped by a bottom-up approach, starting from what was possible within the constraints of hardware long ago. It’s as if most of what we have resembles Assembler, C and C++, not Lisp, Smalltalk or Haskell.
Of course, some of the constraints are still there and won’t magically disappear just because you take a top-down approach, starting from the user rather than the hardware.
Take the explicit loading and saving of files. The reason for it is the lack of affordable memory that offers relatively high capacity, is fast enough not to be too much of a bottleneck, and is persistent even without power. The usual workaround is a combination of hard disc drives and RAM of rather limited capacity.
The need for loading and saving files, even when no removable media is involved, can’t be understood without this technical background. A naive user might not even be aware of the existence of a hard disc drive and RAM; it’s all just the computer. Without a sufficient mental model, no predictions can be made. Instead of confidence and flexibility, you will likely see clinging to rituals known to be safe, even if they contain unnecessary steps.
The split between a file on disc and a file in memory becomes interesting if you open one file in several applications, or move or rename a file you have loaded somewhere. Instead of having one thing in one place with one current state, you get two or more things (or one thing with several states) in two or more places. If you want to rename a file you are currently working on (and exactly that work might be what leads to a better name), you either have to close it, rename the file and load it again, or use Save As and later remove the old version. Too complicated for a task that should be atomic.
Ideally, files should just appear open. Progressive rendering, caching and modularised applications where viewer components are loaded first could help to approach that ideal.
Having to save explicitly sucks. It’s not unreasonable, and I suspect common, for first-time computer users to expect changes to just persist, as that would be in line with real-world experience. If no revision control is involved, you might run into situations where you have to think about whether saving the current version will destroy a previous version you might want to keep. Saving shouldn’t be destructive.
Maybe one day we will have memory that combines all the desired characteristics, allowing real persistence and immediate access. Until then, it would be worth mimicking persistence, without destroying data, through automated commits to a revision system. Finding the right strategy regarding power consumption, noise and safety is tricky, of course. In the simplest case, such a system would permit you to just keep working, never interrupting your thought with repetitive management tasks like saving. More advanced use would include tagging states to return to them easily (resembling commits). Selected states could be collected in sets, resembling branches with a carefully crafted history, so they can be published. Thus instead of a hard break between simple use and the needs of software developers, there could be a progression.
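The automated-commit-plus-tagging idea above can be sketched in a few lines. All names here (`SnapshotStore`, `commit`, `tag`, `checkout`) are hypothetical illustrations of the concept, not a real library; a production version would persist to disc and prune history.

```python
import hashlib
import time


class SnapshotStore:
    """Keeps every saved state; nothing is ever overwritten."""

    def __init__(self):
        self._snapshots = []  # list of (timestamp, content-hash, content)
        self._tags = {}       # user-chosen names -> snapshot index

    def commit(self, content):
        """Record the current state automatically; returns its index."""
        digest = hashlib.sha256(content.encode()).hexdigest()
        # Skip if nothing changed since the last snapshot.
        if self._snapshots and self._snapshots[-1][1] == digest:
            return len(self._snapshots) - 1
        self._snapshots.append((time.time(), digest, content))
        return len(self._snapshots) - 1

    def tag(self, name, index=None):
        """Name a state so the user can return to it easily."""
        if index is None:
            index = len(self._snapshots) - 1
        self._tags[name] = index

    def checkout(self, ref):
        """Retrieve a state by tag name or numeric index."""
        index = self._tags.get(ref, ref)
        return self._snapshots[index][2]
```

The plain user only ever triggers `commit` implicitly while working; `tag` and curated sets of tagged states are the progression toward developer-style revision control the paragraph describes.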