The Trouble with Explicit Loading and Saving

Much of today’s user experience with common software is still shaped by a bottom-up approach, starting from what was possible within the constraints of hardware from long ago. It’s as if most of what we have resembles Assembler, C and C++, not Lisp, Smalltalk or Haskell.

Of course, some of the constraints are still there and won’t magically disappear just because you take a top-down approach, starting from the user, not the hardware side.

Take the explicit loading and saving of files. The reason for it is the lack of affordable memory that offers relatively high capacity, is fast enough not to be too much of a bottleneck, and is persistent even without power. The usual workaround is a combination of hard disc drives (large, persistent, slow) and RAM (fast, volatile, of rather limited capacity).

The need for loading and saving files, even when no removable media is involved, can’t be understood without this technical background. A naive user might not even be aware of the existence of a hard disc drive and RAM. It’s all just the computer. Without a sufficient mental model, no predictions can be made. Instead of confidence and flexibility, you will likely see clinging to rituals known to be safe, even if they contain unnecessary steps.

The split between a file on disc and a file in memory becomes interesting if you open one file in several applications, or move or rename a file you have loaded somewhere. Instead of having one thing in one place with one current state, you get two or more things (or one thing with several states) in two or more places. If you want to rename a file you are currently working on (and exactly that work might be what leads to a better name), you either have to close it, rename the file and load it again, or use Save As and later remove the old version. Too complicated for a task that should be atomic.

Ideally, files should just appear open. Progressive rendering, caching and modularised applications where viewer components are loaded first could help to approach that ideal.

Having to save explicitly sucks. It’s not unreasonable, and I suspect common, for first-time computer users to expect changes to just persist, as that would be in line with real-world experience. If no revision control is involved, you might run into situations where you have to think about whether saving the current version will destroy a previous version you might want to keep. Saving shouldn’t be destructive.

Maybe one day we will have memory that combines all the desired characteristics, allowing real persistence and immediate access. Until then, we should consider mimicking persistence, without destroying data, through automated commits to a revision system. Finding the right strategy regarding power consumption, noise and safety is tricky, of course. In the simplest case, such a system would permit you to just keep working, never interrupting your thought with repetitive management tasks like saving. More advanced use would include tagging states to return to them easily (resembling commits). Selected states could be collected in sets, resembling branches with a carefully crafted history, so they can be published. Thus, instead of a hard break between simple use and the needs of software developers, there could be a progression.
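To make the idea concrete, here is a minimal sketch in Python of what such automated, non-destructive persistence could look like. Everything here is hypothetical (the SnapshotStore class and its record/tag/restore methods are invented for illustration); a real system would commit to durable storage and handle large files incrementally.

```python
import hashlib


class SnapshotStore:
    """Hypothetical sketch: every change is persisted automatically as an
    immutable snapshot; tags name states worth returning to (like commits)."""

    def __init__(self):
        self._snapshots = {}  # content hash -> document state
        self._history = []    # ordered hashes: the full, non-destructive timeline
        self._tags = {}       # tag name -> content hash

    def record(self, content: str) -> str:
        """Called automatically on every change; nothing is ever overwritten."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self._snapshots.setdefault(digest, content)
        self._history.append(digest)
        return digest

    def tag(self, name: str, digest: str) -> None:
        """Mark a state so it can be found again easily."""
        self._tags[name] = digest

    def restore(self, name: str) -> str:
        """Return to a tagged state without destroying anything newer."""
        return self._snapshots[self._tags[name]]
```

With something like this in place, “saving” disappears as a user task: the editor calls record() on every change, and tagging replaces the destructive overwrite that Save performs today.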

About thorwil
I'm a designer from Germany. My main interests are visual and interaction design, free/open-source software and (electronic) music.

9 Responses to The Trouble with Explicit Loading and Saving

  1. Anonymous says:

    I’ve read about this and other similar software idiosyncrasies in Why Software Sucks by D. S. Platt.
    Our goal should be to make computing easier. We just need to be careful to avoid too many leaky abstractions.

  2. I’m not actually advocating Vim as a paragon of software usability, but I find it interesting that Vim 7.3 has persistent undo. You can save a file, exit the editor, launch it later and then undo changes you made in the previous editing session.

  3. Anonymous says:

    And I forgot: I think that the most important thing missing in most of today’s programs and systems is non-destructive editing and versioning by default.

  4. Janne says:

    Interesting. That said, the most natural way I find of making a branch of a document (a new edit of an image, say, or a new presentation based on an earlier one) is to open the original, start editing it until it’s clear I’m on the right track with it, then save it under a new name and new location, effectively creating the new document. If I’m not happy with it I just discard the opened file and start over. You’d need some way to accommodate that kind of workflow.

    Also, touching the original is problematic in some cases. A text editor can have persistent undo, but it’s much more difficult with things like image files and video files. Diffs will take a _lot_ of space – you can’t save the operations you did, then do them in reverse since these file formats are lossy. You can’t ever go back to your previous state without explicitly or implicitly saving all previous versions of the document.

    • Davorin Šego says:

      Why not save diffs as a set of operations that were applied, like a graph of changes? For example, editing a picture would produce something like this: contrast 50% -> crop 10px 10px 200px 200px -> sharpen 5%.
      Then you could save these transformations as a JSON document in CouchDB. With CouchDB you get automatic revisions and can sync with remote computers.

      • Davorin Šego says:

        By the way, Novacut is doing something similar with video. Their aim is to enable collaborative video editing.

    • thorwil says:

      I think such a workflow would be covered by just tagging the first state, editing, then creating a new tag if satisfied. If a new location happens to be on another storage device, things become a little more interesting, but relations between different versions stored on different devices should be tracked and kept intact anyway.

  5. Thilak Nathen says:

    I think abstracting computing concepts and paradigms is incredibly dangerous for both the end user and the developer. Regardless of how powerful the computer becomes, it is important to ensure the end user isn’t dumbed down to the point that they don’t understand files or what a save operation means. A save operation is saying, “I’m gonna commit this and I know it’s ok”.

    • thorwil says:

      I agree in as far as we have to be wary of leaky abstractions. However, a well-done abstraction means that the user does not have to know or care what’s below.

      You speak of dumbing down. We should be speaking about minimising the required knowledge to get the job done, such that users can concentrate on their actual objectives. That’s not dumb, that’s focus!

      And no, currently, a Save operation can also mean “if the app crashes, I will be better off having stored this version” and sometimes it means “I made such a strong habit of saving frequently, that I just thoughtlessly saved this bullshit over the good version I should have kept”.
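The operation-graph idea suggested in the comments above (contrast -> crop -> sharpen, stored as plain data) can be sketched in a few lines of Python. This is only an illustration under assumed names: the OPERATIONS table and replay function are invented, and real image operations would act on pixels rather than metadata.

```python
import json

# Hypothetical sketch: edits are stored as a list of named operations
# rather than as pixel diffs, so the history stays small and replayable.
OPERATIONS = {
    "contrast": lambda img, amount: {**img, "contrast": amount},
    "crop":     lambda img, x, y, w, h: {**img, "width": w, "height": h},
    "sharpen":  lambda img, amount: {**img, "sharpen": amount},
}


def replay(original, ops):
    """Rebuild any state by replaying the operation list against the
    original, which itself is never touched (non-destructive editing)."""
    state = dict(original)
    for name, *args in ops:
        state = OPERATIONS[name](state, *args)
    return state


# The history is plain data, so it can be serialised as JSON, e.g. for a
# document store such as CouchDB, and then synced, branched or merged.
history = [("contrast", 50), ("crop", 10, 10, 200, 200), ("sharpen", 5)]
as_json = json.dumps(history)
```

Because any state is just the original plus a prefix of the operation list, going back to an earlier version never requires storing full copies, which addresses the space concern raised for lossy formats, at the cost of replay time.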
