I’ve been laboriously going through old negatives, and while you can use services like ScanCafe to do this, I find that it isn’t the scanning time but the time spent looking through the negatives and getting clues from the photos themselves (like when the prints were made) that is the real value. Today we never have to worry about where or when a photo was taken, but reconstructing the timeline from 10 years ago isn’t so easy.
I also love the advice from scanyourentirelife.com.
Here’s my workflow so far:
- Organize the negatives in a bin by rough date of printing. Most negatives come in an envelope from the processor, so you can usually figure that out.
- Go through each negative set starting from frame 1. I use VueScan to do this with an old Minolta Dimage 5400. This thing only costs $100 or so on eBay and is well worth it for the quality. It takes some time to learn, but getting good-quality 5400 dpi scans is pretty amazing. I normally just leave them as “loose” JPEGs of 10-12MB rather than 100-400MB TIFFs, with little loss in image quality.
- When you go through, create a series of directories with Year/Month/Date organization (there’s a small filing sketch after this list). You will find that you move files around quite a bit as you figure out the timeline. That’s because the development date of the film, which is easy to find, usually isn’t the same as when you took the photos. I seem to have always left rolls for a month or two before developing, and in those days it could sometimes take a year to expose a whole roll of casual photographs.
- When you figure out the timeline, write it down on the outside of the envelope and note the date it was scanned. I’ve rescanned images so often that this is a hard lesson learned!
- As you go through each year, have a marker so you don’t forget where you are. This process takes a while.
- File naming. It is incredible how just looking at a photo takes you right back to the place and time. I actually name the files with
date - people - location - film type - roll - exposure number - scan date.jpg
because the title seems like a good human-readable place to put things. It takes longer, but it helps. The Mac OS X Rename command (you get it by right-clicking on a set of files) is really useful when you want to do a big rename. OK, I haven’t done exposure number yet, but since order is important it makes some sense, and the photo processor usually prints some sort of roll number that makes the name unique. The longer the better in some ways; the naming sketch after this list shows the scheme.
- Metadata. To actually hack at the metadata of the JPEG, I usually use iPhoto (not Photos, which has an annoying bug unless you suck all your photos into it). I’ve tried Picasa as well, but like iPhoto it is now orphaned software. Sigh. I use it to change the date on the photo, which is useful since most programs read this when sorting by date. Batch Change is super useful because it lets you say a series of photos should be 5 minutes apart, so you get a rough timeline (a script that does roughly the same thing appears after this list). I do select “modify original photo.”
- Then I look at the photo and figure out the rough location, and I use the location feature in the Info pane to place the photo. One of the reasons I use long file names is that metadata is incredibly hard to get right; no one really focuses on it. The date seems to work, but getting the location right and putting it into the EXIF seems hard. iPhoto has its own database of faces and locations but doesn’t seem to have an easy way of attaching them to the photos. I need to spend some time finding the right tool (one way to write GPS coordinates is sketched after this list). Picasa seems to add face information to the photo, but I’m confused by the format. More work here.
- Faces. Yes, we will all forget, and when your grandchildren are looking at these photos (a real possibility with digital), it would be nice to have some more information. This is really time consuming; I’ve tried iPhoto, Picasa, and some others doing automatic detection, but they seem to miss most faces, so it is a matter of going through each one and tagging. The big issue is how to get the data out of iPhoto and ideally into every JPEG (one possible approach is sketched below).
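To make the directory layout and naming scheme concrete, here is a minimal Python sketch. The build_name() and file_photo() helpers and all the example values are my own illustration, not part of any tool:

```python
from pathlib import Path
import shutil

def build_name(date, people, location, film, roll, exposure, scan_date):
    """Build 'date - people - location - film - roll - exposure - scan date.jpg'."""
    return " - ".join([date, people, location, film, roll, exposure, scan_date]) + ".jpg"

def file_photo(src, root, date, **fields):
    """Copy a scan into root/YYYY/MM/ under its long human-readable name."""
    year, month, _day = date.split("-")
    dest_dir = Path(root) / year / month
    dest_dir.mkdir(parents=True, exist_ok=True)   # create Year/Month as needed
    dest = dest_dir / build_name(date, **fields)
    shutil.copy2(src, dest)                       # copy rather than move, to be safe
    return dest

# Example: frame 12 of roll A1234, shot around June 1998, scanned in 2016
file_photo("scan_0012.jpg", "Photos", date="1998-06-15",
           people="Mom, Dad", location="Seattle WA",
           film="Kodak Gold 200", roll="A1234",
           exposure="12", scan_date="2016-03-01")
```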
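Since iPhoto is orphaned, here is a rough stand-in for its Batch Change written with the third-party piexif library; this is my substitution, not what iPhoto does internally. It stamps a run of scans with EXIF dates five minutes apart so date-sorting programs keep the frame order:

```python
from datetime import datetime, timedelta
import glob

import piexif  # pip install piexif

def stamp_dates(paths, start, step_minutes=5):
    """Write EXIF capture dates spaced step_minutes apart, in list order."""
    for i, path in enumerate(paths):
        when = start + timedelta(minutes=i * step_minutes)
        stamp = when.strftime("%Y:%m:%d %H:%M:%S").encode()
        exif = piexif.load(path)
        exif["Exif"][piexif.ExifIFD.DateTimeOriginal] = stamp
        exif["0th"][piexif.ImageIFD.DateTime] = stamp
        piexif.insert(piexif.dump(exif), path)  # rewrites the file in place

# Sort by name so exposure order is preserved; the path is hypothetical
stamp_dates(sorted(glob.glob("Photos/1998/06/*.jpg")),
            start=datetime(1998, 6, 15, 14, 0))
```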
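One way to write a location into the EXIF is to shell out to Phil Harvey’s exiftool, which must be installed separately. This is a sketch, not what iPhoto does, and the coordinates and file name are just examples; looking up the right coordinates is still manual work:

```python
import subprocess

def set_gps(path, lat, lon):
    """Write decimal GPS coordinates into the EXIF via exiftool."""
    subprocess.run([
        "exiftool", "-overwrite_original",
        f"-GPSLatitude={abs(lat)}",  f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}", f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        path,
    ], check=True)

set_gps("scan_0012.jpg", 47.6062, -122.3321)  # Seattle, as an example
```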
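And one possible way to get face names into the JPEG itself: write them as XMP PersonInImage entries (a standard IPTC Extension field) with exiftool. Treat this as a sketch; I can’t promise every viewer reads this field:

```python
import subprocess

def tag_people(path, names):
    """Append each name to the XMP-iptcExt PersonInImage list."""
    args = ["exiftool", "-overwrite_original"]
    for name in names:
        args.append(f"-XMP-iptcExt:PersonInImage+={name}")
    subprocess.run(args + [path], check=True)

tag_people("scan_0012.jpg", ["Grandma", "Uncle Joe"])  # hypothetical names
```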
Tips and tricks with VueScan
This is a nice tool for scanning, but it is complex. I normally leave it in Professional mode to get maximum control, and here is how I do it:
- Autoskew. Somewhere I read that you should turn this off, again to get the most resolution, and with film you don’t often have misregistration anyway.
- Grain dissolver. This sounds like software, but it is actually a softer light in the Minolta itself. Sounds neat, so I use it.
- Multi exposure. Another setting that apparently scans with the lamp lower and then higher, so you get a little more dynamic range.
- Crop. This is the hardest to get right; the Auto setting doesn’t seem to work well at all, so I normally have to go through and adjust each frame. The crop is sticky between scans, so you have to redo it every time. Ugh, pain.
- Infrared clean. This uses a separate infrared channel that the scanner reads to remove scratches from the film. Pretty neat. I leave it on Light since most of the negatives I have are pretty clean. I don’t touch the other settings, because most of them are digital corrections, and if you really have to do that, I would use Photoshop or something.
- Color balance. I find Auto comes out pretty blue, so I normally use White balance.
- White point. I find this to be one of the most important settings. The default is set to crush 1% of the whites; I normally leave it at 0.001% unless the photo is too dark. It probably hurts dynamic range a bit, but I use a 16-bit color space, so it’s less of a big deal, I hope.
Then there are things that vary depending on how high-quality a camera and lens were used. For point-and-shoot cameras with relatively low-resolution lenses, here is what I set:
- Resolution: 5400 dpi for JPEG. There is much written on this, but basically a crappy old point-and-shoot with its lens probably isn’t more than a 3-4MP camera. Film is an analog medium, though, so how do you figure out the equivalent resolution? Most of the blogs I’ve read say the effective resolution of film is probably about 4000 dpi, but my old Minolta only does 2700 dpi or 5400 dpi. In the end, I figured disk space isn’t much of a problem anymore, so even for crappy prints I just leave it at 5400, because a JPEG is still just 10MB or so at quality 91. For really bad point-and-shoots, I’m sure 2700 is fine and roughly ¼ the size; the arithmetic is sketched after this list.
- Color space: AdobeRGB for JPEG. This is one of the hardest questions. JPEGs are 8-bit, so for crappy point-and-shoot scans I use JPEG with the Adobe RGB color space to get a slightly wider gamut, although you have to pay attention when you put the photos out on the web. If you are never going to print, sRGB is fine, since most monitors can only handle that, whereas AdobeRGB is more for wider-gamut printers.
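If you want to sanity-check the resolution choice, the arithmetic is simple: a full 35mm frame is 36 x 24 mm, or about 1.42 x 0.94 inches. A quick sketch:

```python
# Pixel dimensions and megapixels for a 35mm frame (36 x 24 mm) at a given dpi
def frame_pixels(dpi, width_in=36 / 25.4, height_in=24 / 25.4):
    w, h = round(dpi * width_in), round(dpi * height_in)
    return w, h, round(w * h / 1e6, 1)

print(frame_pixels(2700))  # (3827, 2551, 9.8)  -> roughly 10MP
print(frame_pixels(5400))  # (7654, 5102, 39.1) -> roughly 39MP
```

So 5400 dpi yields about four times the pixels of 2700 dpi, which is why the 2700 dpi files come out at roughly a quarter the size.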
For really nice shots taken with a high-quality camera (my old Nikon N80), a nice lens (the 50mm f/1.8), and good developing (A&I), you need a lot more disk. If I know the photo was taken with a high-quality SLR like the N80 with a good lens, I’ll go to:
- 5400 dpi for resolution.
- Color space: ProPhotoRGB, which gives 16 bits per channel at the cost of gigantic TIFF files, typically 300MB at 5400 dpi versus an 11MB JPEG. If I really care, I’ll produce the scanner raw file instead, which is more like 200MB and is actually an RGB64 file: 16 bits each of red, green, and blue, plus 16 bits for the infrared channel, so it is the closest thing to raw and really archival. You then have to post-process this monster RGBI file with VueScan (a sketch for peeking inside one follows below).
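If you do keep the raw files, here is a sketch for peeking inside one, assuming (as described above) it is a 16-bit, four-channel RGB-plus-infrared TIFF; the channel order is my assumption, so check your own files. It uses the third-party tifffile package:

```python
import tifffile  # pip install tifffile

img = tifffile.imread("scan_0012_raw.tif")  # expecting shape (H, W, 4), dtype uint16
rgb = img[..., :3]       # the visible channels
infrared = img[..., 3]   # the dust/scratch channel that Infrared Clean uses
print(img.shape, img.dtype)
```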