Testing my Vivitar 2x multiplier

First off, let me start by noting that this multiplier is ancient. There are no contacts for any automation, so using it is old-school photography: you control the aperture, shutter speed, and focus yourself. Yes, you can do stopped-down metering, but it’s not necessary. Of course, the multiplier causes a loss of two stops, so f/16 had to become f/8 when I added the multiplier into the mix in order to get the same amount of light at the same shutter speed. In this case, I used the Sunny f/16 rule and shot all the pictures at f/16 (or equivalent) and 1/180s.
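For the record, here’s the arithmetic behind the two-stop figure: a 2x converter doubles the effective f-number (the focal length doubles while the physical aperture stays the same size), and each full stop is a factor of $\sqrt{2}$ in f-number, so doubling it costs exactly two stops:

$$N_{\text{eff}} = m \times N \quad\Rightarrow\quad 2 \times 8 = 16$$

That’s why the lens set to f/8 behind the multiplier behaves like f/16.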

The baseline shot, at 50mm (approximating the field of view of a human eye) is:

Now, here is the series from a Sigma 70-300mm lens (70, 100, 200, and 300):

Now, the series from the Sigma 70-300mm lens (70, 100, 200, and 300) using the multiplier:

As you can see, quite the impact! The series with the multiplier is slightly darker, a result of slightly less light hitting the CCD, but given that I snapped these quickly and didn’t do any clean-up other than resizing, I’m pretty impressed with the optics in the Vivitar device, as I didn’t readily detect any quality degradation. Of course, my eyes may not be as keen or critical in that regard as a professional’s might be.

As a side note, I never realized how weather beaten that silo was until I got a lens that close to it…

Valentin Imperial Maya

Some pictures from around the Valentin Imperial Maya in Mexico (2009). They’re sized to 30% of the original. Remember, click on a ribbon image to get a larger shot and click on the larger shot to see the original (warning, some are fairly large).

A note on the resort… Like most in the Mayan Riviera, this was very good. If I had one complaint, it would be the lack of a buffet dinner option. Sometimes you want to just roll in, grab a quick bite, and roll out. That wasn’t an option here. Beyond that, the food was very good, the service was excellent, and the staff were awesome (especially the ones at the pool bar: Fabiana, Ramon, Audel, and Diblain).

Some essential tools for the amateur dSLR photographer in the wild

If you have a digital SLR, there are some great tools out there, software and hardware, that are either free or won’t totally break the bank. Of course, photography can be an expensive hobby, so your mileage may vary here.

In the Field

I bought an iPod Touch as a handy field computer for taking pictures in the great outdoors, and this alone is pretty handy because you can store all sorts of reference material on it. I don’t use it for music (I have another iPod for that); I use it for photography. That brings me to a very handy application if you like nature photography: DoF calculator. This little app for the iPhone/iTouch allows you to select your camera model (or enter the baseline information directly) and specify distance, focal length, and aperture to get not only the depth of field but also the hyperfocal distance. There are two or three other DoF calculators, but I found this one to be the best. It’s also a massive $1.99 to buy.
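If you’re curious what an app like that is doing under the hood, it’s the standard thin-lens depth-of-field formulas. Here’s a minimal sketch (the camera values and the APS-C circle of confusion are assumptions for illustration, not anything taken from the app):

    #include <cstdio>

    // Standard depth-of-field formulas, all distances in millimetres.
    // Hyperfocal distance: H = f^2 / (N * c) + f
    // Near limit: H*s / (H + (s - f));  far limit: H*s / (H - (s - f))
    int main() {
        const double f = 300.0;   // focal length, mm
        const double N = 8.0;     // aperture (f-number)
        const double c = 0.02;    // circle of confusion, mm (assumption: APS-C)
        const double s = 10000.0; // focus distance, mm (10 m)

        double H = f * f / (N * c) + f;
        double nearLimit = H * s / (H + (s - f));

        printf("Hyperfocal distance: %.1f m\n", H / 1000.0);
        printf("Near limit: %.1f m\n", nearLimit / 1000.0);
        if (s < H)
            printf("Far limit: %.1f m\n", H * s / (H - (s - f)) / 1000.0);
        else
            printf("Far limit: infinity\n");
        return 0;
    }

At 300mm and f/8, that works out to a depth of field of well under half a metre at 10 metres, which is exactly why a calculator like this earns its $1.99 in the field.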

I like doing nature photography and spent a fair amount of last summer wandering conservation areas taking a variety of pictures. At one point, during a camping trip, I got an opportunity to take some pictures of a great blue heron catching dinner. The pictures came out okay, but the problem was that my telephoto only goes to 300mm and I really needed a bit more than that. Needless to say (once bitten, twice shy), I don’t want to be caught with some awesome opportunity and no suitable lens to capture it, but telephoto lenses above 300mm get pretty expensive… There is a solution, though, and it’s significantly cheaper: a 2x teleconverter. Okay, at $370 it’s not cheap, but an 800mm lens is $7800! The doubler takes my 300mm telephoto to a 600mm telephoto, which isn’t quite 800mm, but it also didn’t cost me close to $8000. You can also get a 1.4x teleconverter for a little less money and a little less range.

Bear in mind that a teleconverter will have some effect on the overall quality of the result, but it’s generally minimal and there are ways to deal with it after the fact. Also, most lens faults are worst towards the edges of the image circle, and the average dSLR has a smaller sensor than standard 35mm film, so it crops those outer edges away anyway.

Get a good carrying case! I got a Lowepro Slingshot 300 that allows me access to a variety of compartments without removing the backpack from my back. A very handy thing when it’s not always feasible to take it off.

Get a tripod! Now, getting a good one can be expensive, so you’ll have to balance your needs against the price you can pay, but a tripod is essential to outdoor photography. Remember, a lot of outdoor shots will be taken with a telephoto, and that makes camera shake much more likely, even on cameras with shake reduction. A good rule of thumb is that your shutter speed should be no slower than the reciprocal of your focal length. So, for example, with a 200mm focal length, go no slower than 1/200th of a second (a sketch of that rule follows below). Better yet, just put the camera on a tripod. So, which tripod? Well, in the wild, you want something that is both light and sturdy. Mine is a 4-segment carbon fibre model from Manfrotto, but there are very good cheaper options out there, so shop around. Just make sure that you can carry it for extended periods of time and that it has a hook for weighing it down in windy conditions.
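Here’s that rule of thumb in code form. One wrinkle worth knowing: on a crop-sensor dSLR, it’s arguably the 35mm-equivalent focal length that matters, so this sketch applies a crop factor (the 1.5x value is an assumption, typical for APS-C):

    #include <cstdio>

    // Reciprocal handholding rule: shutter speed no slower than
    // 1 / (35mm-equivalent focal length).
    int main() {
        const double focal = 200.0; // lens focal length, mm
        const double crop  = 1.5;   // crop factor (assumption: APS-C)
        printf("Slowest handheld shutter: 1/%.0f s\n", focal * crop);
        return 0;
    }

So that 200mm lens on a crop body really wants 1/300s or faster, which in anything but bright light is another argument for the tripod.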

That’s the tripod, but what about the head? Most consumer tripods come with a head already attached, but most of these aren’t really the best option for nature photography. They’re okay when you have plenty of time to line up the shot, but most wildlife isn’t anywhere near that patient. A good tripod head can make a world of difference, so some features to consider are:

  • Quick release – allows you to remove the camera from the tripod without fuss.
  • Easily adjusted – I have a “joystick” style head that allows me to freely and quickly move the head by just gripping it.
  • Levelling bubble – useful if you want to make sure you’re square.

That’s the basics, but you’ll probably accumulate more stuff as you go. I’ve been adding to my field collection with plamps, translucent filters, small reflectors, etc. The hardware is the expensive part: you get hooked, and then you start adding to your collection so that you can do more.

One more thought: don’t discount the use of a flash in the field. Mine, sadly, is a little under-powered, but it is still handy for filling in where the shadows are quite deep. The little flash on the camera itself is often useless in this role, so it’s worth looking for something a bit more powerful.

Back at the Computer

Let’s face it, the best piece of software for post-processing your digital images is Adobe Photoshop (I have CS3), but it comes with a hefty price tag. There are, however, some alternatives that will let you get the job done and done nicely. Before I get into those: you are shooting in RAW format, right? If you aren’t, then get a bigger storage card and find out how to enable it for your camera model. If you shoot in JPEG, you’re giving up enormous control over your image for absolutely no gain, so switch your camera to RAW and forget it ever had a JPEG setting. Now, onto the tools…

Raw Therapee is, by far, the best piece of software you can get for RAW image processing outside of Adobe Photoshop. It supports a huge variety of formats, can create JPEG files from your source (which is why you don’t need your camera to do it for you), and gives you a lot of fine-tuned control over white balance, exposure, colour balance, and more. The best part? It’s absolutely free and runs on Linux and Windows.

The GIMP (aka Gnu Image Manipulation Program) is the closest you’ll get to Photoshop without having Photoshop. It does a ton of stuff, is constantly being worked on, has a huge array of plugins and enhancements, and can be used for much more than image processing. Best of all, it’s free and runs on absolutely everything.

In terms of printing the final result, inkjet printers have come a long way. Yes, you can take your SD or CompactFlash card off to the nearest department store or photography place to get your shots printed, but if you’re like me and bought a dSLR because you were unlikely to take your film in for development, then a printer is a must. As with most things, price often determines quality, and that’s basically true for inkjet printers, but it’s also very true of the paper you use. Avoid printing on regular paper; it’s just a waste of your ink because the final result will look like crap regardless of how good your inkjet is. Get proper paper.

For the printer, what you get will determine a lot of what you can print. Your basic point-and-shoot can usually print a nicely bordered image on 8.5 x 11 stock; anything larger will require software scaling, and that can be iffy at best because the software has to “guess” (interpolate) each new pixel being added, which easily introduces artifacts. On the dSLR front, the resolution of the camera roughly determines the native size of an unscaled print. Mine, at 10.1 megapixels, is about an 11 x 17 print unscaled.
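The arithmetic behind that native-size estimate is just pixel dimensions divided by print resolution. A quick sketch (the pixel counts and the 240 dpi target are assumptions, roughly matching a 10-megapixel sensor and common inkjet practice):

    #include <cstdio>

    // Native print size = pixel dimensions / print resolution (dots per inch).
    int main() {
        const double width_px  = 3872.0; // assumption: typical 10 MP dSLR
        const double height_px = 2592.0;
        const double dpi       = 240.0;  // assumption: common inkjet target
        printf("Native print: %.1f x %.1f inches\n",
               width_px / dpi, height_px / dpi);
        return 0;
    }

That comes out to roughly 16 x 11 inches, which is why an 11 x 17 sheet is about the ceiling for an unscaled print from a camera in this class.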

So, based on what you want to print, pick your printer. Personally, I highly recommend the Canon Pixma Pro 9000, which is what I have. It has 8 ink tanks and will print up to 13 x 19 inch output. The tanks have a reasonable life, are individually replaceable, and aren’t too badly priced. The output quality is superb; Canon claims photo-lab quality and I believe it. Mind you, this is the higher end of the prosumer market, and there are excellent alternatives for the less avid at a lot less money. Shop around.

By the way, don’t buy a printer under the assumption that you’ll save money. You probably won’t. The advantage of your own printer is time and instant gratification, along with a lot more paper options; it’s not money.

Conclusion

Okay, dSLR photography isn’t cheap, especially if you want to get out there and take great shots of our wilderness. However, if you have reasonable financial means, you can do some really great stuff and not break the bank doing it. Build up your collection slowly and have fun.

Managed C++ or The Good, the Bad, and the Ugly

A little while ago, in a discussion with a co-worker, I was describing some of the quirks I’d discovered while playing around with managed C++ and coined the expression ‘C+-’ for it. He got a pretty good laugh out of it. It is, however, somewhat appropriate…

This is not an in-depth review of the language, just a few small observations I’ve come up with as I’ve been playing with it. There are a lot of good reviews out there and, in general, using Managed C++ is not a whole lot different from working with C# in the .NET environment.

The Good

  • The only .NET language that can mix managed and unmanaged code in the same source file. This may not seem like a monster win, but it is. The .NET framework and C++ are both very powerful, and this combination gives you access to the classes in the framework while letting you go to as low a level as you need when performance is important. You can’t do that in C#, for example (see the sketch after this list).
  • Some of the common C++ coding errors are reduced or even eliminated. I do a lot of code reviews, and one of the biggest things that keeps coming up is memory leakage. An awful lot of people forget to clean up memory when they’re done and, over time, that makes the software unstable. Managed C++ won’t eliminate this, but it reduces it. If, being an old-school C++ guy, you don’t think this is valuable, then you should take it up with the C++ standards committee; a similar memory management model is slated for the next revision of the standard.
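To make the mixing concrete, here’s a minimal sketch, assuming a project compiled with /clr (the function and values are purely illustrative): native STL code and managed code in the same translation unit, with the managed side calling straight into the native side.

    // Compiled with /clr: native and managed code in one file.
    #include <vector>

    #pragma managed(push, off)
    // Plain native C++: no garbage collector, full low-level control.
    double sum(const std::vector<double>& values) {
        double total = 0.0;
        for (size_t i = 0; i < values.size(); ++i)
            total += values[i];
        return total;
    }
    #pragma managed(pop)

    // Managed code in the same file, calling the native function directly.
    int main() {
        std::vector<double> v;
        v.push_back(1.5);
        v.push_back(2.5);
        System::Console::WriteLine("Sum: {0}", sum(v));
        return 0;
    }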

The Bad

  • Some of the language semantics are weird. For example: String^ s = gcnew String(L"example"); gives you a “handle” to a string on the managed heap. This is compiler laziness. There are two things indicating that the object is on the managed heap: the ^ symbol instead of a *, and the use of gcnew instead of new when constructing the object. One of the two is needed, but both? After the object is created, either way, you use the same semantics to access its methods and members. Net effect: the compiler should be able to tell which heap holds the object from the ^ alone, or from the gcnew alone (see the sketch after this list).
  • No bitfields in a managed class or struct. Obviously you can work around this by making the bitfield unmanaged, but there is a cost associated with mixed code. More importantly, the implication is that your control over memory usage is massively reduced, and memory control is a big feature of C++. Basically, your control over memory packing is non-existent.
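A minimal sketch of the redundancy in that first complaint, with the same idea on both heaps (again assuming /clr; purely illustrative):

    // Compiled with /clr.
    #include <string>

    int main() {
        // Native heap: * and new, cleaned up manually.
        std::string* n = new std::string("native");
        delete n;

        // Managed heap: ^ and gcnew, collected for you. Both the ^ and
        // the gcnew say "managed heap" -- the redundancy noted above.
        System::String^ m = gcnew System::String(L"managed");
        System::Console::WriteLine(m);
        return 0;
    }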

The Ugly

  • Visual Studio insists on sticking everything in the header. When I say everything, I mean everything. The project gets created with a .h and a .cpp file, but the .cpp file is basically empty, containing only a #include for the header; it exists purely to give the compiler a translation unit, and everything else lands in the header file. Huh? I’m okay with simple methods being in the header, usually one-liners, but massive methods get stuffed in there as well. If you’re like me, you’ll end up moving the code around, but be careful, since that can mess up the IDE if you’re doing any Windows Forms development.
  • Visual Studio insists on sticking the visibility declaration (public, protected, private) on every entry. While it’s not a big deal, C++ is NOT C# and it just looks ugly. I end up cleaning this up, but it’s a pain. Generally, I avoid the visual designer after the initial layout just to dodge this and the previous complaint.

The Conclusion

I’m not quite as much of a C++ purist as my aforementioned co-worker, so managed C++ (or C+-, if you prefer) is not a language I’m going to avoid. There are some ups and some downs, but I’d rather use it than C# when it comes to .NET development. Anyways, these are just some minor thoughts that have come to me as I’ve worked with the environment; your mileage may vary.

Postscript

If you’re looking at doing Windows Forms development with managed C++, be careful with x64 vs x86 and third-party libraries. A lot of control libraries are written in C#, which is fine, and are compiled for “Any CPU”, allowing them to be used by both 32-bit and 64-bit .NET code, but not all are done that way. If a control is explicitly compiled for x64, it will be unusable in the forms designer: you can still use it in code, but dragging it onto the form in the designer will throw an error. This is because Visual Studio is a 32-bit app and can’t load visual controls explicitly targeted at the x64 CPU. Just an FYI.

Witnessing History

There are moments in history where you always remember where you were when they happened. You hear this from people who watched the first man on the moon, and you’ll hear it again after tonight. Tonight, for the first time in the history of the United States, the offices of President and Vice President will not be held by two white men. Think about that: in the “Land of the Free”, they’ve never elected anyone but two white men, but that will change tonight. It’s historic, and we get to witness it.

What I think is that, tonight, we’ll see a very visible sign of Martin Luther King’s “I Have a Dream” speech. I think we’re going to witness the first black President of the United States, 143 years after the American Civil War. Think about that: 143 years. It’s incredible that it’s taken this long in some senses, but not entirely. If you consider that, in 1968, the United States Supreme Court declared all forms of segregation unconstitutional, then you only have 40 years where blacks were, at least legally, equal to whites. 40 years. That’s nothing. In a sense, America has come a long way in a short time. Good on them.

Welcome to history: you’re fortunate to be witnessing a defining moment, so I suggest you mark your location now. After all, the “where were you when Obama became President” question is going to be something you’ll need to answer.
