Since 2009 I have worked with students at Vassar to create ObjectVRs of clothing from the Drama Department’s collection. The process involves mounting the garment on a mannequin, placing the mannequin on a large turntable, lighting it, and taking at least 18 high-resolution still photographs at defined intervals of the rotation. These stills are then “stitched” together using software such as Object2VR and output as QuickTime or HTML5 movies that allow interaction from the user, who can rotate the object or zoom in on it as desired. Capturing the detail revealed by zooming with conventional photography would require hundreds of close-up shots, and in this format the user has the added benefit of experiencing a sense of the object’s three-dimensionality. The hardest part of the process is properly mounting the garment on a mannequin and lighting it, which is true even if only a single front view will be photographed. I therefore find it extremely worthwhile to photograph objects this way when they are already mounted for an exhibition, since it could be decades before an object is mounted again.
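The capture plan described above, at least 18 stills spaced evenly around a full 360° rotation, can be sketched as a short calculation. This is only an illustration of the arithmetic; the function name and the idea of pre-computing a shot list are my own, not part of any particular software’s workflow.

```python
def capture_angles(num_shots=18):
    """Return the turntable angle in degrees for each still,
    evenly spaced around a full 360-degree rotation."""
    if num_shots < 1:
        raise ValueError("need at least one shot")
    step = 360 / num_shots
    return [round(i * step, 1) for i in range(num_shots)]

# With 18 shots, the turntable advances 20 degrees between stills:
# 0, 20, 40, ... 340 (the 360-degree position repeats the first shot).
angles = capture_angles(18)
print(angles)
```

More shots mean smoother rotation in the finished movie at the cost of more photography and processing time; 18 shots (20° steps) is a practical minimum.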
See an example ObjectVR view at http://vcomeka.com/vccc/VR/VC2004.001/VC2004.001.htm
All ObjectVRs currently in the Vassar digital collection are listed at http://vcomeka.com/vccc/items/browse?collection=6
In 2010, one of my students created a series of YouTube videos sharing our process for creating ObjectVRs, which you can see at http://www.ardenkirkland.com/work/portfolio/360-photography-tutorials/. However, our process has since evolved to use different software, Object2VR. I shared the steps for processing in that software in a session at THATCamp Museums NYC, held at the Bard Graduate Center in 2012; instructions are available at http://www.ardenkirkland.com/work/wp-content/uploads/2014/12/ObjectVRinstructions.pdf.