Tuesday, 30 April 2013

Making 3D Scans of Artefacts for Community and Public Archaeology

Part II- The Nitty-Gritty of 3D Scanning

Click here to see Part I of this post, where I discuss some background about 3D scanning in community and public archaeology.

To test out the potential of 3D modelling in archaeology, I made 3D scans of two artefacts on loan from 
Parks Canada at Western University. These artefacts were collected from Mercy Bay, NWT by a Parks Canada team in 2011. The first is a bifacial knife from the Paleoeskimo period, dating to around 2500 BP. The second is a historic bone wound pin. I chose these artefacts for a few reasons. I thought they had simple shapes and colours and would be relatively easy to model. I also wanted to see if there were any differences between scanning lithic and bone materials, especially since the bone artefact was heavily weathered. I made one mistake, though: these artefacts are both pretty big. In retrospect I should have chosen artefacts that were smaller, so that I had smaller files to work with and more options in terms of which scanning devices to use.


Photograph of biface (Catalogue No. 130X232 B26:1)
Photograph of wound pin (Catalogue No. 130X232 A11:1)

I conducted my scanning using the facilities at the Sustainable Archaeology (SA) Centre in London, Ontario. This is a new physical and digital curation centre developed through a collaboration between Western University and McMaster University. Luckily for me, they have 5 digital scanners on hand and a few people who know how to use them. I entered the SA with no prior experience in 3D scanning or modelling. I learned how to use the scanners from a very helpful manual created by the SA staff, and I relied heavily on online tutorials to work various software programs. In particular, I tried to follow the methodology made available by the Idaho Virtualization Lab with their Virtual Zooarchaeology of the Arctic Project. These manuals and tutorials were extremely helpful, though I very often relied on the SA staff to bail me out when I ran into problems. (Thanks, guys).

From here, I’ll go through the steps I took to make the 3D models.

Step 1: Scanning the Artefacts

To scan my artefacts, I used the macro 3D3 white-light scanner at the SA. This type of scanner projects light and then captures images using two synchronized cameras. The texture and colour of artefacts are captured using a 16 megapixel Canon Rebel T3i camera. A turntable rotates the artefact so that images can be taken from all angles at each position. I used FlexScan 3D software to capture these images. I learned the hard way that the cameras and turntable must be calibrated carefully and properly with each use. I could have saved myself a whole day of blood, sweat and tears by following these instructions more carefully. That one’s my bad. Don’t make my mistakes. More detailed information about calibration, how it works, and why it’s important is available from the good folks at 3D3 Solutions.

Once I calibrated everything, I placed the artefacts on the turntable and then adjusted the cameras to make sure everything was in view, in focus, and with an appropriate amount of light exposure from the projector. After a good test scan, I scanned the objects using the cameras and the turntable. I set the cameras to capture 12 images per rotation, so that the turntable would rotate 30 degrees between each image.

My workspace: the setup of the 3D3 digital scanner at the Sustainable Archaeology Centre
After the cameras collected images of a single rotation, FlexScan 3D converted the 2D images into 3D objects and attempted to line up the 12 scans created from the images. If the cameras and turntable are properly calibrated, the result should look something like this:

A set of 12 raw images taken of the wound pin artefact
Before I had correctly calibrated the cameras and table, my images looked more like this:

12 images of the wound pin artefact, scattered because I had improperly calibrated the cameras

You’ll notice that the cameras also picked up the foam blocks used to elevate the artefacts. I cleaned up this “junk” in FlexScan 3D simply by highlighting and deleting unwanted parts of the scans. Then I selected all of the images, aligned them, and combined them into one scan file. I repeated this process several more times: the object must be placed in several positions in order to capture all sides and faces of the artefact, and the number of positions depends on the complexity of the object. I used 12 positions (144 images) for the biface, and 6 positions (72 images) for the wound pin.
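The arithmetic behind those counts is simple, but it's worth keeping straight when planning a scan session. Here's a quick sketch in plain Python (the helper name is my own, not anything from FlexScan 3D):

```python
def scan_plan(positions, images_per_rotation=12):
    """Angular step between captures and total image count for a session."""
    step_degrees = 360 / images_per_rotation
    return {"step_degrees": step_degrees,
            "total_images": positions * images_per_rotation}

print(scan_plan(12))  # biface: 30.0 degree steps, 144 images total
print(scan_plan(6))   # wound pin: 30.0 degree steps, 72 images total
```

Fewer images per rotation means bigger angular jumps between captures, which makes alignment harder, so the 12-image/30-degree setting is a reasonable balance for these artefacts.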

After I was satisfied that I had good images of all parts of the items, I manually arranged all of my scans roughly into the shape of the original artefact. I found the 3D navigation widget to be especially helpful in this step. I then aligned and combined these scans into one scan image.

At this point it’s important to be sure that there are no large holes in the project. If there are large gaps, additional scans can be taken in different positions to fill them. When finished, I finalized the images and had FlexScan 3D fill the remaining holes in the model. The program can export in a variety of formats. I chose Object (.OBJ) files, as these are the most widely used in processing software. Here are some screenshots of these files, still in FlexScan 3D.

OBJ files of wound pin and biface artefacts in FlexScan 3D

They’re certainly not perfect. The biface in particular was very difficult to scan and align around its edges. Since it is a lithic tool, it has very sharp and complex edges with many faces. I took many scans of the edges from different angles, and was still unable to fill all of the holes along them. The wound pin was an easier scan in some respects. The only hole was a small one on the very tip, and otherwise the model was complete. However, the wound pin scans required more tedious and time-consuming cleanup. I did not anticipate this issue, but the texture of the weathered bone artefact was almost identical to that of the foam blocks it was sitting on. It was often difficult to distinguish the border between the wound pin and the foam, and the edges of my clean-up scans were often pretty sloppy. I eventually started using a cardboard box to elevate the wound pin, and this made it much easier to differentiate between the materials in cleanup.
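Since .OBJ files come up repeatedly from here on, it's worth noting that they're just plain text: `v` lines list vertex coordinates and `f` lines list faces by (1-based) vertex index. A minimal sketch that writes a single triangle and counts its elements back the way an importer might (this is hypothetical illustration, not FlexScan 3D output):

```python
# Write a minimal Wavefront .OBJ file: one triangle.
# 'v' lines are vertex coordinates; 'f' lines index into them (1-based).
obj_text = """v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

with open("triangle.obj", "w") as fh:
    fh.write(obj_text)

# Count vertices and faces, the way a viewer's importer might.
verts = sum(1 for line in obj_text.splitlines() if line.startswith("v "))
faces = sum(1 for line in obj_text.splitlines() if line.startswith("f "))
print(verts, faces)  # 3 1
```

Real scans have hundreds of thousands of these lines, which is why file size becomes an issue later in the workflow.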

Step 2: Processing the Scans

The 3D scanners and FlexScan 3D only create rough 3D models. After these are created, they can be further processed and edited to smooth surfaces and fill holes using a variety of software. I used MeshLab for this: I imported the OBJ files into MeshLab and used the Fill Holes tool to clean them up. The final product can then be exported once again for further processing.

The biface artefact in MeshLab, after holes were filled
Step 3: Texturing

.OBJ models contain the spatial information of an artefact, but not its colour or texture. I tried out a few different ways to add these to my models.

Texture using FlexScan 3D

Firstly, I used the texture information that was captured in the original images taken in FlexScan 3D. Here’s a more comprehensive video tutorial that explains the process. Unfortunately for me, the scanner I was using does not have colour capabilities, so my textures came out monochrome. All other 3D3 white-light scanners at the SA do have colour, but my artefacts were too big to use with those scanners. There are certainly advantages to this method of texturing models. It’s relatively quick, as it doesn’t take any extra time beyond what was already spent in FlexScan 3D. It has one major disadvantage, though: the textured files must be saved as Polygon (.PLY) files. These are different from .OBJ files, and they can’t be opened with many types of processing software. In fact, the only software that the SA has for these files was unavailable while I was working, so I could not export my textured .PLY files into any other software. I did find a site, 3dFile.io, where I could upload my .PLY files for use online. Click here to view and rotate the file for the wound pin artefact.

Monochrome .PLY textures of the wound pin artefact
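Part of what makes .PLY different from .OBJ is its self-describing header, which declares element counts and per-vertex properties, including the colour channels that .OBJ lacks. A minimal parser sketch for an ASCII header (an assumed example; scanner output may well be in PLY's binary variant):

```python
# A hand-written ASCII PLY header with per-vertex colour properties.
ply_header = """ply
format ascii 1.0
element vertex 4
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
element face 2
property list uchar int vertex_indices
end_header
"""

def element_counts(header_text):
    """Read 'element <name> <count>' declarations from a PLY header."""
    counts = {}
    for line in header_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "element":
            counts[parts[1]] = int(parts[2])
        if line.strip() == "end_header":
            break
    return counts

print(element_counts(ply_header))  # {'vertex': 4, 'face': 2}
```

Because the colour lives in per-vertex properties like these, converting a textured .PLY to .OBJ isn't a simple rename, which is part of why the format mismatch was such a roadblock.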
Texture using ZBrush

Texture can also be applied to 3D models using 2D photographs, in a process that is reminiscent to me of georeferencing aerial photographs in mapping. To do this, I used ZBrush. ZBrush is a very powerful and sophisticated piece of animation software, and as such it is not easy to use. I spent several frustrating hours just learning the basic controls of the software. For me, this was actually the most difficult and time-consuming aspect of the project. There are many tutorials that can help out in this process. I benefited greatly from this video tutorial that goes through the basics of navigating ZBrush. After learning what the hell I was doing (and to be fair, I still don’t really know), I ended up having a really good time with this.

For this part of the project, I took photographs of the biface using my own camera (a 12.1 megapixel Canon SX 130 IS). This camera is certainly not professional quality, and I thought about trying to track down a better camera for the purposes of this project. However, I also found that so much detail was lost in the models when they were exported out of ZBrush and into their final form for the web that it would have been an unnecessary effort for this small project.

To make the textured models, I followed this video tutorial. I imported my .OBJ biface model into ZBrush as a tool, and I imported my photographs of the biface as documents. I then aligned the 3D tool and the photographs, and I used the ‘Projection’ brush to copy the information from the photograph onto the 3D tool. As ZBrush is projecting a 2D image onto a 3D surface, this process faces the same projection issues that arise in mapping. It’s best to frequently move and rotate the 3D model to make sure you get an accurate projection onto it.

Projecting photographs onto the biface OBJ model
Once the projection was completed, I made a texture map to export. To do this, I followed much of this tutorial. After crashing ZBrush several times, I learned that my files were much too large (700,000+ vertices) to create any texture maps, or really do anything at all with the models. So I reduced the number of polygons in the .OBJ file using the Geometry function in ZBrush until I only had around 15,000 vertices. As I came to learn, the loss of overall quality is inevitable in this process. After my project was reduced in size, I created a UV texture map, and then exported that as a Bitmap.

Texture for biface artefact, "unfolded" and flattened into a 2D Bitmap
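To get a feel for how aggressive that reduction was, 700,000 vertices down to roughly 15,000 keeps only about 2% of the mesh. A back-of-the-envelope sketch (plain Python; ZBrush's actual decimation algorithm is of course far more sophisticated than this bookkeeping):

```python
def reduction_stats(original_vertices, target_vertices):
    """How much of the mesh survives a decimation pass?"""
    kept = target_vertices / original_vertices
    return {"kept_fraction": kept,
            "removed_percent": round((1 - kept) * 100, 1)}

print(reduction_stats(700_000, 15_000))
# roughly 2% of vertices kept, ~97.9% removed
```

Seen that way, the loss of surface detail in the exported texture maps is no surprise.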

Step 4: Applying Texture to the 3D Model

In order to apply the new texture map to my 3D model, I used Adobe Photoshop CS6 Extended. The folks at the SA usually use Maya for this part of the project, but for whatever reason the Maya gods were not looking fondly at my project and I had to look elsewhere. Photoshop actually worked really well for my purposes. Its new 3D capabilities (only in Extended versions) are not nearly as extensive as those in Maya or other animation software, but it had everything I needed for this project and I found it much simpler and more intuitive to use. This really great site has several tutorials that detail the 3D texture capabilities in Photoshop. To apply the texture map to my 3D scan, I simply imported both my .OBJ file and my Bitmap texture map and then merged down the Bitmap on top of the .OBJ file. After merging, my project looked like this.

Biface artefact in Adobe Photoshop CS6 Extended
There are a few different ways to export the finished files from Photoshop. As my ultimate goal was to make a 3D PDF using Adobe Acrobat, I knew that I needed a Universal 3D file (.U3D). It’s a little bit trickier to export this file type. I followed this tutorial and only exported the 3D layer as a .U3D rather than exporting the whole file.

Step 5: Creating a 3D PDF

The easiest way to make an .OBJ 3D model available for sharing and viewing online is through a 3D PDF. In order for others to view it, they only need Adobe Reader. Creating a 3D PDF is actually really easy in comparison to the rest of this project: I imported my .U3D files into Adobe Acrobat X, and it created the 3D PDF. Simple! This isn’t the only way to make a 3D PDF, of course. There are other programs that can create 3D PDFs or otherwise convert files to the 3D PDF format. Here’s a Wikipedia page that lists some programs, though I should note that I haven’t verified any of these so I can’t speak for their utility.

Step 6: Sharing the 3D Models Online

Making 3D PDFs was easy, but sharing them online was a headache. Ideally, I wanted to embed the 3D PDFs into this blog post, but it didn’t work out. Embedding 3D PDFs is definitely possible on some websites; basically, it is possible in most cases where it is also possible to embed a document. Unfortunately, it is not possible to embed documents within Blogger. Instead I published my files through my colleague Laura Kelvin’s website Archaeology Stories, which is designed using Wix. On this post you can see the embedded .PLY files that we uploaded using 3dFile.io. We uploaded the 3D PDF of the biface using the Wix design features. The 3D PDF could not be embedded, but it can still be interacted with after clicking on a linked icon. I’m very pleased with how this worked out, and I’m optimistic about embedding my 3D models into websites in the future. Laura will be creating a website for our archaeology thesis projects later. Even though she probably won’t use Wix again, I feel that we both gained some valuable experience as we fumbled to get these models to appear on the website.

Note- if these models on Archaeology Stories won't rotate for you, update your Adobe Reader.

Additionally, I uploaded my 3D PDFs of both artefacts into Google Drive. Click here for the biface, and here for the (untextured) wound pin. The files can't be rotated through the Google Drive viewer, but I made them public so that anyone can download them to their own computers and open them with Adobe Reader.


Biface 3D PDF in Adobe Reader


My Finished Products

You’ll probably immediately notice that my finished models aren’t perfect. The biface has holes in it. I took hundreds of images at a variety of positions and I used hole-filling software, but I clearly couldn’t fill the holes along the edge of the tool. For future projects, I hope to be able to try out other processing software (GeoMagic, maybe) and further research 3D scanning techniques to make sure there are minimal holes after the first round of scanning. The shape of the wound pin looks alright, in my opinion. I didn’t have time to create a coloured texture map for it using ZBrush, but I’m pleased with the .PLY file. I didn’t find any major differences between scanning the two objects, other than the previously-mentioned issues involved in the texture of the wound pin against the texture of the supporting foam blocks. Overall, I’m happy with these as a starter project, though they would certainly need work if they were to be part of a larger archaeology project.

Potential of 3D Models in Community and Public Archaeology

After completing this project, I still feel that 3D models can be a useful tool in community or public archaeology projects. The 3D models seem pretty cool and engaging to me, though I could be biased about that. More importantly, the models are representations of artefacts that can be easily shared online with a descendant community, local community, or the general public. I made my models available to anyone, though privacy settings could easily be altered on websites or in Google Drive to restrict access for sensitive materials.

That being said, there are some potential issues that should be considered if taking on a project like this one. Firstly, both the scanners and software are very expensive to purchase, and very few facilities make them available for archaeologists to use. I was lucky that the SA was willing to let me use their scanners and computers; without their support this project wouldn’t have been possible.

Secondly, this is not a quick or easy process. The learning curves for the scanners and equipment are quite steep. I scheduled myself the equivalent of two working days just to learn what I was doing, and I feel that it wasn’t enough. I spent 6 total working days (about 48 hours) using the facilities at the SA, and another 10 hours on my own watching tutorials and struggling to get my models online. With nearly 60 hours total, I only came out with two 3D PDF models, one of which isn’t textured, and one textured .PLY file. Even now that I have learned the skills to work the hardware and software, it is still a slow process. I estimate that making a complete model (scanning, processing, texturing in ZBrush, and exporting into a 3D PDF) would take me a minimum of about 4 hours for a small, relatively simple artefact. Using texture through FlexScan 3D and exporting .PLY files would be significantly quicker, though still likely 2-3 hours per artefact. A project that consists of one person scanning dozens or hundreds of artefacts could take months and would be incredibly expensive if the archaeologist were paying to use scanning facilities.
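Scaling those per-artefact times up makes the labour problem concrete. A rough estimator, using my numbers above as assumed inputs (real projects will obviously vary):

```python
def project_hours(n_artefacts, hours_each=4, hours_per_day=8):
    """Estimate total working time for a scanning project."""
    hours = n_artefacts * hours_each
    return hours, hours / hours_per_day

hours, days = project_hours(100, hours_each=4)
print(hours, days)  # 400 hours, i.e. 50 eight-hour working days
```

Even at the quicker 2-3 hours per artefact for .PLY exports, a 100-artefact assemblage is still a month or more of full-time work before facility costs are counted.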

Finally, there are some issues associated with making 3D models available online. 3D PDFs are the most versatile way to share models right now, but they can be difficult to embed into websites. This step isn’t strictly necessary, but embedding 3D objects into websites still has a certain appeal when making archaeology sexy and accessible. Hosting 3D PDF or .PLY files on free hosting sites has worked well in this project, though I would question the sustainability of using a quick and dirty free program in a large, long-term project. I foresee major issues if the hosting site goes down or imposes restrictions on file sizes. My .PLY files were nearly over the 10 MB cap, and they were not large or complex models. The other issue is that in order to make 3D PDFs, the original models must be significantly reduced in overall quality. Given these issues, it might be more prudent to look for less costly and time-consuming methods of creating models to go online.


One way that archaeologists have found cheaper and easier 3D technology is through Autodesk 123D Catch. In particular, Matthew Betts at the Canadian Museum of Civilization has had success in digitizing archaeological sites and materials using 123D Catch. This is a free and relatively easy way to make 3D models using just photographs. Even better, it can be used as a mobile app for on-the-spot engagement and collaboration with communities. Additionally, the program can very easily upload a video of a 3D model to YouTube, which solves some of the problems of making 3D materials available online.

I’ve been playing around with this software over the last few weeks, but I haven’t really been successful yet. It’s a fairly simple program, but there is certainly still a learning curve. I haven’t given up, but I also don’t have anything to show for my trial and error on 123D Catch on this blog post.

My failed attempt #4 at 123D Catch. 

Final Thoughts

To conclude this long, long post, I really feel that 3D technology can realistically have a place within community archaeology projects. It’s not a task to be taken lightly, due to the steep learning curve and the significant investments in time and resources needed to follow through on a project. Given these challenges, free programs like 123D Catch are likely a more realistic option for the average archaeologist. Though the learning curves are a serious challenge, I should reiterate that I started this project with no prior experience. New hardware and software are simple enough to use that a background in programming or animation isn’t necessary to take on a project like this (though it would certainly help).

I was happy for this opportunity to try out some cool technology, and overall I ended up having a good time doing it. I think this technology has great potential for archaeology, in research or community and public engagement, and I hope to be able to explore more of the applications (and potential issues) of using 3D technology in archaeology in the future.


Note: Artefacts recovered from Aulavik National Park, NWT:
Hodgetts, Lisa, Edward Eastaugh, and Jenna Coutinho
2013 Report on the artifacts recovered from 130X232, Mercy Bay, Aulavik National Park, NWT. Report prepared for the Parks Canada Western Arctic Field Unit, Inuvik, NWT.



Making 3D Scans of Artefacts for Community and Public Archaeology

Part I- Background to 3D Technology in Archaeology

For my final project for our Digital Archaeology and Digital Heritage course, I decided to try my hand at 3D scanning using the facilities at the Sustainable Archaeology Centre in London, Ontario. As an avid fan of cool technology, this seemed like a great and flawless plan. I mean, it’s better than writing a paper, right? Well, it’s been a good experience overall. It was sometimes frustrating. Some days, after sitting in front of a computer screen for hours trying to learn complex software, I ended up ripping my hair out and asking why I didn’t just write a damn paper. But through the process I learned some good skills that I’ll (hopefully) be putting to use later for my thesis project. Since my journey was a lot of trial and error (and more error), I’m making this blog post to run through my methods. I want to be as transparent as possible so that others can use my methods (or not use them) if doing a similar project in the future. This blog post (Part I) will give a background on 3D scanning in archaeology and community archaeology, and Part II will give a more detailed overview of my methods for creating digital 3D scans and some of my results.


A preview of my 3D model of a bifacial knife

In recent years, many archaeologists have begun to use 3D technology within their research and public dissemination. In research, 3D scans provide detailed representations of artefacts that can be replicated and manipulated from anywhere in the world. Researchers can “handle” artefacts, making observations and measurements without physically travelling to a collection. 3D scans, after completion, also have the potential to save a lot of time for curation staff and researchers working within curation facilities. These are some of the main ideas at work in the Sustainable Archaeology Centre, which is currently working its way up to providing these services to Ontario archaeology. The Virginia Commonwealth University has a similar project in the works.

For this project, I was more interested in how 3D models of artefacts could be useful in community and public archaeology, or archaeology by the people for the people. As it is, several archaeologists have started to use 3D scans to provide a resource to descendant communities, local communities, and the general public. It’s probably no surprise to anyone reading this that ownership of the past is a major political issue that plays out through archaeological remains. Some scholars have used 3D technology to digitally repatriate artefacts that were removed from a community or a country in past decades and centuries and have not been physically returned. A good large-scale example is the Parthenon Project, which used 3D scanning to digitally reunite the Parthenon, in a short CG animated film, with sculptures that were removed from the site in Athens in the 1800s and have not been (and likely never will be) physically repatriated.

This same concept can play out with smaller communities too. If there is a conflict of interest between archaeologists and communities over the rightful ownership or stewardship of an artefact, 3D scans may be of use. Archaeologists can create and keep a digital copy of the artefact while the original is kept in the community. Similarly, items that cannot be physically repatriated can be copied, and those copies can be given to a community. And 3D models can be recreated with 3D printers too, allowing for physical replicas of artefacts in addition to digital copies.

And let's face it, 3D models are pretty cool. As archaeologists, we might be interested in lists of calliper measurements and typological classifications of notch widths on projectile points, but these don't necessarily translate to the interests of other people. 3D models made available online can allow people to interact with archaeological materials however they like, coming up with their own ideas and interpretations of the artefacts. This has great potential for collaborative research. Mobile 3D devices and apps, such as Autodesk 123D Catch, can even allow archaeologists to make 3D models while in the field with communities. And did I mention how cool 3D printers are? There are so many possibilities with 3D models that are only beginning to be explored in archaeology.

This is not to imply that 3D scans are a miracle fix-all to the issues present in archaeology. In reality, there are many new issues that come up. Are digital copies the same as originals? Are 3D copies subject to copyright? Who has the right to view and use these copies? Is digital repatriation the same as physical repatriation to a descendant or local community? As it turns out, there is a lot of debate surrounding these questions and, not surprisingly, Western philosophies regarding objects and copies can differ wildly from Indigenous philosophies. These are big questions, and I certainly won’t be able to engage with them in this blog post. But they were on my mind while I was doing this work, and I hope they’re in mind for others as well. 3D technology is cool and all, but it’s certainly not void of its own issues in heritage and archaeology.

Additionally, 3D scanning isn’t really all that accessible to the average archaeologist. This is starting to change as technology becomes more affordable and software becomes more user-friendly. As it is, though, the hardware and software needed for scanning are costly, and there is a steep learning curve involved in using them. New questions arise with this technology: is 3D scanning worth the costs in time, labour, and money? Is this something that is actually realistic for a community archaeology project? Can scans be created in a reasonable amount of time, and can they then be made accessible to the general public? These are more of the questions that I hope to approach with this project. I certainly don’t have all of the answers, but I feel like I’ve come across some good insights about 3D scanning in archaeology through this project.

Make sure to check out Part II of this post to see more cool things!