Interpreting a virtual reality art history experience for blind and partially-sighted users
Virtual Reality (VR) and other 3D digital environments are firmly back in fashion for the museum world — the latest wave of the technology has produced a variety of creative responses from the cultural heritage sector and its suppliers. Institutions have often used the technology as a new interpretation medium, to deliver explanations and narratives in an immersive environment. Others have used VR to go “beyond the walls” of their buildings (just as they have with previous examples of emerging media technologies), with a plethora of museum tours or digitally reproduced historic buildings made available for virtual visits. A few organisations have made use of the technology as a means to explore a form of reconstructive archaeology — recreating buildings or situations using 3D modelling tools to engage with some of the material and aesthetic choices of the original construction. A good example of this is the VR reconstruction of an Iroquois longhouse by the Museum of Ontario Archaeology (MOA), informed by the museum’s archaeological evidence and ethno-historic records, in collaboration with a PhD researcher from Western University.
In a similar vein, our company, the museum-specialist digital media development agency Surface Impression, was commissioned by the Royal Collection Trust in London, UK to create a virtual reconstruction of the private art galleries of the notorious Stuart-era king of England, Scotland and Ireland, Charles I (1600-1649). Charles was an enthusiastic collector of Renaissance art and purchased a huge number of works, including those by artists such as Titian, Raphael, Leonardo da Vinci, Anthony van Dyck and Rembrandt van Rijn. The works that Charles owned were distributed among the many royal palaces, but he took the unusual step (for the time) of creating a series of dedicated spaces at the Palace of Whitehall in London — “privy” (private) galleries purely for the enjoyment of the art. In these, the “star” works were to be found, including a room entirely populated by paintings by Titian, one of Charles’ favourite artists.
At the end of the second English Civil War (1648-1649), Charles I was beheaded and his collection broken up and sold off. Following the restoration of the monarchy in 1660, his son Charles II tried to recover the collection, but many works were beyond his reach at that point in time. The Palace of Whitehall also came to a bad end itself: most of the complex burnt down in 1698.
There are two essential documents that provide evidence about the collection, both inventories of works. The first, the Van der Doort Inventory, was a catalogue produced by Abraham Van der Doort, a Dutchman commissioned to look after the King’s collection. The second, known as the Sale Inventory, was essentially a fire-sale catalogue, drawn up after the King’s execution to facilitate the liquidation of assets. The contents of these inventories provide details on the titles, artists, measurements, locations and contents of the art, and they have been a crucial historical source for art historians for many decades. Royal Collection Trust curator Niko Munz had been working through the inventories for years, compiling them into spreadsheets to be used as data sources and locating the pieces that survive to this day, distributed around the world (some works found their way back to the Royal Collection; others are in institutions such as the Louvre, Prado and Hermitage, as well as private collections).
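The kind of record those spreadsheets capture could be sketched as a simple data structure. This is purely illustrative: the field names and the sample entry below are assumptions for the sake of the example, not the curator’s actual schema or data.

```typescript
// Illustrative shape of one inventory record. Field names and the sample
// values are hypothetical, not the project's real spreadsheet columns.
interface InventoryEntry {
  title: string;
  artist: string;
  widthCm?: number;       // measurements, where the inventories record them
  heightCm?: number;
  room: string;           // location within the Palace of Whitehall
  currentHolder?: string; // where the surviving work is today, if known
}

// A hypothetical entry for one of the Titians mentioned in this article.
const example: InventoryEntry = {
  title: "St. Margaret",
  artist: "Titian",
  room: "Privy Gallery",
};
```

Structured records like this are what make it possible to drive a virtual reconstruction from the historical evidence rather than placing works by hand.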
Combining this data with archaeological studies of the Palace of Whitehall and evidence from contemporary Tudor and Stuart period great houses, we were able to create virtual reconstructions of three key Privy Rooms displaying the grandest works in Charles’ collection.
Recreating the rooms became more like real-world gallery design and exhibition planning as the project went on. The inventories informed the order and sometimes the position of paintings relative to doors and windows, and archaeology provided the dimensions of the rooms. However, many decisions came down to the interpretation of the project’s creative practitioners — the hanging height of a painting, the position of a chimney breast relative to the roof’s ridge, and the colour of the wood in the wall panelling were all subjective decisions. The works themselves, however, are mostly still in existence and could be relied upon for dimensions, content and sometimes even framing. The VR project brought the pieces back together for the first time in over 350 years.
Throughout the development process of the project, although excited by the subject matter and the use of the technology, we were mindful of the accessibility barriers posed by VR. In the 17 years since Surface Impression was founded, we have been active proponents of digital accessibility for culture and heritage, and have accumulated a reputation for expertise in the field — especially through our work with disability charities and disability arts practitioners. VR poses an interesting challenge for accessibility: on the one hand, it is a way to bring experiences to people who might not be able to physically visit a location; on the other, it is a medium that presents barriers to those who have mobility impairments, and especially to those who are blind or visually impaired. We decided to undertake a small piece of research and development work to investigate how the Charles I “lost collection” could be made more accessible to people with visual impairments. An evaluation of the Iroquois Longhouse VR, kindly facilitated by the MOA in London, Ontario, helped to generate further insights into access requirements.
To assist with the research and development of the project, we engaged VocalEyes, a London-based charity that provides audio description for blind and partially-sighted people. Their main work is providing live audio description at theatrical performances throughout the UK, but in recent years they have branched out into audio describing museum tours and providing pre-recorded material for handheld visitor guides and apps.
Taking one of the virtual rooms, and working with the Royal Collection Trust curator, a VocalEyes audio describer interpreted both the spatial arrangement of the Privy Gallery and the paintings on display. In recordings of about a minute per painting, the description encompasses both visual aspects and interpretation in a rich, engaging style. For example, Titian’s “St. Margaret” is described as follows:
“St. Margaret, in a knee-length mint-green dress with a scooped neck over a fine white chemise, runs towards us through a rocky landscape, both arms swinging out to the left. Her fair-skinned left leg stretches forwards. Her unseen right leg is presumably directly behind as she steps over a defeated lizard-like dragon that curls across the bottom of the picture. [...]
In the bottom right hand corner lies the top half of a human skeleton. The top left quarter of the painting shows a black cloud over a city across an expanse of murky water. A tall campanile suggests Venice. This realism is in contrast to the atmospheric swirl of cloud that threatens to engulf it. Much as St. Margaret’s finely-painted face — suffused with concern and horror — and the coiled tresses of her chestnut hair and the fine drapery of her dress contrast with the more roughly rendered beast she’s overcome.”
As a delivery mechanism for the 3D environment, we chose Sketchfab — an online platform created for sharing 3D models in much the same way that YouTube was created for sharing video. Sketchfab offers a capable range of tools and, most importantly, works on most devices, from mobile phones to desktop computers. As part of our experiment, we wanted to use a platform that smaller museums could conceivably also use for 3D projects. However, applying the audio to the 3D environment of the Privy Gallery was challenging: Sketchfab supports audio, but its built-in playback could not be triggered in a way that suited the subject matter and the audio description. To deal with this, we used Sketchfab’s application programming interface (API) to code an alternative method that triggers audio for each painting (plus the room itself), and we also added some large, high-contrast buttons to make navigation through the room easier.
To iterate on the development of our product, we tested the reconstruction with partially-sighted participants who were members of Blatchington Court Trust, a local charity for blind and visually impaired people. Our testers were very engaged — trying out the Google Cardboard headset enthusiastically and quickly feeding back both positives and areas that needed improvement. With their help, we were able to rapidly improve navigation, but we also learnt that impaired sight does not mean impaired engagement — our testers were not art aficionados by any means, but they were all very interested in the experience and wanted to go further than what was on offer.
Based on this project’s experience, we are now able to refine both the content and the technical implementation of audio description in VR environments, and we’re keen to undertake work to improve other aspects of accessibility in the medium (for example, for those who have mobility impairments). We’re seeking, and setting up, new projects through which to roll out these developments to museum audiences — particularly in Canada, now that Surface Impression has established an office in Toronto. Above all, the experience has taught us that blind and partially-sighted users of VR are, given the opportunity, as enthusiastic an audience as any other. One more reason to create technology and media that are inclusive for everybody.
This museological report was made possible through funding from the Government of Canada. It was also published in the September/October 2018 issue of Muse Magazine.