Two of my ongoing interests have been art-science and the use of data from experiments. Data (like knowledge) in my view is dimensional. There are multiple ways data can be looked at: it can be a number, an image or a sound.
Data was used in The Park Speaks and Wai, where live environmental data determined the audio heard in the gallery. Position data was used in Haiku Robots as live input for generating dynamic word lists.
This page begins with projects where I have taken science data images, scanned them to audio, and made a soundtrack. The soundtrack is then visualised using video animation software. What is revealed are unique views and sounds of the universe in which we live.
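For readers curious about the mechanics, here is a minimal sketch of one spectrogram-style way to scan an image into sound: each pixel column becomes a slice of audio, with row position mapped to pitch and brightness to loudness. The software actually used in these projects is not specified here, so the file names and parameters below are illustrative only.

```python
# A minimal sketch of scanning a science data image into audio, column by
# column. Assumes a grayscale image; all file names and values are illustrative.
import numpy as np
from PIL import Image
import wave

SAMPLE_RATE = 44100
SLICE_SECONDS = 0.05           # audio generated per image column
F_LOW, F_HIGH = 110.0, 3520.0  # frequency range mapped onto image rows

img = np.asarray(Image.open("data_image.png").convert("L"), dtype=np.float64) / 255.0
n_rows, n_cols = img.shape
freqs = np.geomspace(F_HIGH, F_LOW, n_rows)  # top of image = high pitch

t = np.arange(int(SAMPLE_RATE * SLICE_SECONDS)) / SAMPLE_RATE
phase = np.zeros(n_rows)
slices = []
for col in range(n_cols):
    amps = img[:, col]
    # One sine oscillator per image row, weighted by that pixel's brightness.
    s = (amps[:, None] * np.sin(phase[:, None] + 2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    phase += 2 * np.pi * freqs * SLICE_SECONDS  # keep oscillators continuous
    slices.append(s)

signal = np.concatenate(slices)
signal /= np.abs(signal).max() + 1e-9  # normalise to avoid clipping
with wave.open("soundtrack.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```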

This image is based on experiments with prisms in the studio. We need prisms so we can see the spectrum, all of which can be expressed as data. Which makes me wonder what this might sound like…
Art-science: Carbon nanotubes
This video is the result of an awareness of Indigenous, scientific, cosmological and environmental aspects and how these might interact in a single work. Thanks to Darren Robert Terama Ward (putorino tane), Dynamicell (fire), Ayen Deng of the Nanotechnology Institute Texas (carbon nanotube experimental data image), NASA and ESA (comet images). The data images are converted to sound, then a soundtrack is compiled. The soundtrack is then converted to video animation. And voila, this is the result.
Art-science: DNA Whakapapa
This video is based on my DNA, which was converted into a soundscape by Josia Jordan. I then converted his soundscape into animated video. This work was selected for exhibition at ISEA 2020 in Montreal.
Art-science: Moonlight
“Moonlight” involved a high-resolution image of a painting on the theme of the phases of the moon by the Māori artist WharehokaSmith, who has recently risen to prominence in New Zealand. The relationship of the sun, the moon and Earth is of particular interest and concern to Indigenous peoples, and WharehokaSmith provided an abstract image where the phases are discernible in a way that is also reminiscent of traditional kōwhaiwhai (painted rafter decoration) forms. The painting was the core visual data used to generate the soundtrack. The soundtrack was then used to generate visual animations. This work was selected for exhibition at ISEA in Gwangju, Korea.
Art-science: Pion Decay
This work is based on imagery produced as part of quantum experiments. The images provide a window into the quantum world, and I converted them to audio. Before the era of colliders such as the Large Hadron Collider, bubble chambers were used. This involved running the experiment and capturing images on a photographic plate at the bottom of the chamber. The main image used here is from a pion decay study. Audio from the planet Saturn and the Sun has been mixed with sound derived from the quantum experiment. This work was selected for exhibition at the Diffrazione Festival in Florence.
Data: Haiku Robots

Commissioned by Puke Ariki Museum, Haiku Robots was an interrogation of the idea that language might be the result of emergence within interconnected systems. Collaborators included Daryl Eggar, Julian Priest and Andrew Hornblow. The small-scale interconnected system consisted of two autonomous robotic cars inside an area marked out by eight cylinders, together with a project computer. One system was the robotic cars; a second was the electronic fence; and the third was the project computer and customised phone software which converted numbers into words.
The robots were very basic: they could go straight ahead, reverse, turn left or turn right. If they detected an infrared pulse, they stopped. Each robot had an infrared LED that could both send and receive infrared pulses, which prevented them from colliding with each other. The electronic fence was marked out by eight pillars, each with a ring of infrared LEDs at the base, set to send only. If a robotic car came close to a pillar, it detected the electronic fence and stopped. If the LED pulse signal was strong enough, the number associated with that pillar was sent to the project computer. If the signal was weak, the robot simply backed up and turned either left or right.
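As an illustration only, here is a minimal sketch of that per-robot decision logic. The Pulse and Robot classes are hypothetical stand-ins for the real hardware and firmware, which are not published, and the signal threshold is illustrative.

```python
# A sketch of the per-robot decision loop described above; all classes,
# method names and thresholds are hypothetical stand-ins.
import random
from dataclasses import dataclass
from typing import Optional

STRONG_SIGNAL = 200  # illustrative threshold for "close to a pillar"

@dataclass
class Pulse:
    source: str                        # "pillar" or "robot"
    strength: int
    pillar_number: Optional[int] = None

class Robot:
    def read_ir(self) -> Optional[Pulse]:
        return None                    # stand-in for the infrared LED reading

    def drive(self, action: str) -> None:
        print("drive:", action)        # stand-in for the motor controls

    def send_to_computer(self, number: int) -> None:
        print("radio -> project computer:", number)  # stand-in for the radio link

def step(robot: Robot) -> None:
    pulse = robot.read_ir()
    if pulse is None:
        robot.drive("forward")         # nothing detected: keep going
        return
    robot.drive("stop")                # any infrared pulse means stop
    if pulse.source == "pillar" and pulse.strength >= STRONG_SIGNAL:
        robot.send_to_computer(pulse.pillar_number)
    robot.drive("reverse")             # back away from the pillar or other robot
    robot.drive(random.choice(["left", "right"]))

step(Robot())  # with the stub above, the robot simply drives forward
```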
The reason for the eight pillars, each with its own assigned number, can be found on your mobile phone keypad: 2 = ABC, 3 = DEF, 4 = GHI and so on. Consequently, over time, a string of numbers was sent to the project computer via radio. Customised open-source phone software then converted the string of numbers into letters. Letter by letter, words were created, generating word lists live and dynamically. The output was then checked for islands of coherence in the list of emergent terms. Based on the words strictly in their order of emergence, we received “red is my ace bird”, which at least has a correct sentence structure; the vaguely philosophical “god hugs yes fern”; the somewhat poetic “cry owl so scare yeah”; and the stupefying, perhaps humorous, “no hash blimp end fly our joys oxide ha”. Having commenced with a text project, the next work utilising a small-scale interconnected system moved to spoken word.
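The conversion step can be sketched as follows, assuming a predictive approach in the manner of early mobile-phone text entry (a digit string is matched against a word list). The actual customised open-source software is not reproduced here, and the word list is illustrative.

```python
# A minimal sketch of keypad-style digits-to-words conversion; the word list
# is illustrative, not the project's dictionary.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def word_to_digits(word: str) -> str:
    """Return the keypad digit string for a word, e.g. 'red' -> '733'."""
    digit_of = {ch: d for d, letters in KEYPAD.items() for ch in letters}
    return "".join(digit_of[ch] for ch in word.lower())

def candidates(digits: str, wordlist: list[str]) -> list[str]:
    """All dictionary words whose keypad encoding matches the digit string."""
    return [w for w in wordlist if word_to_digits(w) == digits]

wordlist = ["red", "see", "ace", "bird", "god", "fern"]  # illustrative
print(candidates("733", wordlist))  # ['red', 'see'] (both encode as 733)
```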
Data: The Park Speaks

Also commissioned by Puke Ariki Museum, The Park Speaks utilised a system built by Andrew Hornblow, Julian Priest and Adrian Soundy. Live data readings from the environment went up to the project website and controlled which of 140 audio files were heard in the installation in the museum. For The Park Speaks, what was heard were phrases spoken by an automated voice.
One data source was a people counter, and the aligned sentences referred to how many people were walking past the counter, from “I’m feeling alone” when numbers were low to “lots of children and families in the park today” when numbers were high. UV values resulted in sentences expressing varying states of sunlight and dark, and temperature values had similar related expressions. This provided a continuously changing diet of phrases.
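A minimal sketch of this kind of mapping is below. The thresholds and file names are illustrative; only the two quoted phrases above come from the project itself, and the real system's 140 audio files and website plumbing are not reproduced here.

```python
# An illustrative mapping from live sensor readings to pre-recorded phrases;
# thresholds and file names are assumptions, not the project's actual values.
def pick_phrase(people_count: int, uv_index: float, temperature_c: float) -> str:
    if people_count < 3:
        return "im_feeling_alone.wav"
    if people_count > 50:
        return "lots_of_children_and_families.wav"
    if uv_index > 8:
        return "bright_sun.wav"        # illustrative file name
    if temperature_c < 5:
        return "cold_morning.wav"      # illustrative file name
    return "ordinary_day.wav"          # illustrative fallback

print(pick_phrase(people_count=2, uv_index=4, temperature_c=12))
# -> im_feeling_alone.wav
```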
The system was used for several projects, including Wai in Albuquerque (where the sound was made half by Darren Robert Terama Ward and half by Dineh/Navajo musician Andrew Thomas), and The River Speaks and Kauri Flow, which used sounds made by a wide range of Indigenous collaborators, curated by Nina Czegledy.
