This page begins with images created using AI, to explore unseeable phenomena of the universe such as the first few moments after the Big Bang and Calabi-Yau manifolds. These images are intended as intuitive visual guides to comprehending some of what scientists are writing about.
In other projects on this page, I have taken science data images, scanned them to audio, and made a soundtrack. The soundtrack is then visualised using video animation software. What is revealed are unique views and sounds of the universe in which we live.
Live environmental data was used in The Park Speaks and Wai to determine the audio heard in the museum or gallery. In Haiku Robots, positional data was used as live input to generate word lists dynamically.

This image is based on experiments with prisms in the studio. We need prisms so we can see the spectrum, all of which can be expressed as data. This makes me wonder what this might sound like…
First light of the universe – intuitive AI image

First light
So if it were possible to go back to the birth of the universe, what would it look like? Maybe like this? The bright yellow bit in the middle is perhaps comparable to the period of expansion, here envisioned in white and yellow light. Much of the matter is undifferentiated, and the imagery enticing.
First light with dark energy – intuitive AI image

First light with dark energy
The idea with this image is to go back to the time of the first light in the universe, with dark energy visualised. I’m hoping you see what I mean about generating images of things it’s just not possible to see, and how this can assist our intuitive sense. What might dark energy look like, after all? And how might it interact with ordinary matter? That’s a very hard question to answer. In this speculative image, the dark energy forms the loose structure of a spiral. Thinking and feeling that dark energy interacts with our lives is common to Moana (Polynesian) peoples and a part of my cosmology.
Calabi-Yau manifold – intuitive AI image

String Theory
String Theory requires a ten-dimensional universe, four dimensions of which are the spacetime (x, y, z and time) we inhabit. The other six are rolled up extremely tightly, well beyond what we can ever see, with Oxford University researchers suggesting the manifolds exist at around the Planck scale. Calabi-Yau manifolds are said to compactify these remaining six dimensions. Given these manifolds will never be seen, they make good candidates for visual and intuitive exploration using AI. Above you see an image I have selected from many different visual appearances, one which strikes a connective visual chord. Here is a link to a technical video, for those technically inclined.
Please note that String Theory is not the only theory of the fundamental nature of the Universe (the so-called Theory of Everything). Another is Lambda CDM. Both utilise CPT symmetry, but while standard String Theory rolls its six extra dimensions into a manifold, Lambda CDM involves 36 zero-dimension fields. To navigate the complex world of competing research centres, I leave the controversies to the researchers and focus on what both sides agree upon: in this instance, CPT symmetry and more than four dimensions or fields.
Art-science: Light Seen and Unseen
Gamma rays, infrared to ultraviolet wavelengths
This video is produced by collecting data images from science experiments: light ranging from gamma rays (which cannot be seen) through infrared wavelengths and visible light to ultraviolet wavelengths. The images are then converted to sound and a soundtrack is made, which I convert into animated video. This work was a finalist in the Lumen Awards of 2020 after being selected for exhibition in Gwangju, Korea, at ISEA 2019. It took many hours to create, both to build and to shape a sense of rise to crescendo while keeping within the subject of light seen and unseen.
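For readers curious about the mechanics, here is a minimal sketch in Python of one way an image can be scanned to audio, assuming a grayscale image whose rows are mapped to audible frequencies and whose columns are read left to right. The file names, durations and frequency band are illustrative assumptions; the process used for the actual work may differ.

```python
# A minimal sketch of image-to-audio "scanning": image rows become audible
# frequencies, and columns are read left to right through time.
# Parameters and file names are illustrative, not from the original work.
import wave

import numpy as np
from PIL import Image

SAMPLE_RATE = 44100                   # audio sample rate in Hz
COLUMN_DURATION = 0.05                # seconds of sound per image column
FREQ_LOW, FREQ_HIGH = 200.0, 4000.0   # audible band the image rows map onto

def sonify(image_path: str, out_path: str = "scan.wav") -> None:
    # Load the science image and reduce it to grayscale brightness values in [0, 1].
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float) / 255.0
    height, width = img.shape

    # Each image row is assigned a fixed audible frequency (top of image = high pitch).
    freqs = np.linspace(FREQ_HIGH, FREQ_LOW, height)
    samples_per_col = int(SAMPLE_RATE * COLUMN_DURATION)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    chunks = []
    for x in range(width):
        # Brightness of each pixel in the column sets the loudness of its row's tone.
        column = img[:, x]
        tone = (column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(tone / max(height, 1))

    # Normalise and write out a mono 16-bit WAV file.
    audio = np.concatenate(chunks)
    audio = np.int16(audio / (np.abs(audio).max() + 1e-9) * 32767)
    with wave.open(out_path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(audio.tobytes())
```

The resulting WAV file can then be layered into a soundtrack and fed to video animation software, as described above.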
Art Science: Pion Decay
Before the Large Hadron Collider, there were Bubble Chambers
This work is based on imagery produced as part of quantum experiments. These images provide a window into the quantum world, and I converted them to audio. Before the era of colliders such as the Large Hadron Collider, Bubble Chambers were used. This involved running the experiment and taking images using a photographic plate on the bottom of the Chamber. The main image used here is from a pion decay study at CERN, with the core image taken by Serge Dallier. Audio from the planet Saturn and the Sun has been mixed with sound derived from the quantum experiment. This work was selected for exhibition in Florence at the Diffrazioni Festival.
Data: Haiku Robots

Live generated word lists made by robotic cars
Commissioned by Puke Ariki Museum, Haiku Robots was an interrogation of the idea that language might be the result of emergence from within interconnected or integrated systems. Such systems appear in contexts ranging from indigenous belief systems to nature and even business. Collaborators included Daryl Eggar, Julian Priest and Andrew Hornblow. The small-scale interconnected system consisted of two autonomous robotic cars inside an area marked out by eight cylinders, plus a project computer. One system was the robotic cars; a second was the electronic fence; and the third was the project computer and customised phone software which converted numbers into words.
The robots were very basic: they could go straight ahead, reverse, turn left or turn right. If they detected an infrared pulse, they stopped. Each robot had an infrared LED that could both send and receive infrared pulses, which prevented the robots from colliding with each other. The electronic fence was marked out by the eight pillars, each with a ring of infrared LEDs at the base set to send only. If a robotic car came close to a pillar, it detected the electronic fence and stopped. If the LED pulse signal was strong enough, the number associated with that pillar was sent to the project computer. If the signal was weak, the robot simply backed up and turned either left or right.
The reason for the eight pillars, each with their own assigned number, can be found in your mobile phone keypad: 2 = ABC, 3 = DEF, 4 = GHI and so on. Consequently, over time, a string of numbers was sent to the project computer via radio. Customised open-source phone software then converted the string of numbers into letters. Letter by letter, words were created, generating word lists live and dynamically. The output was then checked for islands of coherence in the list of emergent terms. Taking the words strictly in their order of emergence, we received “red is my ace bird”, which at least has a correct sentence structure; the vaguely philosophical “god hugs yes fern”; the somewhat poetic “cry owl so scare yeah”; and the stupefying, perhaps humorous “no hash blimp end fly our joys oxide ha”. Having commenced with a text project, I moved to spoken word for the next work utilising a small-scale interconnected system.
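As a rough illustration of that conversion step, here is a minimal sketch in Python, assuming a multi-tap keypad scheme in which repeated detections of the same pillar number cycle through that key's letters. The actual open-source phone software used in Haiku Robots may decode the number stream differently.

```python
# A minimal sketch of turning a stream of pillar numbers into letters,
# assuming a multi-tap phone-keypad scheme (repeats of the same digit
# cycle through that key's letters). This is an illustrative assumption,
# not the exact decoding used by the project software.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def decode(digits: str) -> str:
    """Convert a run of pillar numbers into letters, e.g. '22999' -> 'BY'."""
    letters = []
    i = 0
    while i < len(digits):
        digit = digits[i]
        # Count how many times this pillar number repeats in a row.
        run = 1
        while i + run < len(digits) and digits[i + run] == digit:
            run += 1
        keys = KEYPAD.get(digit, "")
        if keys:
            # Cycle through the key's letters based on the repeat count.
            letters.append(keys[(run - 1) % len(keys)])
        i += run
    return "".join(letters)

print(decode("22999"))  # two 2s = 'B', three 9s = 'Y' -> 'BY'
```

Letters produced this way accumulate into words over time, which is how the live word lists emerge from the robots' wandering.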
Data: The Park Speaks

Live environmental data generates audio in the gallery
Also commissioned by Puke Ariki Museum, The Park Speaks utilised a system built by Andrew Hornblow, Julian Priest and Adrian Soundy. Live data readings from the environment went up to the project website and controlled which of 140 audio files were heard in the installation in the museum. For The Park Speaks, what was heard were phrases spoken by an automated voice.
One data source was a people counter, and the aligned sentences referred to various states of numbers walking past the counter, from “I’m feeling alone” when numbers were low to “lots of children and families in the park today” when numbers were high. UV values resulted in sentences expressing varying states of sunlight and dark. Temperature values had similar related expressions. This provided a continuously changing diet of phrases.
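As a rough sketch of this kind of mapping, here is a short Python example, assuming simple thresholds on the sensor readings. The thresholds and most of the phrases are illustrative stand-ins, not the actual values or the full set of 140 audio files used in the installation.

```python
# A minimal sketch of mapping live sensor readings to spoken phrases,
# in the spirit of The Park Speaks. Thresholds and most phrases here are
# illustrative assumptions, not the installation's real configuration.
def choose_phrases(people_count: int, uv_index: float, temperature_c: float) -> list[str]:
    phrases = []

    # People counter: low and high visitor numbers trigger different sentences.
    if people_count < 3:
        phrases.append("I'm feeling alone")
    elif people_count > 30:
        phrases.append("lots of children and families in the park today")

    # UV readings map to expressions of sunlight and dark.
    if uv_index < 1:
        phrases.append("it is dark and quiet here")
    elif uv_index > 8:
        phrases.append("the sun is very strong this afternoon")

    # Temperature values have similar related expressions.
    if temperature_c < 5:
        phrases.append("the air in the park is cold")
    elif temperature_c > 25:
        phrases.append("a warm day in the park")

    return phrases

# Example: a quiet, overcast, mild moment in the park.
print(choose_phrases(people_count=1, uv_index=0.5, temperature_c=12))
```

In the installation itself, each selected phrase corresponded to one of the pre-recorded audio files played back in the gallery.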
The system was used for several projects, including Wai in Albuquerque (where the sound was made half by Darren Robert Terama Ward and half by Dineh/Navajo musician Andrew Thomas), The River Speaks, and Kauri Flow, which used sounds made by a wide range of Indigenous collaborators, curated by Nina Czegledy.
