After selecting the technical components and doing some research into the form of the sculpture, the next step is to start building it.
Keeping in mind the MVP principles that I set out at the beginning of the project, the first step was to duplicate some static demonstrations of A-Frame and publish them to the web.
As I’d already started a GitHub repository to share my research code and the eventual final project, it made sense to use GitHub Pages to host static webpages for my first experiments with A-Frame.
Following the GitHub Pages tutorial for creating a Project Site from scratch, I created a new file called ‘index.html’, copied in the index.html code from the A-Frame Boilerplate project, and then enabled the GitHub Pages option in the settings of the project.
Now that I have a demo file going, the next step is to get a static landscape rendering in A-Frame. I had previously selected two components that looked like they would be a good fit: the HeightGrid component and the Terrain Model Component.
Starting with the HeightGrid component, I needed to host more than one GitHub page, so I made the docs/ folder the root of my GitHub Pages site and created a new index.html there, along with an attempt at the basic HeightGrid demo.
After downloading both complete projects, I copied them into my docs/ folder. You can view the list of demos here. Unfortunately, both of the HeightGrid demos are currently broken, but all the Terrain Model Component demos work fine.
As well as working on the technical details of how I’m going to make a piece of Art as big as India, I’ve been doing research on what the form of the augmented sculpture should be, as well as thinking about how users should interact with it.
I love the idea of being able to see the land below our feet above our heads – especially in urban areas where the topology of the land is often obscured by the built environment. This is something that I think about in London all the time – especially compared to where I grew up in Wales.
I’m interested in the sculpture being a time based metaphor for humanity’s effect on the world around it – pushing and pulling it away from its original form. I want to start with the sculpture being an exact copy of the land below it, but the act of interaction and observation changing it in real time.
Should the sculpture resemble a crystalline structure (fixed, hard, unmoving) or a material (billowing, supple, constantly flowing)? Jon Harris suggested looking at Khadi, both a material and a movement.
I wanted to see what other people had been making in the world of sculpture, especially at larger scales.
The most recent realised project by Christo was titled “The Floating Piers”.
I was also excited to discover their current project in progress, “Over the River”.
I found the following quote particularly interesting:
What was involved for Christo and Jeanne-Claude in their multifarious temporary wrappings was the physical experience of enveloping, protecting and caring – as they themselves put it – “the quality of love and care that humans show for things which are not made for eternity.”
I want to enable users to express their care for India in a similar way, but in an augmented rather than purely physical space. The success of the project will rest upon whether I can engage users as successfully as Christo did.
Desire Lines: The public art of Tess Jaray catalogues the public installations of the painter and printmaker, Tess Jaray. These works in brick and stone are on a different scale to her studio-based output, but are concerned with the same things – repetition, balance and order – or the disruption of those things. Her use of materials is something that I want to echo in this project – perhaps I could use the geology of India as a starting point for a colouring or textural scheme?
I also found the title of the book inspirational. Once you know what a Desire Line (or path) is, you can’t help finding them everywhere. I hope they will emerge in my sculpture too.
Atlas of Novel Tectonics is by Reiser + Umemoto, a design practice based in New York. It’s a beautifully designed book, with inserted colour prints that fold away elegantly to reveal captions. The written content is equally elegant, with several concepts or definitions jumping out at me as being particularly relevant to this project.
In the foreword, Sanford Kwinter discusses the philosophical interest around diagrams – both external and internal ones. I’m hoping that users will create many local diagrams of both kinds with this project, starting from the initial geological facsimile.
Their idea of Fineness:
Fineness breaks down the gross fabric of buildings into finer and finer parts such that it can register small differences while maintaining an overall coherence. The fineness argument is encapsulated in the densities of a sponge: too fine and it acts like a homogeneous solid; too coarse and it becomes constrained by its members.
The Fineness of this sculpture is going to be critical aesthetically and technically.
Intensive and Extensive differences, drawing on a quote from Manuel DeLanda’s “Intensive Science and Virtual Philosophy”:
If we divide a volume of matter into two equal halves we end up with two volumes, each half the extent of the original one. Intensive properties on the other hand are properties such as temperature or pressure, which cannot be divided.
This sculpture will clearly have extensive properties of area and volume, but what are its intensive (or gradient) properties? Colour? Pressure? Density? Speed? Elasticity? Duration?
Classical Body/Impersonal Individuation. In this part of the book, the authors use the example of a skateboard ramp to rail against Anthropocentrism:
[A skateboard ramp] is an intervening technology that belongs to a totally different pattern of order upon which the human works. The ramp augments the body; it is an extension of the body via the vehicle of the skate, but it does not represent it.
Such an extension of performance belongs to a larger class of singularities known as impersonal individuations. Like the sunset or a time of day, these intense and unique conditions emerge out of the material world. They have manifold meanings projected onto them, but they are not the product of meaning.
I want this sculpture to be like the sunset in this way.
Matter/force relationships – here the authors discuss how to make the relationship between matter and force visible in varying scales. As part of this, they discuss using Voronoi patterns to express forces on a variety of scales:
from Structure to Space to Program.
This sparked a thought: I could design the interaction around the sculpture so that each touch creates a new singularity – however, this would be incompatible with the initial landscape, which is based around a Cartesian grid of heights.
I still have the second half of the Atlas to read, but that will have to wait for a later post.
The tutorials seem like a logical place to start, so let’s begin with “Hello p5.js”.
The video is super fun! Lauren and Dan make a great team. Most exciting of all is that the video is interactive – allowing you to click and play with the tutorial as it runs. Starting with shape drawing, quickly moving on to flocking behaviours, and then connecting to web services (such as the wind direction in New York) to control those flocking circles, the tutorial gives you a great quick overview of the platform – including a great demo of generating sound with the mouse.
Being open source, the code for the interactive video is even available here.
The next tutorial on the list was “Get Started”. It starts with an instruction to download the complete version of p5.js. After doing that, I added the files to my Git repository for the project in a folder called “p5_js_GetStarted” and pushed them to GitHub using the following commands:
git pull
git add .
git commit -m "Adding p5.js Get Started tutorial"
git push
After some circle-drawing code, I added the following to introduce interaction:
function setup() {
  // create a 640 x 480 pixel canvas to draw into
  createCanvas(640, 480);
}

function draw() {
  // fill black while the mouse is pressed, white otherwise
  if (mouseIsPressed) {
    fill(0);
  } else {
    fill(255);
  }
  // draw an 80 pixel circle that follows the mouse
  ellipse(mouseX, mouseY, 80, 80);
}
This tutorial was hosted on GitHub and deals with a basic “Hello World” program, creating a canvas to draw upon, drawing into different HTML containers, working with native HTML5 canvas functionality, mouse and touch interaction, asynchronous calls and file loading – where my code started to break. After doing some digging I realised that I needed to run a local server in order to load my lovely “cat.jpg” file.
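For reference, the kind of image-loading code that breaks without a server looks something like this – a minimal sketch of my own, where only the “cat.jpg” filename comes from the tutorial:
var img;

function preload() {
  // loadImage fetches the file over HTTP, which is why a local server is needed
  img = loadImage('cat.jpg');
}

function setup() {
  createCanvas(640, 480);
}

function draw() {
  // draw the loaded image at the top-left of the canvas
  image(img, 0, 0);
}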
As I knew I’d be using it for another project, I decided to try Node.js. After some more digging, I realised that it would be best to install it via Homebrew, which I already had on my computer. As I’ve just updated to OS 10.12 aka “macOS Sierra”, I had to do some updating of Homebrew, and found a nasty issue along the way. After following the prescribed fix (which you won’t need if you do a fresh install of Homebrew), I was ready to install Node.js by typing:
brew install node
In my terminal. I could then run the following two commands to verify that everything had installed correctly:
node -v
npm -v
The versions that I had installed were 6.7.0 and 3.10.7 respectively. NPM stands for Node Package Manager – it installs other Node.js packages for you, either locally inside a project folder or globally on your system.
After installing the Node.js http-server package, I ran it in my HelloWorld folder, pointed my web browser to http://localhost:8080/ and saw my sketch being served correctly.
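For anyone following along, the install-and-run steps went something like this (a sketch of the commands rather than an exact transcript – http-server installed globally via npm):
npm install -g http-server
cd HelloWorld
http-server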
Hurrah! I uploaded it to my GitHub Pages account, so you can try it for yourself.
After trying out making a Loading Screen and some Instantiation / namespace experiments, I’m ready to move on to the next tutorial, “p5.js and Processing”.
I’ve been looking for a framework that will allow me to start prototyping this project – trying to get to a Minimum Viable Product (MVP) as soon as possible.
The above image illustrates the concept well. Henrik Kniberg (the creator of the above image) goes into more detail about the idea in this blog post.
Quoting from the original proposal for the project:
The aim of this project is to create a digital sculpture as big as India itself, accessible from anywhere in India by anyone with a smartphone – as well as people outside of India via the web.
It’s therefore essential to me that this work is viewable by as many people as possible. The web seems the logical place to do that! In addition, as the sculpture is three-dimensional, I know that I need a framework that can render that kind of content in the browser. It turns out that things have moved on dramatically since the days of VRML, with three.js now the de facto library for displaying 3D content in a standards-compliant way.
Virtual Reality (VR) is a very hot topic of research and development at the moment, and with that in mind I wanted to make sure that whatever system I developed to display the sculpture would be compatible with as many VR platforms as possible. A-Frame is the perfect framework to do that:
It’s built on top of three.js
It’s compatible with not only the latest VR headsets, but also lower cost platforms such as Google Cardboard – and even degrades gracefully if someone views A-Frame content without a VR headset (such as via a desktop computer or mobile phone)
Digging deeper into the A-Frame Frequently Asked Questions (FAQ) led me to discover that it uses the entity-component-system pattern – something my brother drew my attention to some time ago, and which I had also encountered via the Unity platform. As the FAQ states:
A-Frame is based on an entity-component-system pattern (ECS), a pattern common in game development that emphasizes composability over inheritance:
An entity is a general-purpose object that inherently does and renders nothing.
A component is a reusable module that is plugged into entities in order to provide appearance, behavior, and/or functionality. They are plug-and-play for objects.
A system provides global scope, services, and management to classes of components.
ECS lets us build complex entities with rich behavior by plugging different reusable components into the sockets on the entity. Contrast this to traditional inheritance where if we want to extend an object, we would have to manually create a new class to do so.
ECS grants developers the key to permissionless innovation. Developers can write, share, and plug in components that extend new features or iterate upon existing features.
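To get a feel for what that looks like in practice, here is a minimal sketch of registering a custom A-Frame component – the ‘spin’ name and behaviour are purely illustrative, not something from this project:
// a hypothetical 'spin' component that rotates its entity every frame
AFRAME.registerComponent('spin', {
  schema: {
    speed: { type: 'number', default: 10 } // degrees per second
  },
  tick: function (time, timeDelta) {
    var rotation = this.el.getAttribute('rotation');
    rotation.y += this.data.speed * (timeDelta / 1000);
    this.el.setAttribute('rotation', rotation);
  }
});
The component would then be plugged into any entity as an HTML attribute, for example <a-box spin="speed: 45">.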
Users in the same physical location will be able to see each other’s interactions with the surface – they’ll be able to raise or lower their part of the surface in real time simply by clicking their mouse or touching their mobile phone screen – making a new digital landscape above the physical geography of India.
I realised I already had the answer – the topology of India itself would be the starting point for the sculpture, which would then be altered by the actions of users in real-time.
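As a very rough sketch of what that starting point could look like in data terms – every name and number below is a placeholder rather than a decided design:
// a Cartesian grid of heights, seeded from elevation data and nudged by users
var GRID_SIZE = 256; // illustrative resolution

// placeholder elevation source – flat until real data for India is wired in
function lookupElevation(row, col) {
  return 0;
}

var heights = [];
for (var row = 0; row < GRID_SIZE; row++) {
  heights[row] = [];
  for (var col = 0; col < GRID_SIZE; col++) {
    heights[row][col] = lookupElevation(row, col);
  }
}

// each click or touch raises (or lowers) the nearest grid cell
function interact(row, col, amount) {
  heights[row][col] += amount;
}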
After finding an open source Javascript 3D world viewer and re-visiting the brilliant OpenStreetMap, I realised that it was important to stick to the MVP principle and see what was available in the existing A-Frame/three.js ecosystem.
One of the best things about three.js is its wealth of examples – allowing users to quickly see the possibilities of the platform. Whilst browsing the examples, I found this “Geometry Terrain” and this “Dynamic Terrain”, which reminded me of my original proposal, but inverted – i.e. in the skies above.
There are a variety of articles on how to create three.js visualisations of real-world landscapes, but was there anything up to date and available for A-Frame?
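For my own notes, the basic pattern in those three.js terrain examples is to displace the vertices of a plane using height data – roughly like the sketch below, where getHeight is a stand-in for whatever elevation source ends up being used and the sizes are arbitrary:
// displace a plane's vertices into a terrain (assumes three.js is loaded as THREE)
var geometry = new THREE.PlaneBufferGeometry(100, 100, 127, 127);
var position = geometry.attributes.position;

// stand-in height function – to be replaced with real elevation data
function getHeight(x, y) {
  return Math.sin(x * 0.1) * Math.cos(y * 0.1) * 5;
}

for (var i = 0; i < position.count; i++) {
  // the plane lies in the XY plane, so the height goes into Z
  position.setZ(i, getHeight(position.getX(i), position.getY(i)));
}
geometry.computeVertexNormals();

var terrain = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0x88aa66 }));
terrain.rotation.x = -Math.PI / 2; // lay the plane flat as a ground surface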
VR is all very well, but this project requires something different – Augmented Reality (AR). AR has gathered mainstream attention recently with the release of Pokémon Go as well as attention in the world of tech startups – especially around Magic Leap.
Creating mobile AR with A-Frame would require combining live video from a mobile phone camera with three-dimensional imagery superimposed on top. As luck would have it, being “lazy like a fox” and browsing through the A-Frame blog led me to discover this project by Dietrich Ayala:
Unfortunately, the example is only compatible with Firefox on Android, but it’s certainly a great place to start.
This got me thinking not only about the number of mobile phone users in India, but also about how many of those phones are smartphones and how many of those run Android.
There are 1,034,253,328 mobile phone users in India, with 29.8% owning smartphones and 97% of those using Android. I make that a potential audience of 298,961,267! Naturally, not all of those will meet the minimum hardware requirements for A-Frame.
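For the record, that estimate is just the straight multiplication of those three figures:
// rough potential audience from the figures above
var mobileUsers = 1034253328;
var smartphoneShare = 0.298;
var androidShare = 0.97;
var potentialAudience = Math.round(mobileUsers * smartphoneShare * androidShare);
// potentialAudience ≈ 298,961,267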
With the choice made on the front end for the MVP, I started to think about the backend of the system. My priority was a backend that had a thriving community, would allow real-time interaction and could deal with geospatial data in a sensible way.
I started thinking about making a system using Python that would allow for real-time interactions:
The idea was to use WebSockets to update viewers’ landscapes as other users interacted with them.
After taking some advice from Szymon, Igor and Patricio, I decided to plump for an MVP built in Node.js, as it was already being used for several A-Frame components and has a thriving community around it. Both Patricio and Szymon recommended that I use PostGIS as a database platform, as it was designed to deal with geospatial data and relations within the database engine.
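As a very rough sketch of the WebSocket idea on top of Node.js – using the ws package purely as an illustration, with none of these choices final:
// minimal relay: broadcast each user's interaction to every other connected viewer
// (uses the 'ws' package – install with: npm install ws)
var WebSocket = require('ws');
var server = new WebSocket.Server({ port: 8081 });

server.on('connection', function (socket) {
  socket.on('message', function (message) {
    // e.g. message could be '{"row": 12, "col": 34, "delta": 0.5}'
    server.clients.forEach(function (client) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});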
In summary, the first pass at this system will consist of an A-Frame (and therefore three.js) front end, a Node.js backend and a PostGIS database.
I’ve started work on Reactickles 3 with Wendy Keay-Bright. Here is a film of a previous version:
Reactickles 1 and 2 were interactive applications written in Director, combined with a kit of paper-based activities, and were used around the world. I’m going to convert the old software to the web using p5.js and an as-yet-unselected backend. I’ve created a GitHub repository for the project here.
Our aim is to use the web to make the software easier to share, and to make it easier for people to share their experiences of using it with others.
I’ll be working on this first stage of the project in the open over the next month and a half or so, before going for testing with audiences in India. I’ll then find out if it’s been selected to go to completion in 2017.
I’m very excited about documenting the whole process here on my website as well as sharing all my code on the Github for the project.
The title of the project is “A piece of Art as big as India”. It’s an augmented reality sculpture that will be as large as the subcontinent itself – perhaps even larger. It will be accessible via the WWW, SMS and mobile devices. Here is an early visualisation of how the sculpture might look:
I was listening to the Interstellar soundtrack recently – great work from Roger Sayer, who is resident at the Temple Church in London. Watch the making-of on the DVD if you can – great characterisation of the breath of all those artificial voices coming together, and some really proper bass.
James Bentley of Hellicar&Lewis and I are planning to attend the upcoming Kinecthack London event.
We’ll be working on an upcoming open source installation for the Circulate project – “Remembering The Future”. We’ll be challenging young people from five areas of London to come up with costumes and architecture of the future.
We are planning to build an app that allows for Kinect V2 skeletons to be augmented with 3D content in real time – and we need help to make it happen! The app will then be installed in five locations over the summer of 2015, with content-making workshops preceding each installation.
As mentioned before, the project will be completely open sourced – our aim is to make a complete workflow from 3D asset creation to augmenting the skeleton and fixed background in real time. We’ll be using openFrameworks throughout.