
Google Pixel, Making a Screen Recording of Dietrich Ayala’s AR Demo, Google Cardboard partial win

Today my Google Pixel arrived in the post, along with a pair of Google Cardboard viewers. I’ve also got a Google Daydream on order, but it looks like it will arrive too late to be useful for user testing.

First impressions were great, it’s certainly a very capable smartphone. After successfully running the A-Frame AR demo by Dietrich Ayala on Firefox for Android, I wanted to be able to screen record it so that the A-Frame community could see the framework’s performance on Google’s flagship phone.

I downloaded Android File Transfer as well as Android Studio to enable me to connect directly to the phone from my Apple MacBook Pro. Helpfully, Google provides a command line utility called screenrecord that allows for direct screen recording. It runs via the Android Debug Bridge (ADB), which needs to be enabled in the following way:

To access these settings, open the Developer options in the system Settings. On Android 4.2 and higher, the Developer options screen is hidden by default. To make it visible, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options at the bottom.

After pressing Build number seven times(!), I was able to enable debugging mode on the new Pixel phone. This tutorial was handy for describing how to actually run ADB on my computer: I had to start a terminal in /Users/<user>/Library/Android/sdk/platform-tools/ and then run the following command:

./adb devices

I actually had to run it twice: once to get permission to connect to the Pixel, and once again to actually get it to connect and report the serial number of the Pixel.

After playing around in the command line for some time, I was finding it very difficult to actually save a file to the phone’s local filesystem:

ForceMacbookProRetina:platform-tools joel$ pwd
/Users/joel/Library/Android/sdk/platform-tools
ForceMacbookProRetina:platform-tools joel$ ./adb devices
List of devices attached
FA69Y******* device
ForceMacbookProRetina:platform-tools joel$ ./adb shell screenrecord demo.mp4
Unable to open 'demo.mp4': Read-only file system
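Looking back, the problem was the output path rather than the tool: the shell starts in a read-only location, but the phone's internal storage is writable. A sketch of what should work on a stock Pixel (the /sdcard path is an assumption about the device's storage layout, not something I verified at the time) is to record to internal storage and then pull the file back to the Mac:

./adb shell screenrecord /sdcard/demo.mp4
./adb pull /sdcard/demo.mp4 .

(Press Ctrl-C to stop the recording before pulling the file.)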

After doing some more searching, I found a great app called AndroidTool by Morten Just that enabled me to record screen captures (and even convert them to animated GIFs) with just one click. The results can be seen below:

The final step was to try running the demo with Google Cardboard. I’m pleased to say that the 3D effect worked well – but unfortunately, the Cardboard housing physically covers the forward-facing camera, and VR mode disables the live video anyway. I managed to get Firefox running fullscreen in non-VR mode by using an add-on, but it would be great if VR mode didn’t disable the camera.

2016_10_24_firstcardboard

Below is a screenshot of the demo working in Firefox for Android:

2016_11_01_aframearonfirefoxonandroid

But when you enable VR mode by pressing the icon in the bottom right hand corner of the screen, the live camera background is no longer displayed:

2016_11_01_aframearonfirefoxonandroidinvrmodecamerafail

 


Streamlining development of the project with Express, nodemon, Pug, Less, Gulp and Browsersync

In order to be able to develop efficiently, I’ve realised that I need a local web server running on my own computer, instead of having to constantly upload code to my GitHub Pages server.

As I’d already selected node.js as my backend, it made sense to use that on my local machine too. I found The Art of Node by Max Ogden a great introduction to what node.js is and what it is useful for, namely:

Node.js is an open source project designed to help you write JavaScript programs that talk to networks, file systems or other I/O (input/output, reading/writing) sources. That’s it! It is just a simple and stable I/O platform that you are encouraged to build modules on top of.

Quoting further:

Node isn’t either of the following:

  • A web framework (like Rails or Django, though it can be used to make such things)
  • A programming language (it uses JavaScript but node isn’t its own language)

Instead, node is somewhere in the middle. It is:

  • Designed to be simple and therefore relatively easy to understand and use
  • Useful for I/O based programs that need to be fast and/or handle lots of connections

This is exactly what I want to do – I need something simple that is going to be fast and handle lots of connections – potentially up to 300,000,000 at once!

I installed node.js on my laptop via Homebrew.

In order not to have to write HTML and CSS completely manually, I asked my friend Ross Cairns for some tips on what would be useful, and he gave me a rapid tutorial in the following platforms:

  • Express – a web framework for node that enables you to write web applications – which is what I’ll need to enable users to load my sculpture and alter it themselves within a mobile webpage.
  • nodemon – tool that reloads your node server automatically when it detects any changes in your code.
  • Pug (formerly known as Jade) – a templating engine for node that enables you to write HTML in a simpler way, without having to worry about closing tags and other complications. I also found a Pug template that used A-Frame, which was very encouraging.
  • Less – a pre-processor for CSS that makes it much easier to use.
  • Gulp – a tool for automation that enables the automatic use of tools like Pug, Less and many others.
  • Browsersync – a tool that automatically reloads your web browser when it detects changes in your source code.

By default, Node.js also installs Node Package Manager (npm) which can be used to install further node programs.

I want to be able to install lots of node programs for this project, and doing it by hand can get unwieldy, so on Ross’s advice I ran the

npm init

command in order to create a package.json file in my project directory that lists all the node programs I install, making the project easier to manage and share in the future.

After that I installed nodemon and gulp-cli globally:

npm install nodemon --global
npm install gulp-cli --global

And then Express, Pug, Less, Gulp and BrowserSync locally:

npm install express --save
npm install gulp --save
npm install gulp-pug --save
npm install gulp-less --save
npm install browser-sync --save
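With those saved, package.json ends up looking roughly like the sketch below. The project name is simply what I answered at the npm init prompts, and the version ranges shown are illustrative rather than the exact ones npm recorded:

{
  "name": "a-piece-of-art-as-big-as-india",
  "version": "1.0.0",
  "dependencies": {
    "browser-sync": "^2.0.0",
    "express": "^4.0.0",
    "gulp": "^3.9.0",
    "gulp-less": "^3.0.0",
    "gulp-pug": "^3.0.0"
  }
}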

Then I had to create the simplest Express app possible – a completely static one, by creating an “app.server.js” in the root of my project, with the following content:

// Modules
var express = require('express');

// Express
var app = express();

// Our middleware
app.use(express.static('docs')); // also the GitHub Pages root; everything is going to be static to begin with

// Binding to a port...
app.listen(3000, function () {
  console.log('A piece of Art as big as India Express app listening on port 3000.');
});

I could then test the Express app by running the following command:

node app.server.js

and accessing http://localhost:3000 to test my new node server. Everything worked as if I was accessing the GitHub pages I had previously been working with.

In order to have something for Gulp to automate, I then created a .less file and .pug file in a newly created src folder (with less and pug folders within) that would duplicate the Pug template I had found earlier:

body {
   background: white;
}

style.less

doctype html
html
  head
    meta(charset='utf-8')
    title Hello, World! &bull; A-Frame, made via Pug and Less and Gulp
    meta(name='description', content='Hello, World! • A-Frame')
    script(src='https://aframe.io/releases/0.3.2/aframe.min.js')
    link(href='style.css', rel='stylesheet', media='all')
  body
    a-scene
      a-box(position='-1 0.5 -3' rotation='0 45 0' color='#4CC3D9')
      a-sphere(position='0 1.25 -5' radius='1.25' color='#EF2D5E')
      a-cylinder(position='1 0.75 -3' radius='0.5' height='1.5' color='#FFC65D')
      a-plane(position='0 0 -4' rotation='-90 0 0' width='4' height='4' color='#7BC8A4')
      a-sky(color='#ECECEC')

aFrameBoilerPlateGeneratedViaLessAndPug.pug

Now that I had some files to generate from, I could create a gulpfile.js in the root of my project in order to automate the process.

// Modules
var gulp = require('gulp');
var pug = require('gulp-pug');
var less = require('gulp-less');
var browserSync = require('browser-sync').create();

// Tasks
gulp.task('default', ['pug', 'less']);

gulp.task('pug', function () {
  return gulp.src('./src/pug/**/*.pug')
    .pipe(pug({pretty: true}))
    .pipe(gulp.dest('./docs/'));
});

gulp.task('less', function () {
  return gulp.src('./src/less/**/*.less')
    .pipe(less())
    .pipe(gulp.dest('./docs/'));
});

// Watching
gulp.task('watch', function () {
  browserSync.init({
    port: 4000, // where Browsersync is served
    proxy: 'http://localhost:3000/', // what we are proxying
    ui: {port: 4001}, // where the Browsersync UI lives
    browser: [] // empty array of browsers, so nothing opens automatically
  });

  gulp.watch('./src/pug/**/*.pug', ['pug'])
    .on('change', browserSync.reload);

  gulp.watch('./src/less/**/*.less', ['less'])
    .on('change', browserSync.reload);
});

By running the following commands in two Terminal windows, I can write code locally and see the changes instantaneously in a browser running on my own computer.

nodemon app.server.js
gulp && gulp watch

I’ve pushed all these changes to the GitHub repository for the project, and you can see the generated html file here; it’s identical to the file I created manually before.


Making an equirectangular panoramic image

Right at the beginning of the project, I requested a panoramic image from the team at the British Council in Delhi, so that I could construct a demonstration app for user testing in India that didn’t require real time camera data. They sent me back the following image:

20160926_150409

I dropped it into a simple A-Frame scene, using the Sky component. This was the result:

Panorama fail.

You can try the broken demo for yourself here.

It was obvious that there was something wrong with the translation between an iPhone panorama and what the Sky component required, especially at the top and bottom of the panorama. Exploring the documentation around the Sky component further, I found this detail:

In order to be seamless, images should be equirectangular.

After reading Kevin’s 360-degree Photography Guide, I realised I needed to invest in a device that would make it possible for me to take equirectangular images – the iPhone just wasn’t going to cut it. Luckily, the article recommends a great camera, the Ricoh Theta S.

I snapped the following image today:

2016_10_18_firstpanorama

I dropped it into the same A-Frame scene that I used for the Delhi panorama and it worked perfectly:

Panorama win!

You can try the successful demo for yourself here. I’m going to send the camera out to Delhi soon so that I can capture more usable imagery.
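For reference, the scene used for both panoramas is tiny. A sketch along these lines is all that's needed (the panorama.jpg filename is illustrative rather than the actual asset name in my demos):

<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/0.3.2/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- a-sky wraps an equirectangular image around the inside of a large sphere -->
      <a-sky src="panorama.jpg"></a-sky>
    </a-scene>
  </body>
</html>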


Building a static landscape

After selecting the technical components and doing some research into the form of the sculpture, the next step is to start building the sculpture.

Keeping in mind the MVP principles that I set out at the beginning of the project, the first step was to duplicate some static demonstrations of A-Frame and publish them to the web.

As I’ve already started a GitHub repository to share my research code and the eventual final project, it made sense to use GitHub Pages to create static webpages and begin my experimentation with A-Frame.

Following the GitHub Pages tutorial for creating a Project Site from scratch, I created a new file called ‘index.html’, copied in the index.html code from the A-Frame Boilerplate project, and then enabled the GitHub Pages option in the settings of the project.

Success!

Now that I have a demo file going, the next step is to get a static landscape rendering in A-Frame. I have previously selected two components that looked like they would be a good fit:

  1. HeightGrid Component by andreasplesch
  2. Terrain Model Component by bryik

Starting with the HeightGrid component, I needed to host more than one GitHub page, so I made the root of my GitHub Pages site the docs/ folder, and made a new index.html file as well as an attempt at the basic HeightGrid demo.

After downloading both complete projects, I copied them into my docs/ folder. You can view the list of demos here. Unfortunately, both of the HeightGrid demos are currently broken, but all the Terrain Model Component demos work fine:

Olympic Peninsula Terrain Model Success

Khadi, Christo, Tess Jaray and the Atlas of Novel Tectonics

As well as working on the technical details of how I’m going to make a piece of Art as big as India, I’ve been researching what the form of the augmented sculpture should be and thinking about how users should interact with it.

Lost Rivers of London by Lorain Rutt

I love the idea of being able to see the land below our feet above our heads – especially in urban areas where the topography of the land is often obscured by the built environment. This is something that I think about in London all the time – especially compared to where I grew up in Wales.

whereigrewup
Where I grew up in Wales

I’m interested in the sculpture being a time-based metaphor for humanity’s effect on the world around it – pushing and pulling it away from its original form. I want to start with the sculpture as an exact copy of the land below it, with the acts of interaction and observation changing it in real time.

Should the sculpture resemble a crystalline structure (fixed, hard, unmoving) or a material (billowing, supple, constantly flowing)? Jon Harris suggested looking at Khadi, both a material and a movement.

I wanted to see what other people had been making in the way of sculpture, especially at larger scales.

On a visit to the Serpentine Gallery‘s branch of Koenig Books I found three interesting references:

  1. Christo – Big Air Package
  2. Desire Lines: The public art of Tess Jaray
  3. Atlas of Novel Tectonics
Drawing 2012 in two parts
96 x 28″ and 96 x 42″ (244 x 71 cm and 244 x 106.6 cm)
Pencil, charcoal, pastel, wax crayon, wash and architectural plans
Photo: André Grossmann
© 2012 Christo
Big Air Package, Gasometer Oberhausen, Germany, 2010-13
Photo: Wolfgang Volz
© 2013 Christo

Christo – Big Air Package is both the name of an installation by Christo and the title of a catalogue of projects from 1961-2013. More background information on Christo can be found on Wikipedia.

The most recent realised project by Christo was titled “The Floating Piers“:

The Floating Piers, Lake Iseo, Italy, 2014-16
Photo: Wolfgang Volz
© 2016 Christo

I was also excited to discover their current in-progress project, “Over the River“:

Over the River (Project for Arkansas River, State of Colorado)
Drawing 2010 in two parts
15 x 96″ and 42 x 96″ (38 x 244 cm and 106.6 x 244 cm)
Pencil, pastel, charcoal, wax crayon, enamel paint, wash, fabric sample, hand-drawn topographic map and technical data
Photo: André Grossmann
© 2010 Christo

I found the following quote particularly interesting:

What was involved for Christo and Jeanne-Claude in their multifarious temporary wrappings was the physical experience of enveloping, protecting and caring – as they themselves put it – “the quality of love and care that humans show for things which are not made for eternity.”

I want to enable users to express their care for India in a similar way, but in an augmented rather than purely physical space. The success of the project will rest on whether I can engage users as successfully as Christo does.

Centenary Square Birmingham
© 1992 Tess Jaray

Desire Lines: The public art of Tess Jaray catalogues the public installations of the painter and printmaker, Tess Jaray. These works in brick and stone are on a different scale to her studio-based output, but are concerned with the same things – repetition, balance and order – or the disruption of those things. Her use of materials is something that I want to echo in this project – perhaps I could use the geology of India as a starting point for a colouring or textural scheme?

Desire Line by Alan Stanton.

I also found the title of the book inspirational. Once you know what a Desire Line (or path) is, you can’t help finding them everywhere. I hope they will emerge in my sculpture too.

Atlas of Novel Tectonics is by Reiser + Umemoto, a design practice based in New York. It’s a beautifully designed book, with inserted colour prints that fold away elegantly to reveal captions. The written content is equally elegant, with several concepts or definitions jumping out at me as being particularly relevant to this project.

The foreword by Sanford Kwinter sets out the philosophical interest around diagrams – both external and internal ones. I’m hoping that users will create many local diagrams of both kinds with this project, starting from the initial geological facsimile.

Their idea of Fineness:

Fineness breaks down the gross fabric of buildings into finer and finer parts such that it can register small differences while maintaining an overall coherence. The fineness argument is encapsulated in the densities of a sponge: too fine and it acts like a homogenous solid; too coarse and it becomes constrained by its members.

The Fineness of this sculpture is going to be critical aesthetically and technically.

Intensive and Extensive differences, drawing on a quote from Manuel DeLanda’s “Intensive Science and Virtual Philosophy”:

If we divide a volume of matter into two equal halves we end up with two volumes, each half the extent of the original one. Intensive properties on the other hand are properties such as temperature or pressure, which cannot be divided.

This sculpture will clearly have extensive properties of area and volume, but what are its intensive (or gradient) properties? Colour? Pressure? Density? Speed? Elasticity? Duration?

Classical Body/Impersonal Individuation. In this part of the book, the authors use the example of a skateboard ramp to rail against Anthropocentrism:

[A skateboard ramp] is an intervening technology that belongs to a totally different pattern of order upon which the human works. The ramp augments the body; it is an extension of the body via the vehicle of the skate, but it does not represent it.

Such an extension of performance belongs to a larger class of singularities known as impersonal individuations. Like the sunset or a time of day, these intense and unique conditions emerge out of the material world. They have manifold meanings projected onto them, but they are not the product of meaning.

I want this sculpture to be like the sunset in this way.

Matter/force relationships – here the authors discuss how to make the relationship between matter and force visible in varying scales. As part of this, they discuss using Voronoi patterns to express forces on a variety of scales:

from Structure to Space to Program.

This sparked a thought in my mind: I could make the interaction around the sculpture such that each touch creates a new singularity. However, this would be incompatible with the initial landscape, which is based around a Cartesian grid of heights.

I still have the second half of the Atlas to read, but that will have to wait for a later post.


MVP, A-Frame, the entity component system pattern, how to display the sculpture, augmented reality and backend choices

I’ve been looking for a framework that will allow me to start prototyping this project – trying to get to a Minimum Viable Product (MVP) as soon as possible.

mvp

The above image illustrates the concept well. The creator of the image goes into more detail about the idea in this blog post.

Quoting from the original proposal for the project:

The aim of this project is to create a digital sculpture as big as India itself, accessible from anywhere in India by anyone with a smartphone – as well as people outside of India via the web.

It’s therefore essential to me that this work is viewable by as many people as possible. The web seems the logical place to do that! In addition, as the sculpture is three-dimensional, I know that I need a framework that can render that kind of information via websites. It turns out that things have moved on dramatically since the days of VRML, with three.js now the de facto library for displaying 3D content in a standards-compliant way.

Virtual Reality (VR) is a very hot topic of research and development at the moment, and with that in mind I wanted to make sure that whatever system I developed to display the sculpture would be compatible with as many VR platforms as possible. A-Frame is the perfect framework to do that:

  1. It’s built on top of three.js
  2. It’s compatible with not only the latest VR headsets, but also lower cost platforms such as Google Cardboard – and even degrades gracefully if someone views A-Frame content without a VR headset (such as via a desktop computer or mobile phone)

Digging deeper into the A-Frame Frequently Asked Questions (FAQ) led me to discover that it uses the entity component system pattern – something my brother had drawn my attention to some time ago, and which I had also encountered via the Unity platform. As the FAQ states:

A-Frame is based on an entity-component-system pattern (ECS), a pattern common in game development that emphasizes composability over inheritance:

  • An entity is a general-purpose object that inherently does and renders nothing.
  • A component is a reusable module that is plugged into entities in order to provide appearance, behavior, and/or functionality. They are plug-and-play for objects.
  • A system provides global scope, services, and management to classes of components.

ECS lets us build complex entities with rich behavior by plugging different reusable components into the sockets on the entity. Contrast this to traditional inheritance where if we want to extend an object, we would have to manually create a new class to do so.

ECS grants developers the key to permissionless innovation. Developers can write, share, and plug in components that extend new features or iterate upon existing features.
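To make the pattern concrete, here is a minimal sketch of a hypothetical component (not something this project uses yet) that slowly raises whatever entity it is attached to. It is registered once in JavaScript and then plugged into any entity via an HTML attribute:

// A hypothetical 'slow-rise' component: raises its entity over time.
AFRAME.registerComponent('slow-rise', {
  schema: {
    speed: {type: 'number', default: 0.1} // metres per second
  },
  tick: function (time, timeDelta) {
    var position = this.el.getAttribute('position');
    position.y += this.data.speed * (timeDelta / 1000);
    this.el.setAttribute('position', position);
  }
});

An entity then opts in with nothing more than <a-box slow-rise="speed: 0.5"></a-box>, which is exactly the composability the FAQ describes.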

Having previously used an object-oriented programming approach with openFrameworks, I’m excited to try this new approach with this web-based project.

This is how I described the sculpture in the original proposal to the British Council:

Imagine a virtual layer of silk as big as the subcontinent, seeming to float in the sky above…

But what is the best method of displaying this digital silk? Voxels à la Minecraft? After disappearing down a rabbit hole of the differences between voxels and pixels, a fantasy console, dynamic lighting for images, voxel drawing programs and the mathematics behind computer graphics, I started searching for ways of making an imaginary map that could sit atop India.

I found a Javascript library for displaying mobile imaginary maps and a brilliant article on creating fantasy maps, but then started thinking about how to make the form of the sculpture – I didn’t want to make something random, but I wasn’t sure how to make a form on the scale of an entire continent.

Again referring back to my original proposal:

Users in the same physical location will be able to see other people’s interactions with the surface from the same location – they’d be able to raise or lower their part of the surface in real time by simply clicking their mouse or touching their mobile phone screen – making a new digital landscape above the physical geography of India.

I realised I already had the answer – the topography of India itself would be the starting point for the sculpture, which would then be altered by the actions of users in real time.

How could I get the entire landscape of India in a digital format? I quickly discovered EarthExplorer from the US Geological Survey, but found the user interface clunky to say the least – and which dataset is the best? The Shuttle Radar Topography Mission was the consensus opinion. Google provides a way of getting elevation data for any point on Earth, but I knew I didn’t want to have to rely on an external server for this starting point – I wanted to sculpt it myself and allow users to sculpt it too. Thanks to Patricio, I saw a recent announcement by Amazon and Mapzen (the company he works for) around open sourcing global elevation data.
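As an aside, that open elevation data includes PNG “terrarium” tiles in which each pixel encodes a height. A sketch of the decode (assuming the tile has already been read into per-pixel channel values in the browser) is just arithmetic:

// Decode one pixel of a 'terrarium' format terrain tile into metres.
// r, g and b are the 0-255 channel values for that pixel.
function terrariumToMetres(r, g, b) {
  return (r * 256 + g + b / 256) - 32768;
}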

After finding an open source Javascript 3D world viewer and re-visiting the brilliant OpenStreetMap, I realised that it was important to stick to the MVP principle and see what was available in the existing A-Frame/three.js ecosystem.

One of the best things about three.js is its quantity of examples – allowing users to quickly see the possibilities of the platform. Whilst browsing the examples, I found this “Geometry Terrain” and this “Dynamic Terrain” that reminded me of my original proposal, but inverted – i.e. in the skies above.

There are a variety of articles on how to create three.js visualisations of real world landscapes, but was there anything up to date and available for A-Frame?

Browsing a handy list of awesome things made using A-Frame, I discovered a component for making a heightgrid, which looks like a great start – as well as a component for taking terrain data, with several examples.

VR is all very well, but this project requires something different – Augmented Reality (AR). AR has gathered mainstream attention recently with the release of Pokémon Go as well as attention in the world of tech startups – especially around Magic Leap.

Creating mobile AR with A-Frame would require combining live video from a mobile phone camera with three-dimensional imagery superimposed on top. As luck would have it, being “lazy like a fox” and browsing through the A-Frame blog led me to discover this project by Dietrich Ayala:

aframe-ar

Unfortunately, the example is only compatible with Firefox on Android, but it’s certainly a great place to start.

This got me thinking not only about the number of mobile phone users in India, but also about how many of those use smartphones and how many of those smartphone users are on Android.

There are 1,034,253,328 mobile phone users in India, with 29.8% owning smartphones and 97% of those using Android. I make that a potential audience of 298,961,267 (1,034,253,328 × 0.298 × 0.97)! Naturally, not all of those will meet the minimum hardware requirements for A-Frame.

With the choice made on the front end for the MVP, I started to think about the backend of the system. My priority was a backend that had a thriving community, would allow real-time interaction and would be able to deal with geospatial data in a sensible way.

I started thinking about making a system using Python that would allow for real-time interactions:

The idea was to use WebSockets to update viewers’ landscapes as other users interacted with them.

After taking some advice from Szymon, Igor and Patricio, I decided to plump for an MVP built in Node.js, as it was the system already being used for several A-Frame components and had a thriving community around it. Both Patricio and Szymon recommended that I use PostGIS as a database platform, as it was designed to deal with geospatial data and relations within the database engine.
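To illustrate the real-time side, below is a minimal sketch (not the actual backend, just the WebSocket idea expressed in Node.js using the ws package, installed with npm install ws --save): every height change one user sends is rebroadcast to every other connected user.

// Minimal WebSocket relay sketch using the 'ws' package.
var WebSocket = require('ws');
var wss = new WebSocket.Server({port: 3001}); // port chosen arbitrarily for this sketch

wss.on('connection', function (socket) {
  socket.on('message', function (message) {
    // message is assumed to be JSON such as {"lat": ..., "lon": ..., "height": ...}
    wss.clients.forEach(function (client) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});

A real version would also persist each change to PostGIS, so that new viewers see the current state of the landscape rather than the original topography.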

In summary, the first pass at this system will consist of the following platforms and components:


The beginning

This post marks the beginning of a new chapter in my life, as an independent artist and designer.

The first project I am embarking on is a research and development project for the British Council in response to their UK-India 2017 Digital Open Call.

I’ll be working on this first stage of the project in the open over the next month and a half or so, before going for testing with audiences in India. I’ll then find out if it’s been selected to go to completion in 2017.

I’m very excited about documenting the whole process here on my website, as well as sharing all my code on the GitHub repository for the project.

The title of the project is “A piece of Art as big as India”. It’s an augmented reality sculpture that will be as large as the subcontinent itself – perhaps even larger. It will be accessible via the WWW, SMS and mobile devices. Here is an early visualisation of how the sculpture might look:

apieceofartasbigasindiasketch