Introduction: How to Make a Remote-control Sentient Web-puppet by Hacking Twitter, Google, Skype, Arduino and Processing!

About: I like making all sorts of stuff, out of found materials: furniture, wild food, whatever! I've learnt loads from generous people out there, so reuse any useful ideas that you find here...
How to manipulate a physical object over the web using common web services and their accessible data feeds, plus some open source software to decode and manipulate that data and ultimately use it to move and control physical hardware.

Twitr_janus - a prototype web-controlled puppet

This Instructable describes Twitr_janus  - a puppet I made to see if it was possible to mash up free digital web services (Twitter, Google Spreadsheets and forms, Netvibes and Skype) with open source hardware and code (Arduino language and environment, Processing and related Processing libraries) and use them to manipulate an object over the web.

It turned out it was indeed possible!  

See how Twitr_janus works and see how you can use these ideas to build your own remotely-controlled physical objects. It was built from cheap, easily available stuff, some of it salvaged. I made a puppet, because I just like weird, creepy things. The principles it demonstrates could easily be applied to control all sorts of other objects you could build yourself.

Here's Twitr_janus in action, describing itself and how it works...




Summary of what it can do...

A puppeteer can remotely communicate over the web using Twitr_janus' data-activated head.

The puppet can:
  • speak tweets sent to its Twitter account
  • speak longer sentences that have been input as text into a field in a Google spreadsheet
  • move its jaw in time with its speech, using a car door-lock actuator (a linear motor) controlled by an Arduino, which converts audio output into control data to trigger lip-synced movement
  • position its remote-control eyeballs with Arduino-controlled micro servos driven by data from fields in the same Google spreadsheet 
  • be commanded from a control interface hosted in a Netvibes page - created by hosting a hacked version of the standard Google input form (made by modifying the form HTML to restrict the data values while keeping the Google submit script)
  • be woken up remotely over the web with Skype,  to turn on sight and hearing via an HD video camera
  • use the webcam to allow the puppet operator to see what the puppet's eye can see
  • use the webcam built-in microphone to allow the puppet operator to hear what the puppet can hear
Note - this Instructable is a summary of the major steps involved in building a working, data-driven physical object. It introduces the concepts and explains how its features are made to work, but does not go into minute detail.

Fuller, more detailed descriptions of each step are available in posts on my Making Weird Stuff blog.
There are lots of these - too many for an Instructable. Where relevant, though, these detailed discussions are linked to in the steps here.

A very short summary of the project is also available here:
makingweirdstuff.blogspot.co.uk/2012/11/twitrjanus-overview-november-2012.html

Processing and Arduino code created to make it work is available on GitHub (as straight file downloads). For details see the steps later in this Instructable. Be warned, it's as roughly fashioned as my physical handiwork. Apologies to purist coders. It's freely shared for ideas, but contains some leftover functions and snippets that were developed but not necessarily used. Some were left in the sketches, so copying everything wholesale is not recommended, though some of it may be useful. It's built on top of other people's open source stuff, so take what you can use.

This project was first shown, as a demonstration of a working data-driven object prototype, at the hacking workshop:
"Slack Day" at Museum Computer Network, Seattle 2012.


I'm adding it to Instructables too, as there are loads of people here who might find at least some of it useful. Feel free to hack and modify any ideas here. I learnt a lot doing this from the various open-source communities, especially Arduino and Processing.

Step 1: What You Can Get From This - the Basic Elements That Make Twitr_janus Work

Twitr_janus was an attempt to see if I could hack up web stuff to make a remote control puppet - a sort of physical avatar. It is really just a puppet head, a bit like a ventriloquist dummy, but one that is wired up to the web to make it come alive!

As a bit of hackery, it worked a treat.

In this Instructable I have separated out the various elements that make it work, so you can steal any ideas that help. I have tried to keep it simple here. If anything is of potential use, the more detailed explanations are usually available in posts on my Making Weird Stuff blog. Links are provided here where relevant.

The basic building blocks that make Twitr_janus work are: 
  • The physical head, built from papier mache, and moulded plastic (hot glue in fact)
  • Input sensors for sight and hearing (using a common webcam with built-in mike)
  • Servos for controlling eyeball and jaw movement
  • LEDs for indicating when web data was being received
  • A connected computer running a programme for
    • listening to the web for command data to control the head (using URL parsing in a Processing sketch) and communicating over a serial port with an Arduino board
    • allowing text to be spoken (using a text-to-speech library in the Processing sketch)
    • running Skype to allow the head to relay what it can see and hear back to the operator
    • sending audio signals to the Arduino
  • An Arduino board listening for control data from the mother computer:
    • to convert it into output control data for the eyeball servos
    • to convert analogue audio signals into control signals for the jaw motor
    • to activate separate LEDs to indicate the sources of data (Google RSS and Twitter API)
  • Remote operation using
    • any web-connected device capable of running Skype, Twitter and Google Docs. This was possible with an iPad or even an iPhone, but was only practical with a full-size computer (laptop)
    • a control form embedded in a Netvibes HTML page, pimped to piggy-back on a Google Spreadsheet form submit script
This instructable shows how, in case you ever might be inclined to nick some of the ideas.
The most useful stuff includes:
  • how to strip data from RSS feeds and APIs using Processing to repeatedly listen to the web
  • how to use expressions in Google Spreadsheets to pass special delimited strings to differentiate different data 
  • how to set up a custom HTML form with pre-set control values that uses a Google spreadsheet form submit script to send data to the spreadsheet
  • how to detect new data, but ignore old messages that have been received already
  • how to send the data over a serial port to an Arduino
  • basic servo, relay and LED control with the Arduino
  • text-to-speech conversion using free Processing library
  • audio peak detection with Arduino to trigger a servo to operate the jaw
  • how to remote kick-start a dormant Skype contact to wake up sight and hearing
  • making a lightweight skull using papier mache over a removable model
  • using hot-glue as a casting material to recreate a face mask model
  • lots of other silly minor details that make it all work
Explanations in this Instructable cover most things briefly. For some components, the full details of the development of specific elements can be found via links to posts on my Making Weird Stuff blog. These offer much more detailed explanations.

Twitr_janus is a reflection on how we take for granted "free" web communication tools - handy services like Twitter, Google, Skype - and how, when using them, we develop online personas. However, these personas that represent our words are usually no more than text boxes on screen with an associated, and usually rather dull, 2D photo.

Twitr_janus was an attempt to make a real physical representation of online activity. The challenge was to see if I could make such a monstrosity work using the common, but actually quite advanced, data-communication channels and features that web-based services provide for free.

Step 2: Using Processing to Listen to the Web for Commands (with Full Code)

The most important thing needed to control Twitr_janus over the web is, of course, to be able to listen to the web.

The brain that receives data to activate Twitr_janus' head is a programme (sketch) running in Processing. This runs on a computer attached to the head. The computer is connected to the web.

Processing is simple to learn and has a great open community and easy and accessible documentation. The makers of Processing describe it as...

"an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing also has evolved into a tool for generating finished professional work. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production."


Processing is free to download and use. Visit processing.org

The key Processing features used to make Twitr_janus' brain include:
  • a handy method called loadStrings(), which can pull in feed data from an external URL (e.g. an RSS feed or API call) - a sketch of the polling pattern follows this list
  • various handy ways to parse the feed strings received to extract the actual control data
  • the ability to set up a serial connection with the USB port to send data to the Arduino
  • a third party text-to-speech library GURU TTS which can turn text into audio speech
    See http://www.local-guru.net/blog/pages/ttslib
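As a rough illustration of how these pieces fit together, here is a minimal polling-and-parsing sketch. It is not the project's actual code: the feed URL is a placeholder and the variable names are made up.

>>>
// Minimal polling-and-parsing sketch (illustrative only - placeholder URL, made-up names)
String feedUrl = "https://example.com/feed?alt=rss"; // placeholder - substitute a real published feed URL

void setup() {
  frameRate(0.1); // call draw() roughly once every ten seconds
}

void draw() {
  String[] lines = loadStrings(feedUrl);   // pull the whole feed in as an array of strings
  if (lines != null && lines.length > 0) {
    String[] parts = split(lines[0], '¬'); // split on the stop character to isolate the data
    println(parts.length + " fields received");
  }
}
<<<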
The complete Processing sketch is available to download here from GitHub as a text file:
github.com/downloads/rosemarybeetle/Twitr-Janus/twitr_janus_code15.txt

It includes code to:
  • import the Guru text-to-speech library
    >>>
    import guru.ttslib.*; // NB this library also needs to be installed (available from http://www.local-guru.net/projects/ttslib/ttslib-0.3.zip)
    import processing.serial.*; // needed for the Serial class below
    Serial port;
    TTS tts;

    <<<
  • referencing Twitter API calls
    >>>
    String twitterApiString = "https://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name="+twitterUsername+"&count=0";
    <<<
  • referencing Google Spreadsheet RSS calls  
    >>>
    String gssApiString = "https://spreadsheets.google.com/feeds/list/0AgTXh43j7oFVdFZJdklXTU1lTzY5U25sc3BJNjRLRUE/od6/public/basic?alt=rss";
    <<<
  • parsing feed data to extract control data (this is the Google spreadsheet data being parsed into an array from the RSS feed)
    >>>
    String [ ] texty = loadStrings(gssApiString);
      String [ ] texty2 = split (texty[0], '¬'); //  pulling out data with stop character

      String [ ] texty3 = split (texty2[4], '<'); // get rid of trailing text after <
      gssText = texty3[0];
      gssTextLength= gssText.length();

    <<<
  • making a serial port connection >>>
    println(Serial.list()); // display communication ports (use this in testing to establish free ports)
      //if (Serial.list()[2] != null){ // error handling for port death on PC
        port = new Serial(this, Serial.list()[2], 115200);
      //}

    <<<

  • sending data to the Arduino via the serial port
    >>>
    (this code is writing the eyeball position stripped from the Google data to the port. The Arduino will use it to reference an array of preset positions)
    port.write(gssEyeballUpDown);// send up down value to board


    This code is sending a code number to the Arduino, which if detected will trigger the blue LED...
    port.write(30);
    <<<
  • Converting the data into speech by calling the TTS library
    >>>
    The google text data used as speech...
    tts.speak(gssText); 

    The Twitter tweet used as speech...
    tts.speak(tweetText);

    <<<

Download the full Processing Sketch

It's not perfect, but the complete Processing sketch is available to download here from GitHub as a text file:
github.com/downloads/rosemarybeetle/Twitr-Janus/twitr_janus_code15.txt
For easy viewing, here is an image facsimile of the text
NB - I had initially intended to get an Arduino sketch to do this by itself, but couldn't find a way to do it. The Arduino would need a direct Internet connection and the ability to poll the web repeatedly. This may be possible with an Ethernet or wireless shield, but I couldn't find an easy way to do it.

Step 3: Using Arduino to Control Physical Actions Based on the Data Received (with Code)

Once data had been extracted from the web with the Processing sketch, an Arduino microcontroller used it to activate Twitr_janus's head. The data was converted into control signals for servos inside Twitr_janus' head, which moved its eyes and jaw. This is explained below...


Controlling an Arduino from a PC

The Arduino is doing several things:
  • It is attached to the master computer - digitally via a USB connection and also with analogue input from the audio output of the computer
  • It maintains a serial port connection with the master computer, over which it constantly checks for new digital control data that has been sent over the web to the master computer.
    >>>
    This line is the Arduino making the serial connection...
    Serial.begin(115200);

    This line calls a routine that will check the connection for data...
    checkSerial();
    <<<
  • It checks every 10-30 seconds, compares incoming data to the last received action and only acts on it if different (a minimal sketch of this idea follows the code excerpts below).
    >>>
    void checkSerial ()
    {
    It checks if there is a connection...
    if (Serial.available() > 0) {

    Reads the data...
    incomingByte = Serial.read();
    }

    If the data is 30, it will trigger the Twitter routine twitterCheck()...
    if (incomingByte==30)
    {
    twitterCheck();
    }

    If the data is between 1 and 25, it's Google data, so call the Google checking function googleCheck()...
    if ((incomingByte<=25) && (incomingByte>0)) // google data is coded as an integer between 1 and 25
    {
    googleCheck();
    }
    else { // no point calling check functions if no serial data received - this is the error handling clause
    Serial.println("I received nothing");
    }
    } // end of checkSerial
    <<<
  • If new data is received, it will light up indicator warts on its forehead. These are lit by LEDs. 

    An orange wart lights when Google data (an integer between 1 and 25) has been received; the data is used to reposition the servos and the blue wart is deactivated
    >>>
    void googleCheck()
    {
      digitalWrite(twitterFlagPin, LOW);
      eyeLeftRight = 2*(incomingByte-1);
      eyeUpDown = (2*incomingByte)-1;
      if (incomingByte<=25)
      {
        digitalWrite(googleFlagPin, HIGH);
        servoLeftRight.write(eyePos[eyeLeftRight]);
        servoUpDown.write(eyePos[eyeUpDown]);
      }
    }

    <<<

    A blue wart lights when Twitter data has been received (integer 30) and the orange wart is deactivated  
    >>>
    void twitterCheck ()
    {
      // this function lights the blue Twitter wart and switches off the orange Google wart
      digitalWrite(twitterFlagPin, HIGH);
      digitalWrite(googleFlagPin, LOW);
    }

    <<<
  • It will convert incoming eyeball data into one of several pre-determined position control values for each of the two servos inside the head - one for up/down, one for left/right. It uses the Arduino Servo.h library to do this
    >>>
    The data coming in is used to access data from an array which has pre-determined values in it (these are the servo values) 
    int eyePos [] = {115, 60, 115, 60, 115, 90, 115, 115, 115, 115, 115, 60, 115, 60, 115, 90, 115, 115, 115, 115, 90, 60, 90, 60, 90, 90, 90, 115, 90, 115, 60, 60, 60, 60, 60, 90, 60, 115, 60, 115, 60, 60, 60, 60, 60, 90, 60, 115, 60, 115};
    <<<
  • It will activate the jaw lip-sync by monitoring the analogue audio input, using voltage peak detection. If the sound wave rises above a pre-determined threshold, the jaw will open, and it will be forcibly closed if the voltage drops below the threshold. This gives a pleasantly startling staccato motion.
    >>>
    void analogPeakCheck()
    {
      // @@@@@@@ this function is used if you are using raw audio output from an analog amplifier into analog pin 0
      valueAnalogIn = analogRead(analogInput); // checking for output above a threshold voltage to trigger the jaw signal
      if (valueAnalogIn>thresholdAnalogIn)
      {
        digitalWrite(speechFlagPin, HIGH);
        digitalWrite(speechFlagPinLED, HIGH);
      }
      else {
        digitalWrite(speechFlagPin, LOW);
        digitalWrite(speechFlagPinLED, LOW);
      } // @@ end threshold checking //
    }

    <<<
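The "only act on it if different" idea mentioned above can be sketched like this. This is illustrative only - the variable name lastIncomingByte is made up, and the real sketch's bookkeeping differs:

>>>
// A sketch of acting only on changed data (lastIncomingByte is a hypothetical name)
int lastIncomingByte = -1; // nothing received yet

void checkSerialChanged() {
  if (Serial.available() > 0) {
    int incomingByte = Serial.read();
    if (incomingByte != lastIncomingByte) { // only act if different from the last action
      lastIncomingByte = incomingByte;
      // ...dispatch to twitterCheck() / googleCheck() as in the real sketch...
    }
  }
}
<<<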
The individual code for controlling the Arduino is adapted from basic code on arduino.cc/

Download the full Arduino Sketch

The complete Arduino sketch is available to download here from GitHub as a text file:
http://cloud.github.com/downloads/rosemarybeetle/Twitr-Janus/twitr_janus_arduino_09.txt

For easy viewing, here is an image facsimile of the full text

Step 4: How Twitr_janus Speaks Using Text-to-speech (in Processing)

Speaking open data

An essential point I was trying to test with Twitr_janus was whether I could get a puppet to speak open data over the web. Initially it was intended this would just be tweets from the @twitr_janus account on Twitter, and this is how Twitr_janus got its name.

Twitr_janus did indeed successfully speak tweets, stripped from the Twitter API with Processing.

Making Twitr_janus speak tweets was done without using API keys, by parsing the raw API string rather than referencing fields properly. This was to avoid having to register as a Twitter developer, etc. This crude method had some limitations; for example, tweets containing control characters confused the parsing scripts, leading to messages being truncated when decoded.
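To illustrate the crude string-parsing approach, here is a minimal sketch. It is not the project's exact code, and the field boundaries it splits on are an assumption about the old v1 JSON layout:

>>>
// Crude tweet extraction by string-splitting (illustrative; assumes the old v1 JSON layout)
String extractTweet(String apiUrl) {
  String[] raw = loadStrings(apiUrl);                  // pull in the raw API response
  String tweetText = "";
  if (raw != null && raw.length > 0) {
    String[] afterText = split(raw[0], "\"text\":\""); // cut at the "text" field label
    if (afterText.length > 1) {
      tweetText = split(afterText[1], '"')[0];         // keep everything up to the next quote
    }
  }
  // a tweet containing an escaped quote or control character breaks this - hence the truncation problems
  return tweetText;
}
<<<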

The parsing model worked much better with Google spreadsheets, where the raw data could be appended with extra stop data to help the parsing process, using expressions in the spreadsheet fields. Google spreadsheet data was not only easy to use for speech; it could also easily be used for eyeball control. Because the Google spreadsheet method is the easier and more versatile of the two approaches, this is what is described below.

How data is sent, coded and decoded, step by step


It helps to think through the flow of data...

Before starting on this I found it helpful to scribble down a flow diagram  to get a feel for the building blocks needed. The mouth and TTS represent the function of text-to-speech conversion.

This is not a technical drawing!

Twitter to speaking head convertor

Part 1 - Entering data in the Google spreadsheet

There were three pieces of data that needed to be sent from the spreadsheet, to be decoded by Processing. These were the two variables eyeballUpDown_stop (column F) and eyeballLeftRight_stop (column G), which are coded positioning data. Later, once decoded, they would be used to drive servos with an Arduino attached to the puppet head. The third piece of data was text_stop, which was to be further processed in Processing to create the text-to-speech.

In the final version only two pieces of data were sent: the speech data and a single eyeball data value. This may cause some confusion when interpreting the code! (eyeballUpDown was used, though not renamed.)

A single eyeball position variable could be used instead of two because the data being sent simply represented one of 25 positions. Although two control values are needed by the Arduino to position the eyeballs (one for the up/down servo, one for the left/right servo), the single variable sent was used to access corresponding pairs of values stored in an array inside the Arduino sketch.

In the cells, you can see that the data has been preceded by the ¬ character. This is added to whatever data is entered manually, using a concatenating cell expression. It is used as a stop character to delimit the data strings later. These characters show up in the RSS feed, and the Processing script uses them to tell where one piece of data stops and the next starts (control-character delimiting).
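As a rough example of the kind of cell expression involved (the cell reference B2 is hypothetical - it stands for whichever cell holds the manually entered data), a concatenating expression like this prefixes the stop character:

>>>
="¬"&B2
<<<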

Initially, data was entered manually into fields in the spreadsheet, as below. This is not ideal, as you have to know the exact position values to send, which is hard to remember and easy to mess up...

[Image: the spreadsheet with manually entered, ¬-prefixed data in its cells]

To avoid manually entering data into the spreadsheet, the built-in Google form was used. This is available for any Google spreadsheet.

Which looks like this...

Twitr_janus Google native form

However, the standard Google form still needed the eyeball numeric positioning values to be entered exactly, so it needed to be modified.


To create a more useful form, with easy control over preset values for the eyeball variables, the basic HTML of the form was transferred to a web page (an HTML widget on a Netvibes page), where it could be pimped up a bit.

The form in Netvibes as it looked to the puppet operator

Google data via netvibes form

The free text inputs were swapped for radio button inputs with preset values and corresponding human-readable position text.

The main thing was to keep using the original Google field names, so that the data options would all be fired into the same cell in the spreadsheet when the new form was submitted.

You can see this in the HTML view of the form below. All the options follow the same pattern (a reconstructed sketch of one radio button follows this list).


[Image: the HTML for one of the form's radio buttons]
  • xxx is a control value that will send data that corresponds to the physical eyeball position "positionxxx"
  • The value of xxx is actually a reference number to a value within a specific element of an array (there are 25 different preset positions, hence a radio button is needed for each integer value between 1 and 25, used to reference the array values between array[1] and array[25])
  • positionxxx is a plain-English description shown to the operator, allowing them to choose a target eyeball position
  • "entry.1.single" is the Google field name that must be kept the same, so that it puts the value xxx into the correct cell in the spreadsheet. This is the same for each radio button, because the different values are effectively a choice of values to put in that one field

By reworking the form, a more visual interface was created, so it was easier to see where the eyeballs would move, whilst still allowing speech text to be entered.

The other important line, also taken from the original Google form HTML, is this one:

[Image: the form submit action line from the original Google form HTML]
It's the form submit action, and must be kept the same.
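Reconstructed very roughly, it would have been the form's opening tag, something like this (a sketch only - the actual formkey parameter is specific to the form and is elided here):

>>>
<!-- the submit action from the original form, kept intact (formkey elided) -->
<form action="https://spreadsheets.google.com/formResponse?formkey=..." method="POST">
<<<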

This technique of pimping a simple Google form has some advantages:
  • It allows other possibilities, like adding continuous sliders using the HTML5 feature <input type="range"/>
  • you can create a method of injecting data into the spreadsheet without any form of API key - you just need to know how to tweak HTML form controls and values
  • you can input the data in one field, but pull the data out from another field that uses the input data, modified in some way to extend its versatility as required
The disadvantage is:
  • on submitting the form, Google takes you back to the original form, not your pimped form, so you need to do a page refresh after each submission to reload your form

The form in Netvibes - html source code (image below)


Google data via netvibes form

Part 2 - getting the data out at the other end of the web

The data entry method described above represents the first link in the rather rubbish data flow diagram shown at the top of the page (although that shows Twitter as the data source, not Google). The data entry step happens on a control device, used by the operator remotely from the Twitr_janus puppet head. It is, in effect, the primary control interface.

At the other end of the web, Twitr_janus' head was connected to a separate computer running its Processing brain sketch. This was polling an RSS data feed from the published spreadsheet. To get this feed, the spreadsheet had to be published. When you publish a Google spreadsheet, it is given a public RSS feed with a dedicated URL. This is used later in the Processing script to parse out the data. The URL for a Google RSS feed looks like this...

[Image: the published spreadsheet's RSS feed URL - the same https://spreadsheets.google.com/feeds/.../public/basic?alt=rss form used as gssApiString in the sketch]

And the output looks like this...
[Image: the raw RSS feed output]

In the RSS output, the stop character ¬ is clearly visible (second from last line), before the field names "eyeballupdownstop", "eyeballleftrightstop" and "textstop" and their corresponding values of 13, 22 and "Hello my name is..."

The Processing sketch that is Twitr_janus' brain is polling this URL repeatedly, and uses the ¬ character to strip out the data...

Here is the code that parses the Google spreadsheet feed to extract the control data and pass it into an array. It looks for the ¬ character first, then the < character
>>>
String [ ] texty = loadStrings(gssApiString);
  String [ ] texty2 = split (texty[0], '¬'); //  pulling out data with stop character

  String [ ] texty3 = split (texty2[4], '<'); // get rid of trailing text after <
  gssText = texty3[0];
  gssTextLength= gssText.length();

<<<

This is then checked in Processing against the last received data.  If it is different, then a new instruction has been received.
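A minimal sketch of that check might look like this (illustrative only - the real sketch's variable names and flow differ; gssText is the variable holding the freshly parsed spreadsheet text, as above):

>>>
// A sketch of the new-instruction check (illustrative; names are made up apart from gssText)
String lastGssText = "";

void actOnNewData() {
  if (!gssText.equals(lastGssText)) { // different from the last received data...
    tts.speak(gssText);               // ...so treat it as a new instruction and speak it
    lastGssText = gssText;            // remember it, so it is not spoken again
  }
}
<<<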


Part 3 - turning the data into speech

Any new data is passed to a Text-To-Speech library, and comes out spoken in a fairly crude raspy computer-generated voice.

Credit where it's due.

The library is GURU TTS, available from http://www.local-guru.net/projects/ttslib/ttslib-0.3.zip
A big shout out to the person who made this. The blog from which it was downloaded is a bit flaky, and it's not that clear who the author actually is, but it appears to be someone called Nikolaus Gradwohl. I hope that's right!

The Guru TTS library was downloaded and had to be installed into the Processing libraries folder, so it could be imported into the Processing sketch.

The blog it features on is here:
http://www.local-guru.net/blog/pages/ttslib

This, in short, is what enables Twitr_janus to talk by speaking data.
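A minimal Guru TTS sketch looks something like this (based on the library's standard usage; the spoken string is just an example):

>>>
import guru.ttslib.*; // the library must first be installed in Processing's libraries folder

TTS tts;

void setup() {
  tts = new TTS(); // create the text-to-speech engine
}

void draw() {
}

void mousePressed() {
  tts.speak("Hello, my name is Twitr janus"); // speak any string of text
}
<<<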


Part 4 - making the jaw move in time to the speech


The sound output from the generated speech needed to make the jaw move; that is, it needed to be lip-synced. An output lead was connected to the computer, and this was passed through a simple audio amplifier circuit salvaged from an old computer speaker (shown here to the right of the Chelsea Buns).

Hacking open audio amplifier

This gave a large enough sound wave to detect reliably (about ±3 V)...

Hacking open audio amplifier

To make Twitr_janus' jaw move in time to its speech, the audio output from the Processing text-to-speech needed to be lip-synched to the jaw mechanism.


The basic idea is that the Arduino script repeatedly checks the audio for peaks, and uses these to trigger the motor on and off. This is illustrated (rather roughly) below...

Car door lock motor circuit


The amplified laptop audio output signal was fed directly to the analog input of the Arduino board. On the Arduino, a control sketch repeatedly checked the peak voltage: the Arduino converts the analog input into a number, which it checks against a preset peak threshold value.

If the signal rose above the threshold, the Arduino triggered a relay circuit to power on a 12V car door actuator (a linear motor). If the voltage dropped below the threshold, it cut the power. This gave a jerky motion based on the peaks of the speech.

In the Arduino sketch, the code looked like this...
>>>
void analogPeakCheck()
{
  // @@@@@@@ this function is used if you are using raw audio output from an analog amplifier into analog pin 0
  valueAnalogIn = analogRead(analogInput); // checking for output above a threshold voltage to trigger the jaw signal
  if (valueAnalogIn>thresholdAnalogIn)
  {
    digitalWrite(speechFlagPin, HIGH);
    digitalWrite(speechFlagPinLED, HIGH);
  }
  else {
    digitalWrite(speechFlagPin, LOW);
    digitalWrite(speechFlagPinLED, LOW);
  } // @@ end threshold checking //
}

<<<

Perfect!

Here you can see the hinged mouth of the puppet, to which the car door actuator was attached...



For a detailed look at how the Processing brain works, you can read command by command descriptions on this post on my Making Weird Stuff blog: makingweirdstuff.blogspot.co.uk/2012/08/twitrjanus-is-now-speaking-data-sent.html

Although this description applies to a Google spreadsheet RSS feed as a data source,  the same principle applies to a string obtained by calling the Twitter API. 

Step 5: Designing the Puppet Head

Twitr_janus is named fairly predictably after Twitter, but also Janus: the Roman god of doorways and transitions. As good a name as any for a puppet driven by web doorways.

The main pen and ink drawing shows an initial design, which was to make Twitr_janus a twin-headed puppet. (Janus was usually shown with two heads or two faces, one looking forward, the other backward.) This would have allowed the puppet operator to look at both the audience and the puppet.

Eventually this was rejected for a number of reasons. Instead, a number of ideas were scribbled down to play with the look. Everyone has their own method for generating ideas. I like fast sketching using pencil, charcoal or ink and wash, mainly because they all allow lines or shading to be built up rapidly.

Twitr_janus modelling sketch

I wanted it to look quite grotesque, and at least a bit creepy! 

Some variations are shown.  Some of the ideas coming out appeared to have memory traces of the Brain from Pinky and the Brain...
The right hand one explored a large hinged-jaw version...

Twitr_janus head designs

The abnormally large craniums in these sketches are not entirely for grotesque effect. They are also because the puppet would eventually need to house servos, control circuitry and electronics...

Twitr_janus head designs

Eventually one was chosen.
Based on this, a 3-d puppet head needed to be built, so a face was modelled in clay to cast as a mould. Here's the big lump of clay...

Twitr_janus face mask

And here's the finished model, ready for casting. Note - the eye sockets are wider than in the original design, to allow the eventual eyeballs a greater field of vision. If you look carefully, you can see that the model is resting on a perspex turntable (as used for cake decoration), which makes it easier to access when modelling.

Twitr_janus face mask

To get the final face mask, a mould was taken from the model using silicon casting latex...

Twitr_janus face mask mouldmaking

For full details of how the silicon mould was made with a cardboard casing see this post:

makingweirdstuff.blogspot.co.uk/2012/09/making-silicon-latex-mould-for.html

The head was built up in two main parts: the face and the cranium. The face mask here is not the final version; it's one of several latex copies made during testing. It is being used here as a template to gauge how big the skull should be and to carve it to fit...

Twitr_janus' cranium

The polystyrene was covered in several coats of laminated papier mache (brown paper and PVA). The polystyrene was then removed to form a hollow cranium.

The final hard face was created from the silicon mould (see previous step). The moulding material used was hot glue. This was melted into the mould and left to set.  

Here you can see the two parts before joining...

Making Twitr_janus eyeball and video camera

And below after the two halves have been joined. You can see that the lower lip has been cut out and hinged to allow mouth movement for the puppet later...

Twitr_janus skull

For more details of how this head was built up see:

makingweirdstuff.blogspot.co.uk/2012/09/making-twitrjanus-skull.html

For details of papier mache techniques see this post:

makingweirdstuff.blogspot.co.uk/2011/12/head-for-pantomime-goose-costume-part-2.html

Step 6: Giving Twitr_janus Sight and Hearing With a Webcam Eyeball

To allow Twitr_janus to react to its surroundings, it needed to be able to sense them. The simplest way to do this was to use a webcam with a built-in mic. This enabled the puppeteer to remotely see and hear what Twitr_janus could see and hear.


Making Twitr_janus eyeball and video camera

The eyeballs are made from deodorant balls, which conveniently are hollow. As well as being easy to cut open, they are also rigid and do not deform if you cut bits off them. 

They also come with ready-made housings from the bottles they are contained in. These were cut off and glued into the back of the eye sockets of the puppet face mask. 

Eyeballs

Below you can see the exploded eyeball, the Microsoft LifeCam and the control rod, which is used to add leverage. Later, the rods from the two eyes are jointed into a rig that is attached to the servos to allow position control...

@Twitr_janus' eyeball

To make Twitr_janus see, Skype was loaded onto the computer attached to the puppet head and signed into a specially set up account. Skype can be set to wake from standby and connect to a call just by being dialled, and you can specify that only particular Skype contacts can do this.

The net effect was that it was possible to wake up Twitr_janus' webcam eyeball remotely. The webcam also had a built-in microphone, which meant that both sight and hearing could be activated inside the puppet head from a remote location, as long as the control device had Skype loaded and was logged into an account with permission to activate the Twitr_janus Skype account.
In this picture, the web cam is being tested by aiming it at the Arduino...

Twitr Janus and its eye

And here you can see an iPad (left) being used as a remote control. It is making a Skype call over the web to the computer (right) to which the eyeball is attached. The close up of the Arduino on the iPad is what the eyeball on the floor is pointed at (slightly dark, in the centre).

Twitr Janus  remote communication with Arduino board

Here you can see the webcam still in its original casing being tested for rotation clearance inside the face mask. You can also see the hot glue and the reinforcing plastic gauze used to provide strength.

Making Twitr_janus eyeball and video camera

Here are the two eyeballs inside the skull. They are fixed into a jointed parallelogram rig. The two control servos are visible:
  • the servo to the left of the picture, inside the rig, causes left-right motion by shearing the rig parallelogram, which is partly made up of the eyeball control rods (as above)
  • the servo to the right is coupled to the rig via a bike cable (flexible in bending but rigid along its length), through which it controls up/down motion
Twitr_janus eyeball servo construction



Step 7: The Full Physical Electronics Systems Inside the Head

Eventually, all the various parts were put together inside the head...



Looking something like this...

Twitr_janus full electronics kit laid out

A new bone-coloured base coat of paint was applied and the LED-illuminated warts were added...

@twitr_janus skull

With some extra texturising colour coats, and with a stand added, Twitr_janus was eventually finished.

@twitr_janus skull

Step 8: Possible Ways to Build on the Concepts That Twitr_janus Demonstrates

Mashed up data-driven objects


The central objective in building Twitr_janus was to demonstrate a working example of a physical object that could be controlled using web applications and their data feeds, and that could communicate in both directions.

This aim was achieved. Twitr_janus could sense its surroundings with video and audio, and could respond to remote interactions sent as web data, using that data to manipulate its position and make it talk.

In simple terms, it is a data-driven object.

Although it's a puppet, it could have been almost anything. It could also have had different sensors, such as PIR (passive infrared) sensors for detecting motion, smoke-detector technology for chemicals, or thermostats for heat.

Similarly, although the sensory data here was monitored manually by a puppeteer, an object could just as easily be built to react to stimuli automatically.

Fabrication of bespoke objects using 3D printing


Although not really to do with the data principles, there are also possibilities for producing rapid prototypes of such objects with 3D printing.

In this prototype, the head was created manually by modelling in clay. It could as easily have been built from 3D scans of objects. For example, a scan of a real person's face could make the puppet look like that person. The head could be constructed by scanning the various components (such as the face of a person, the lip of the deodorant bottles that form the eye sockets, the shape of the skull, etc.) and combining the meshes to print a single monocoque shell...

A (crude) 3d mesh of the original physical face mask is  available on Autodesk's 123D community here:
www.123dapp.com/obj-Catch/Twitr-Janus-Mask/842675

A 3d rendering of Twitr_janus showing the physical form is available here:
http://www.123dapp.com/AssetManager/Index.cfm?stgaction=getProduct&subaction=preview&step=1&inttype=4&intproductid=1151611

Step 9: More Information, Credits and Links

Twitr_janus was built upon the generosity of various open source communities. Here are some references to further information...

Processing

In Twitr_janus, Processing was used to create a master brain. This received data streams from the web services, stripped out control data and then communicated it to an Arduino microcontroller board attached to the puppet head.

The Processing function used to strip the data from feeds and APIs was loadStrings().

Processing is free, open source and has a strong artist-led community of coders, so there is a fair amount of help available on the web. It has its own Java-based coding environment, which may be an issue for some people, but there also appears to be a JavaScript version on offer. I have not used the JavaScript version.

processing.org/
The main Processing site 

processing.org/reference/
Processing look up reference

www.openprocessing.org/
OpenProcessing is an "online community platform devoted to sharing and discussing Processing sketches in a collaborative, open-source environment."

You can upload and try out sketches (mainly useful for visual ones, for which Processing was originally designed). There are lots of sketches to download and fork.

Twitr_janus eyeballs on openprocessing.org
This is an example sketch on OpenProcessing.org. It's a sketch that I wrote to test out eyeball positioning functions. It moves 2D eyeball graphics onscreen in response to the keyboard arrow keys. Freely available to download and hack/rip off as much as you want.

www.local-guru.net/projects/ttslib/ttslib-0.3.zip
GURU TTS (text-to-speech library for use in Processing) by Nikolaus Gradwohl (I think)
Guru TTS is the Processing library that enables Twitr_janus to speak. It was downloaded and needs to be installed into the Processing folders:

The complete Twitr_janus Processing sketch is available to download here from GitHub as a text file:
github.com/downloads/rosemarybeetle/Twitr-Janus/twitr_janus_code15.txt



Arduino

In Twitr_janus, Arduino is the microcontroller circuit board that manipulates the hardware inside the puppet head. This includes eyeball movement, jaw lip-sync to speech and illumination of the indicator warts on its head that light up as incoming data is received.

www.arduino.cc/
Arduino is "an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It's intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments."

The Arduino language is based on Wiring, and the Arduino programming environment is based on Processing, as above.

Arduino is really similar to Processing. Sketches in both have a basic structure of:
  • initialisation code (include libraries, declare and initialise variables)
  • definitions for functions (such as checking twitter, moving eyeballs, etc.)
  • a central looping function (the main repeating function that checks for new states while the sketch is running) - see the skeleton below
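As a minimal illustration of that shared structure (illustrative only - none of this is the project's actual code):

>>>
// Minimal Arduino skeleton showing the shared structure (illustrative only)
int incomingByte = 0;     // initialisation: declare and initialise variables

void setup() {
  Serial.begin(115200);   // one-off set-up code
}

void loop() {             // the central looping function
  checkSerial();          // repeatedly check for new states
}

void checkSerial() {      // a function definition
  if (Serial.available() > 0) {
    incomingByte = Serial.read();
  }
}
<<<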

arduino.cc/en/Reference/Servo
The eyeballs inside Twitr_janus' head are driven by servos. Within the Arduino sketch, the Servo.h library is loaded. This does the heavy lifting, so you just have to send a position number to the board and the library will convert it into a servo command.
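A minimal example of that kind of servo control (standard Servo library usage; the pin number and angles here are arbitrary choices, not the project's values):

>>>
#include <Servo.h>       // the standard Arduino servo library

Servo eyeServo;

void setup() {
  eyeServo.attach(9);    // servo signal wire on pin 9 (arbitrary choice)
}

void loop() {
  eyeServo.write(90);    // send a position number; the library does the rest
  delay(1000);
  eyeServo.write(60);    // another preset-style position
  delay(1000);
}
<<<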

The complete Arduino sketch is available to download here from GitHub as a text file:
cloud.github.com/downloads/rosemarybeetle/Twitr-Janus/twitr_janus_arduino_09.txt

Twitr_janus and other related posts on Making Weird Stuff

makingweirdstuff.blogspot.co.uk
This is my blog, where I post about making things - mostly rather silly ones. These posts are about the individual pieces of work needed to create Twitr_janus. Some other posts are included that offer more information on relevant techniques from other projects.

Making a video eyeball - How to take apart a Microsoft Lifecam and insert it into a deodorant ball to make a video eyeball.
makingweirdstuff.blogspot.co.uk/2012/10/hacking-microsoft-lifecam-to-make-video.html




Detailed description of stripping speech data from a Google spreadsheet:
makingweirdstuff.blogspot.co.uk/2012/08/twitrjanus-is-now-speaking-data-sent.html





Making video and audio connections over Skype
makingweirdstuff.blogspot.co.uk/2012/06/twitr-janus-has-sight-and-hearing.html




Building a relay circuit to handle car door lock power switching from the Arduino
makingweirdstuff.blogspot.co.uk/2012/08/a-self-contained-relay-circuit-unit.html




Hacking a computer amplifier to get a measurable audio signal for lip-sync triggering, including Arduino peak-detection code
makingweirdstuff.blogspot.co.uk/2012/10/hacking-out-audio-amplifier-for-text-to.html



Creating a silicon latex mould from a plaster face mask model.
makingweirdstuff.blogspot.co.uk/2012/09/making-silicon-latex-mould-for.html
Separate papier mache technique post
makingweirdstuff.blogspot.co.uk/2011/12/head-for-pantomime-goose-costume-part-2.html


Prototyping face masks with latex test-mouldings, including the use of sawdust to strengthen the latex
makingweirdstuff.blogspot.co.uk/2012/09/first-mouldings-from-twitrjanus-face.html




Designing and creating a clay face mask model for the face.
makingweirdstuff.blogspot.co.uk/2012/08/modelling-twitrjanus-face-puppet-mask.html




Brief overview of Twitr_janus
makingweirdstuff.blogspot.co.uk/2012/11/twitrjanus-overview-november-2012.html




Building the puppet head with papier mache over a positive cranium moulding
makingweirdstuff.blogspot.co.uk/2012/09/making-twitrjanus-skull.html





Fixing servo mountings inside the puppet head
makingweirdstuff.blogspot.co.uk/2012/10/making-servo-mounts-inside-twitrjanus.html





Testing the Processing script for eyeball control, including use of an onscreen eyeball simulation
makingweirdstuff.blogspot.co.uk/2012/08/remote-control-eyeballs-using-remote.html





Setting up audio connections
makingweirdstuff.blogspot.co.uk/2012/10/hacking-out-audio-amplifier-for-text-to.html






Miscellaneous


These pages explain, in fantastic detail, how someone took apart their Microsoft LifeCam. It was so useful - respect to this guy!
Gary Honis - Hacking open a Microsoft LifeCam
ghonis2.ho8.com/lifecam/lifecam1.html