Arduino

It’s been a while since my last post but even longer since I last dabbled in the field of electronics. Nice segue there huh?

The last time that I tinkered with electronic components was more than 20 years ago, when I think I was at sixth form college. I had an interest in electronics in my very early teens and began playing with a soldering iron, soldering components to Veroboard (or stripboard) following instructions from a children’s book on electronics. Of course the internet wasn’t around then and all I had to go on were photos and diagrams in books and descriptions of how soldering should be done.

I then had an electronics project kit, either bought for me or bought with my pocket money; I’m not sure which. It was one of those wooden box affairs from Tandy (or Radio Shack if you’re in the U.S.) with little springs where you could attach wires to the components that were fixed to a rigid card base. It was interesting but I don’t think I learnt a lot from it.

When the subject of electronics eventually came up at school, when I was 15 I guess, I simply lost interest. Possibly the theory didn’t interest or inspire me. Perhaps it was the teacher. More likely I just had more fun constructing circuits and experimenting; our school was probably short on practical experiments. Anyway. It’s been a few years.

The whole Arduino ‘project’ came to my attention a few years ago and last November I finally got around to buying an Arduino Uno, a small breadboard and a small range of electronic components. The plan was to follow some of the projects in the ‘Beginning Arduino’ book that I’d bought earlier in the year.

Beginning Arduino

The Arduino, along with the breadboard and components, has sat in the cardboard box in which it was delivered for the past 6 or 7 months (not strictly true, as I did get the Arduino out once to connect it to my computer and upload a flashing LED program). The other week I finally got everything out and began to do something constructive with it. This does mean that I’ve placed my current Javascript-HTML5-’geolocation-game-thing’ project on hold. I can’t do everything and I’m also currently very busy at work doing my day job.

The first project that I followed from the Beginning Arduino book was to construct ‘interactive traffic lights’. I skipped some of the earlier projects because they were extremely simple but decided to try this traffic light one practically so that I would at least start using that abandoned box of electronic ‘stuff’ and get used to handling fiddly components again.

I had forgotten how fiddly they were. I’d purchased a pack containing some 160 resistors of different resistances and spent several minutes trying to find the 100 ohm resistor that I needed for the project. Of course once I’d found it I then did some calculations to determine what ‘current limiting resistor’ my Light Emitting Diodes (LEDs) required only to discover that I needed to locate a 150 ohm resistor from my pack. So that required further hunting.
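The calculation the book walks you through is just Ohm’s law applied to the supply voltage left over after the LED’s forward voltage drop. Here’s a quick sketch of the sums in JavaScript; the specific values are my assumptions rather than anything from the book: a 5V Arduino pin, a typical red LED forward voltage of about 2V and a 20mA target current.

```javascript
// Ohm's law for an LED current-limiting resistor: R = (Vsupply - Vf) / If.
// Working in milliamps keeps the arithmetic tidy.
function limitingResistor(supplyVolts, forwardVolts, currentMilliamps) {
    return (supplyVolts - forwardVolts) * 1000 / currentMilliamps;
}

// Assumed values: 5V supply, ~2V forward voltage, 20mA target current:
var r = limitingResistor(5, 2, 20);
console.log(r + ' ohms'); // 150 ohms - hence the hunt for a 150 ohm resistor
```

Drop the target current to 15mA and the same sum gives 200 ohms, which is why the ‘right’ resistor depends on the LED’s datasheet rather than there being one standard value.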

Pack of 610 resistors


150 ohm resistors

Once I’d got all of the necessary components laid out I inserted them into my breadboard and then added wires to connect the breadboard to the Arduino and to complete the circuit. Actually I was a little lazier than this as I didn’t place all of the components and wires until another day.

I then copied out the code from the book into the Arduino development environment, compiled it and uploaded it to the Arduino device via a USB cable connected to my MacBook.

The initial result was the illumination of 2 of the LEDs; or did nothing happen at first? I can’t recall now. Either way, the little push-button switch wasn’t working. When pushed it should have initiated the traffic light sequence, but pressing it resulted in no change. I wasn’t sure if I’d placed the switch correctly (it had 4 pins, not just 2) so I rotated it by 90 degrees and tried again. Still no success. I decided that the switch pins weren’t fitting into the breadboard holes properly, so I removed the switch from the board, added some extra wires and held those wires against the pins of the switch in my hand. That did the trick. I then wondered why I was bothering to mess around with the switch at all, as I had two wires that I could simply touch together momentarily to perform the same task!

LED traffic lights

So that first project was useful in that it got me to investigate the field of electronics again. With the Arduino I’m combining that journey into electronics with my existing interest/hobby/job in programming. Programming the Arduino is the easy bit, although there are some things to learn when it comes to communicating with the components attached to it. It’s very tempting to jump into some of the later projects in the book and begin looking at projects that involve controlling motors and servos. It’s also tempting to get hold of some of the Arduino ‘shields’ and just plug larger components together. With these shields, basically pre-built circuit boards that plug into the Arduino, I could have the little device connected to the internet, controlling a set of motors or supplying a video feed. However I am telling myself to walk before I run and to get back to some of the basics first. I’m also telling myself that it’s not a race and that I should just enjoy the experience, and I am enjoying it so far.

I guess that ultimately my aim is to build some form of robot, complete with motor control and sensors, but for now I’ll be content with getting LEDs to flash and getting a little Liquid Crystal Display (LCD) to show messages on its screen.

Looking at Git

I don’t seem to have posted anything of value for quite a while. Of course it’s always debatable as to whether anything that I write is of ‘value’! I’ve written several draft posts in recent months but none of them have come to anything. This time I’m going to see if I can draft and commit a post on Git. Nice segue from committing a blog post to talking about Git, you may have noticed there.

I’ve used CVS and SVN at work and I’ve used SVN for any personal projects that I’ve worked on at home, and they have served me well; SVN in particular. I have no real reason to use Git at the moment other than to give it a try, as I’ve heard a LOT about it in the last two years or so, and to gain some experience in using it.

I didn’t want to mess around with setting up my own server and I didn’t want to spend lots of money on it. Ok, I didn’t want to spend any money. Initially I considered GitHub as it almost seems to be the de facto standard for hosting Git repositories. The thing with GitHub is that you can host a repo with them for free if it’s an open source project but you have to pay for private projects. Of course I don’t have an issue with that; they need to have some kind of business model where they make money. However I didn’t want to host my little project as open source, just on principle really. It’s not that I am being precious about the code that I write. If I was writing a code library or framework then I would definitely host the code in an open source repo, but not for a little personal coding project.

I’ve gone for Bitbucket instead, where small private projects can be hosted for free. I cannot comment on the quality of their free service too much at the moment as it’s early days, but all seems good so far. They also have a free GUI application named SourceTree which, at the time of writing this post, is I believe available for Mac OS only. I did download and install it but it seemed better to gain some familiarity with using Git at the command line initially.

One thing that I have noticed in using Git for a very short time is that using it in a development team of one, i.e. just myself, does feel a little odd. When I’ve used SVN in the past I’ve not had this sensation. With SVN you have your local copy of your code and you commit to the repository depending on your regime: after a significant change, or when you’ve made a change and everything runs correctly (i.e. compiles or whatever). Ok, I have to admit that most of the time I will make a commit once I’ve made a relatively minor change to my code and simply want to make sure that I’ve got a backup, and that I have a copy of that code tucked away before I make further changes and break things so badly that I want to return to the previous, working version. Because a Git commit only stores your changes in the local repository, the question in my mind is: when do you make that push to the server?

iPad

I write this blog post on my ‘new’ iPad. It wasn’t until I was looking at my options in the store that I realised that the new iPad was actually called the ‘New’ iPad and not the iPad 3. What’s the next iPad going to be called? The New, New iPad? Anyway I’m sure that question’s been asked before by better people than myself.

Typing on the iPad keyboard isn’t actually as bad as I thought it would be, either.

Anyway. I actually purchased a Google Nexus 7 from eBuyer some weeks ago. I didn’t want to pay the price of an iPad and thought that an Android-driven tablet would be better from a mobile platform development point of view. After less than an hour of using my Nexus 7 I discovered that the screen was separating from the main body of the device at the left-hand edge. After a little searching online I discovered that this is a relatively common problem. I umm-ed and ahhh-ed about this for a while, trying to decide if I should live with it or return it. I decided that the problem could get worse and so returned it for a refund (predicting that I had a unit from a dodgy batch and that any replacement would come from the same batch).

After spending way too long attempting to return the device to eBuyer and finally getting a refund (must check my bank account) I decided to purchase the same device from a walk-in store, namely John Lewis, as they have an excellent returns policy (in case I had the same problem as with the previous unit). I attempted this at the weekend. Unfortunately I could not find a 16GB version of the Nexus, only an 8GB version in a PC World store, as John Lewis had no stock. This led me to have a rethink and consider the iPad again.

Well, it’s performing well so far and I have no complaints. Apart from the fact that my partner uses it more than I would like!


Ellipse Class for KineticJS

I’ve been taking a look at the KineticJS library in recent weeks. I wanted to draw an ellipse rather than a circle, and did a little Googling before discovering a solution.

The solution, once I saw it, was a lot more obvious than I was expecting. I decided to extend the KineticJS Shape object with an Ellipse object, as detailed in the following code block:

///////////////////////////////////////////////////////////////////////
//  Ellipse
///////////////////////////////////////////////////////////////////////
/**
 * Ellipse constructor
 * @constructor
 * @augments Kinetic.Shape
 * @param {Object} config
 */
Kinetic.Ellipse = function(config) {
    this.setDefaultAttrs({
        width: 0,
        height: 0
    });

    this.shapeType = "Ellipse";

    config.drawFunc = function() {
        var canvas = this.getCanvas();
        var context = this.getContext();

        var w = this.attrs.width / 2;
        var h = this.attrs.height / 2;
        var xPos, yPos;

        context.beginPath();
        this.applyLineJoin();

        for (var i = 0 * Math.PI; i < 2 * Math.PI; i += 0.1) { // was 0.01 but slow with lots of circles

            xPos = (0 + (this.attrs.width / 2)) - (h * Math.sin(i)) * Math.sin(0 * Math.PI) + (w * Math.cos(i)) * Math.cos(0 * Math.PI);
            yPos = (0 + (this.attrs.height / 2)) + (w * Math.cos(i)) * Math.sin(0 * Math.PI) + (h * Math.sin(i)) * Math.cos(0 * Math.PI);

            if (i == 0) {
                context.moveTo(xPos, yPos);
            } else {
                context.lineTo(xPos, yPos);
            }
        }
        context.closePath();
        this.fillStroke();
    };
    // call super constructor
    Kinetic.Shape.apply(this, [config]);
};
/*
 * Ellipse methods
 */
Kinetic.Ellipse.prototype = {
    /**
     * set width
     * @param {Number} value
     */
    setWidth: function(value) {
        this.attrs.width = value;
    },
    /**
     * get width
     */
    getWidth: function() {
        return this.attrs.width;
    },
    /**
     * set height
     * @param {Number} value
     */
    setHeight: function(value) {
        this.attrs.height = value;
    },
    /**
     * get height
     */
    getHeight: function() {
        return this.attrs.height;
    }
};

// extend Shape
Kinetic.GlobalObject.extend(Kinetic.Ellipse, Kinetic.Shape);
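It’s worth noting what the drawFunc loop is doing mathematically: because the rotation angle is fixed at 0*Math.PI, the sin terms vanish and the cos terms become 1, so the loop reduces to the standard parametric form x = cx + w·cos(i), y = cy + h·sin(i). Here’s a standalone sketch of just that maths, independent of KineticJS:

```javascript
// Generate points around an axis-aligned ellipse, as the drawFunc loop does.
// cx, cy: centre; w, h: semi-axes (half the width/height); step: angle increment.
function ellipsePoints(cx, cy, w, h, step) {
    var points = [];
    for (var i = 0; i < 2 * Math.PI; i += step) {
        points.push({ x: cx + w * Math.cos(i), y: cy + h * Math.sin(i) });
    }
    return points;
}

// Every generated point satisfies the ellipse equation
// (x-cx)^2/w^2 + (y-cy)^2/h^2 = 1 (within floating point error):
var pts = ellipsePoints(50, 25, 50, 25, 0.1);
pts.forEach(function(p) {
    var e = Math.pow((p.x - 50) / 50, 2) + Math.pow((p.y - 25) / 25, 2);
    // e is ~1 for every point
});
```

The equation check is a handy sanity test if you tweak the step size for performance, as I did when changing 0.01 to 0.1.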

From the Archive: Gamification

Here’s one of my old posts from my older Ruby on Rails based blog site. I was rummaging around in my old archive and dug this one up…

Gamification has been a buzzword on the net for some time now. Can’t remember when I first heard the term used. May have been when listening to the Think Vitamin podcast; think that it was this episode.

I was also listening to the TechStuff podcast last weekend and they had an episode dedicated to the topic. Whilst listening I was kind of inspired, which is odd as I was carrying out the weekly chore of ironing at the time!

I jotted down some thoughts and ideas at the time. However I feel that it’s one of those occasions where you have what seems to be a great idea at the time, and then it turns out to be a really crappy idea when you wake up the next morning. I thought of using gamification principles to drive traffic to my personal site and to keep people coming back. It seemed a good idea at the time but now seems a rather lame one.

In essence my thoughts were to give readers of my blog ‘points’ if they followed me on Twitter, submitted a genuine comment, followed a particular link, ‘liked’ a particular post on Facebook, etc. These points would either be displayed on the site in the form of a leaderboard or by using a badge system. I even thought that I might be able to get a few media types who I know to produce some nice badges for me. But, as I say, in the cold light of day it seemed a rather daft idea.

Stupid idea or not, it hasn’t stopped me from carrying out a little research about gamification every time I go online. I found a rather interesting post about gamification at the UX Magazine site.

I can’t remember how I came across the BigDoor site, whether it was via the TechStuff podcast or on one blog post or another. It seems like a very fast way of adding gamification elements to a web site, but my concern is where all of the data that’s being collected is actually stored and how it is used. I know that if I create my own server-side code to support gamification on my site any user data will be stored in my backend database and won’t be sold to any third party. I don’t have that same feeling about using someone else’s services.

Anyway, I’ve rambled on for long enough. My research into gamification will continue, be it the ethics behind it or the technical implementation. I will surely blog about it in the future.

*UPDATE* Just wanted to mention the Practical Ethics post on the ethics of gamification.

Change of Appearance

I’ve finally updated the theme of my site so that it looks very much like my old site that was written with Ruby on Rails. I have a number of issues with it but it’ll do for the time being.

I’ve plans to change the styling again but it can wait.

Three.js: Very Basic Animation

My previous post described how I created a simplistic 3D model with Blender and how I imported that into a three.js 3D scene. In this post I look at how a Blender model can be loaded and animated with the aid of the three.js library. Oh, it seems a rather long post too. Give yourself a treat if you manage to read all of the way through!

Work in Blender

To begin I returned to Blender and created a new 3D model. A model that was even more simplistic than my previous robot-type model. I wanted to produce something very simple that I could animate easily. I didn’t want to worry about having to add bones to my 3D model or anything like that. In the end I essentially created a box with some C-shaped ‘feet’ and I animated the movement of the feet and the box ‘body’ in Blender to give the illusion of, erm, a walking box I guess! An example of the final result can be seen here. Only works in WebGL-enabled browsers by the way.

As I write this post I’m trying to recall where I located the Blender Python script that allowed me to export a JavaScript (.js) file from Blender. I’m fairly sure that I found it amongst the files that I’d downloaded from the three.js GitHub repository, located down in the utils/exporters/blender path somewhere.

In order to export my model from Blender I used the Export > Three.js (.js) option from the Blender File menu; thus calling upon the Python export script that I’d previously installed.

Export option in Blender

From the Blender export window I then selected the options shown in the next image.

Blender Export Options

This gave me a .js file that contained the definition of my simple model.

I want to briefly make a reference to the site of Kadrmas Concepts, which is where I found a nice tutorial on how to use bones in Blender. I didn’t actually use bones in my simple model in the end, but their post about modelling and exporting was to the point.

I also found the superb THREE Fab tool at the same time. I think that I mentioned this in my previous post; it’s a great way of importing models and just playing with primitives and lighting. I managed to solve some minor issues that I had with my basic animation by using this tool. It’s nice and easy to just drag and drop the exported .js file onto the window in the browser and view (and animate) your model.

Onto the interesting bit…

The Code

Again I’ve used the same code set-up that I’ve used and discussed previously in this series of blog posts. Therefore I’m not going to explain all of the details again, such as how the require.js stuff works or the setting up of the basic 3D scene.

Here’s the first chunk of javascript.

require(['96methods/BotCharacter', 'libraries/RequestAnimationFrame', 'libraries/Three', 'jquery'], function(Character) {

	var camera, stage, renderer;
	var character = new Character('./models/robot02_01_feet.js');

	// Initialise and then animate the 3D scene!
	init();
	animate();

	function init() {

		// Begin loading the character model:
		character.load();

		// Instantiate the 3D scene:
		stage = new THREE.Scene();

		// Instantiate an Orthographic camera this time.
		// The Left/Right/Top/Bottom values seem to be relative to the scene's 0, 0, 0 origin.
		// The best result seems to come if the overall viewable area is divided in 2 and
		// the Left & Bottom values set to negative
		camera = new THREE.OrthographicCamera(
			window.innerWidth / -2, 	// Left
			window.innerWidth / 2,		// Right
			window.innerHeight / 2,		// Top
			window.innerHeight / -2,	// Bottom
			-2000,						// Near clipping plane
			1000 );						// Far clipping plane

		// Set the camera position:
		camera.position.y = 100;
		camera.position.x = 200;
		camera.position.z = 200;

		camera.lookAt(new THREE.Vector3(0, 0, 0));

		// Add the camera to the scene/stage:
		stage.add(camera);

		// Add some lights to the scene
		var directionalLight = new THREE.DirectionalLight(0xffffff, 1.0);
		directionalLight.position.x = 1;
		directionalLight.position.y = 0;
		directionalLight.position.z = 0;
		stage.add( directionalLight );

		var directionalLight2 = new THREE.DirectionalLight(0xeeeeee, 2.0);
		// A different way to specify the position:
		directionalLight2.position.set(-1, 0, 1);
		stage.add( directionalLight2 );

		// Instantiate the renderer
		renderer = new THREE.WebGLRenderer();
		// .. and set its size:
		renderer.setSize(window.innerWidth, window.innerHeight);

		// Place the renderer into the HTML (inside the #container div):
		$('#container').append(renderer.domElement);

	}

At line 4 I’m simply calling the constructor of my Character class again as I have in previous examples. This time I’m passing in an argument that specifies the path of my 3D model. The Character class will load this model, when the load() method is invoked, and will control the animation for me; more of that shortly.

Again, at lines 7 & 8, I am initialising the scene and beginning the animation loop. Following this is the definition of the init() method.

Main point of interest here is at line 13 where I call the load() method of my Character class. Think that’s the only main point of interest in the init() function actually!

This leads us to the remainder of the file and the animate() function.

	function animate() {
		// Defined in the RequestAnimationFrame.js file, this function means that the
		// animate function is called upon timeout:
		requestAnimationFrame( animate );

		// Find out if the robot has loaded:
		if(character.hasLoaded()) {
			// Add the character to the stage?
			if(!character.onStage()) {
				character.addToStage(stage);
			}
			// Animate:
			else {
				character.animateCharacter();
			}
		}

		render();

		// Update the character position
		TWEEN.update();
	}

	function render() {

		// *** Update the scene ***
		renderer.render(stage, camera);
	}
});

Lines 68 to 77 contain the key points of interest. Essentially I’m determining whether my 3D model has loaded, whether it has been added to the stage yet and, if it’s already on the stage, calling my animateCharacter() method to update the character animation.
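That load/add/animate decision is effectively a tiny state machine: waiting for the model, adding it to the stage once, then animating on every frame after that. Here’s a sketch of the logic factored out, using a hypothetical stub object that just mimics the hasLoaded()/onStage()/addToStage()/animateCharacter() interface of my Character class:

```javascript
// The per-frame decision logic from animate(), factored out so the state
// sequence is visible. The return value names which action was taken.
function stepCharacter(character, stage) {
    if (!character.hasLoaded()) return "waiting";
    if (!character.onStage()) {
        character.addToStage(stage);
        return "added";
    }
    character.animateCharacter();
    return "animated";
}

// A minimal stub (not the real Character class) to walk through the states:
var stub = {
    loaded: false, staged: false,
    hasLoaded: function() { return this.loaded; },
    onStage: function() { return this.staged; },
    addToStage: function(stage) { this.staged = true; },
    animateCharacter: function() {}
};

console.log(stepCharacter(stub, {})); // waiting
stub.loaded = true;                   // pretend the async load has completed
console.log(stepCharacter(stub, {})); // added
console.log(stepCharacter(stub, {})); // animated
```

Note the model is added to the stage on one frame and only animated from the following frame onwards, which is exactly what the if/else in animate() does.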

Right, now onto the Character class definition.

define(['libraries/Three', 'libraries/mootools-core-1.4.2'], function() {

	return new Class(function(modelPath) {

		// Private members
		var mesh = null;
		var modelLoader = null;
		var loadedModel = false;

		var onStage = false;

		var animCycleDuration = 1000,
			numKeyframes = 39,
			scaleFactor = 20.5;

		var currentKeyframe,
			lastKeyframe;

		var lastFrameRenderedFlag = false;

		Object.append(this, {
			// Getters/setters
			getMesh: function() { return mesh; },
			setMesh: function(value) { mesh = value; },

			// Load the model
			load: function() {
				// Instantiate the JSON loader:
				modelLoader = new THREE.JSONLoader();

				// Initiate loading of the model and define callback:
				modelLoader.load( modelPath, function ( geometry ) {

					// Create a mesh based upon the loaded geometry:
					mesh = new THREE.Mesh( geometry, new THREE.MeshLambertMaterial( { color: 0xFF6060, morphTargets: true } ) );

					// Scale-up the model so that we can see it:
					mesh.scale.set( scaleFactor, scaleFactor, scaleFactor );

					// Perhaps set flag and the main scene can ask this character if it's loaded?
					loadedModel = true;
				});
			},

			// Checks if the character model has loaded:
			hasLoaded: function() {	return loadedModel; },
			// Checks if the character model has been added to the stage:
			onStage: function() { return onStage; },
			// Adds the character model to the stage:
			addToStage: function(stage) {
				stage.add(mesh);
				onStage = true;
			},

			animateCharacter: function() {

				// Calculate interpolation - how long a single frame is shown for:
				var interpolation = animCycleDuration / numKeyframes;

				// Determine the current frame (keyframe) by calculating how much
				// more time of our animation cycle remains to be played thru.
				var time = Date.now() % animCycleDuration;

				var keyframe = Math.floor( time / interpolation ) + 1;

				// Update the frame details if the keyframe just calculated is different to the
				// current keyframe:
				if ( keyframe != currentKeyframe ) {

					// Update the morphTargetInfluences array to progress the animation cycle:
					mesh.morphTargetInfluences[ lastKeyframe ] = 0;
					mesh.morphTargetInfluences[ currentKeyframe ] = 1;
					mesh.morphTargetInfluences[ keyframe ] = 0;

					// Track previous/last keyframe and the current keyframe:
					lastKeyframe = currentKeyframe;
					currentKeyframe = keyframe;

					// Not 100% sure about this little bit. I think it helps for a smoother animation,
					// especially when animation duration is long, according to the catchvar.com site!
					// However I have found that it screws up the my little bot animation so I've taken it out.
					// Essentially the movement is very jerky when it comes to the forward movement of the whole mesh.
					//mesh.morphTargetInfluences[ keyframe ] = ( time % interpolation ) / interpolation;
					//mesh.morphTargetInfluences[ lastKeyframe ] = 1 - mesh.morphTargetInfluences[ keyframe ];

					// Determine if the character mesh should be moved forward.
					// The character mesh should be moved once the animation cycle has completed,
					// to provide the suggestion of continuous walking.

					// Has the final frame of the cycle been rendered and, thus, we are now going
					// to render the first frame in the cycle?
					if(lastFrameRenderedFlag) {
						// Clear the flag:
						lastFrameRenderedFlag = false;
						// The character in the Blender animation moves forward by 2 units during a complete
						// animation cycle, so want to move character by that amount
						// and factor in the scaling-up, hence multiplying scale factor by 2:
						mesh.position.x = mesh.position.x + (2* scaleFactor);
					}
					// The keyframe will be < the lastKeyframe when the last frame
					// of the animation cycle is being rendered, i.e. when the final
					// frame is reached:
					if(keyframe < lastKeyframe)
					{
						// Flag that the last frame has been reached:
						lastFrameRenderedFlag=true;
					}
				}
			}
		}); // End of Object.append
	});
});

I apologise as I have rather dumped a lot of code there. I’ve also written a lot of comments in that block of code.

The first interesting bit is on lines 12 to 14 where I’ve declared some local variables. Here they are again:

var animCycleDuration = 1000,
    numKeyframes = 39,
    scaleFactor = 20.5;

My animation in Blender runs for a second, so that’s where the 1000 came from, i.e. 1000 milliseconds. I have a total of 40 frames in my animation and I probably should have named my numKeyframes variable something like lastKeyframe, as the frame numbering goes from 0 to 39. I’m scaling up my model as it can barely be seen otherwise.
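To make those numbers concrete, here’s the frame-timing arithmetic on its own: with a 1000ms cycle and frames numbered 0 to 39, each frame is displayed for 1000/39 ≈ 25.6ms, and the elapsed time within the cycle maps to a keyframe index just as it does later in the animateCharacter() method:

```javascript
// Frame timing for the walk cycle, mirroring animateCharacter().
var animCycleDuration = 1000; // one second per full walk cycle
var numKeyframes = 39;        // frames are numbered 0..39

// How long each frame is shown for:
var interpolation = animCycleDuration / numKeyframes; // ~25.64ms

// Map an elapsed time (ms) to the keyframe index, as the method does:
function keyframeAt(timeMs) {
    var time = timeMs % animCycleDuration;
    return Math.floor(time / interpolation) + 1;
}

console.log(keyframeAt(0));   // 1 - start of the cycle
console.log(keyframeAt(999)); // 39 - end of the cycle, then it wraps
```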

Next comes the load() method that we saw being called earlier in the init() function:

// Load the model
load: function() {
	// Instantiate the JSON loader:
	modelLoader = new THREE.JSONLoader();

	// Initiate loading of the model and define callback:
	modelLoader.load( modelPath, function ( geometry ) {

		// Create a mesh based upon the loaded geometry:
		mesh = new THREE.Mesh( geometry, new THREE.MeshLambertMaterial( { color: 0xFF6060, morphTargets: true } ) );

		// Scale-up the model so that we can see it:
		mesh.scale.set( scaleFactor, scaleFactor, scaleFactor );

		// Perhaps set flag and the main scene can ask this character if it's loaded?
		loadedModel = true;
	});
},

I am using the JSONLoader class to load my 3D model. At line 32 I call the load() method and specify a callback function that will be invoked once the model has loaded. In the callback I define a mesh using the loaded geometry, scale it up and set a flag to show that the model is ready (this flag is checked by the animate() function in my main block of javascript code).

Following this are the definitions of some methods used by the main javascript code to determine if the model is loaded, if it’s been added to the scene (or stage as I’ve called it) and a method used to add the loaded model to the stage.

Next comes the animateCharacter() method. Note that I’ve reduced some of the comments in the following section to make it a little easier to follow.

animateCharacter: function() {

	// Calculate interpolation - how long a single frame is shown for:
	var interpolation = animCycleDuration / numKeyframes;

	// Determine the current frame (keyframe) by calculating how much
	// more time of our animation cycle remains to be played thru.
	var time = Date.now() % animCycleDuration;

	var keyframe = Math.floor( time / interpolation ) + 1;

	// Update the frame details if the keyframe just calculated is different to the
	// current keyframe:
	if ( keyframe != currentKeyframe ) {

		// Update the morphTargetInfluences array to progress the animation cycle:
		mesh.morphTargetInfluences[ lastKeyframe ] = 0;
		mesh.morphTargetInfluences[ currentKeyframe ] = 1;
		mesh.morphTargetInfluences[ keyframe ] = 0;

		// Track previous/last keyframe and the current keyframe:
		lastKeyframe = currentKeyframe;
		currentKeyframe = keyframe;

		// Determine if the character mesh should be moved forward.
		// The character mesh should be moved once the animation cycle has completed,
		// to provide the suggestion of continuous walking.

		// Has the final frame of the cycle been rendered and, thus, we are now going
		// to render the first frame in the cycle?
		if(lastFrameRenderedFlag) {
			// Clear the flag:
			lastFrameRenderedFlag = false;
			// The character in the Blender animation moves forward by 2 units during a complete
			// animation cycle, so want to move character by that amount
			// and factor in the scaling-up, hence multiplying scale factor by 2:
			mesh.position.x = mesh.position.x + (2* scaleFactor);
		}
		// The keyframe will be < the lastKeyframe when the last frame
		// of the animation cycle is being rendered, i.e. when the final
		// frame is reached:
		if(keyframe < lastKeyframe) {
			// Flag that the last frame has been reached:
			lastFrameRenderedFlag=true;
		}
	}
}

At lines 58 to 64 I am essentially calculating which of my 40 keyframes I should be displaying.

The statement at line 68 determines if there has been a change in the keyframe to be rendered since the last time this method was called. If so, the morphTargetInfluences array is updated so that the array cell that refers to the frame to be shown is set to 1, bearing in mind that currentKeyframe always refers to the frame being rendered on that pass. Next, lines 76 and 77 store the currentKeyframe and keyframe ready for the next time around.

The code described thus far achieves the animation of our model; cycling through the frames of our animation and looping back once we have completed one full cycle of animation, i.e. once we’ve rendered all 40 frames of animation. The remainder of the code in the animateCharacter() method moves the whole model forwards a little bit each time we complete one cycle of animation. The result is that the character appears to walk slowly forwards.

The condition at line 85 determines if the lastFrameRenderedFlag has been set and, if it has, resets the flag and moves the model mesh forward by 2 units (the character model in Blender was 2 units in size) multiplied by the scale factor that was used to enlarge the model in the first place. The condition that follows this on line 96 determines if we are about to render the last frame in the animation cycle and, if so, sets the lastFrameRenderedFlag so that the mesh movement, as just described, can take place the next time through the render sequence.
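The forward movement is simple arithmetic but worth spelling out: 2 Blender units per cycle multiplied by the scale factor of 20.5 gives a jump of 41 scene units each time a cycle completes, and with a 1000ms cycle that works out at an effective walking speed of 41 units per second:

```javascript
// Per-cycle forward movement of the mesh, as in animateCharacter().
var unitsPerCycle = 2;      // the character covers 2 Blender units per walk cycle
var scaleFactor = 20.5;     // the mesh is scaled up by this amount
var cycleDurationMs = 1000; // one walk cycle per second

var moveStep = unitsPerCycle * scaleFactor;      // 41 scene units per cycle
var speed = moveStep / (cycleDurationMs / 1000); // 41 scene units per second

console.log(moveStep, speed); // 41 41
```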

Conclusions

The working example can be found here. It’s a bit ugly but this is a work in progress. Also remember that it only works in WebGL-enabled browsers.

I’ve realised that the material applied to my model doesn’t seem to be showing up correctly. Pretty sure, upon editing this post, that it’s because of the mesh definition at line 35 of my BotCharacter.js code.

Whilst writing up this post I discovered that my character model had the wrong pivot point, or centre of rotation. I was going to amend my example code so that the character walked in a circle, by rotating the mesh each time an animation cycle completed, but that was when I found that the character did not rotate as intended. I went back to the THREE Fab site and dropped in my model; when rotating around the y-axis I saw the weird rotation again. Taking another look in Blender, I recalled that the pivot point was out.

This is something I’ll need to look into in the future. It raises a number of concerns about how I’m achieving the character animation and whether there’s a much better way of doing it. I realised that the centre-point of my 3D scene is the point around which the animating character will rotate, and I have no idea at the moment whether this centre-point can be moved while the character is animating. If anyone reading this has any thoughts on the topic and would like to enlighten me then please comment.

Three.js: Importing a Model

This post discusses a small piece of code that uses the three.js library to load a 3D Collada scene. Upon starting to write it I thought I’d take a look at some of the problems I’d encountered along the way and, to my surprise, realised that I probably had enough material for two posts rather than just one.

This post will look at exporting a 3D model from Blender for use with three.js. I’ll create another post looking at how I then managed to produce a 3D model in Blender and animate it with three.js.

Blender

I had first used Blender several years ago (I even bought one of those ‘For Dummies’ books), but the Blender interface has changed since I last used the software, and watching a video seemed a nice, quick way of diving back into it. I therefore looked at a few Blender tutorials initially to get my head back into the application.

After some hours playing around with what I’d seen in the video tutorials I managed to make an extremely crude-looking robot. When I say crude I don’t mean that it featured any phallic appendages! It was simply a very basic model.

In order to export my robot model from Blender I used the Export > COLLADA (.dae) option from the Blender File menu, which generated a Collada .dae file for me. At a later point in my experimentation with Blender I attempted to produce a very simple animation and to export it. Well, this didn’t work terribly well until I installed a script for Blender that allowed me to export my model as a Three.js (.js) file. But more on that in my next post.

One of the first problems that I encountered when initially loading my robot model into the three.js scene was that of weirdly rotated objects. Essentially, objects that I had rotated in Blender in order to create my robot model were appearing non-rotated when imported into my scene. I went so far as to put a question up on the StackOverflow site but then, after investigation, managed to answer my own question. Well, OK, I didn’t fix the problem myself, but thanks again to the Mr. Doob github area I found the answer: I downloaded the three.js library from the ‘dev’ branch rather than the ‘master’ branch on github, and my immediate problems were solved.

When it came to exporting my model from Blender, this post (again on the Mr. Doob github pages) came in handy, although it only came into use when I began investigating animation of loaded models.

The Source

Anyway, the code is as follows. I’ve based much of this code on my previous examples, so some elements, such as my use of require() at the top, may make more sense if you read some of my earlier posts about my experimentation with three.js. I’ve only included the JavaScript source here.

I think that the code, with its comments, is fairly self explanatory.

require(['libraries/RequestAnimationFrame', 'libraries/Three', 'jquery'], function() {

	var camera, scene, renderer;
	var dae;

	// Create an instance of the collada loader:
	var loader = new THREE.ColladaLoader();

	// Need to convert the axes so that our model does not stand upside-down:
	loader.options.convertUpAxis = true;

	// Load the 3D collada file (robot01.dae in my example), and specify
	// the callback function that is called once the model has loaded:
	loader.load( './models/robot01.dae', function ( collada ) {

		// Grab the collada scene data:
		dae = collada.scene;

		// No skin applied to my model so no need for the following:
		// var skin = collada.skins[ 0 ];

		// Scale-up the model so that we can see it:
		dae.scale.x = dae.scale.y = dae.scale.z = 25.0;

		// Initialise and then animate the 3D scene
		// since we have now successfully loaded the model:
		init();
		animate();
	});

	function init() {

		// Instantiate the 3D scene:
		scene = new THREE.Scene();

		// Instantiate an Orthographic camera this time.
		// The Left/Right/Top/Bottom values seem to be relative to the scene's 0, 0, 0 origin.
		// The best result seems to come if the overall viewable area is divided in two
		// and the Left & Bottom values set to negative:
		camera = new THREE.OrthographicCamera(
			window.innerWidth / -2, 	// Left
			window.innerWidth / 2,		// Right
			window.innerHeight / 2,		// Top
			window.innerHeight / -2,	// Bottom
			-2000,						// Near clipping plane
			1000 );						// Far clipping plane

		// Set the camera position so that it's up top and looking down:
		camera.position.y = 100;

		// Rotate around the x-axis by -45 degrees:
		camera.rotation.x -= 45 * (Math.PI/ 180);

		// Add the camera to the scene:
		scene.add(camera);

		// Add some lights to the scene
		var directionalLight = new THREE.DirectionalLight(0xeeeeee , 1.0);
		directionalLight.position.x = 1;
		directionalLight.position.y = 0;
		directionalLight.position.z = 0;
		scene.add( directionalLight );

		var directionalLight2 = new THREE.DirectionalLight(0xeeeeee, 2.0);
		// A different way to specify the position:
		directionalLight2.position.set(-1, 0, 1);
		scene.add( directionalLight2 );

		// Add the loaded model to the scene:
		scene.add(dae);

		// Instantiate the renderer
		renderer = new THREE.WebGLRenderer();
		// ...and set its size:
		renderer.setSize(window.innerWidth, window.innerHeight);

		// Place the renderer into the HTML (inside the #container div):
		$('#container').append(renderer.domElement);
	}

	function animate() {
		// Provided by the RequestAnimationFrame.js file where the browser
		// lacks it, this schedules animate() to be called again for the next frame:
		requestAnimationFrame( animate );

		render();
	}

	function render() {
		// *** Render the scene from the camera's point of view ***
		renderer.render(scene, camera);
	}
});