Set Up Constant Speed in box2d

This brief tutorial will show you how to set up dynamic objects with a constant speed in box2d. It assumes you have a basic understanding of how to set up a box2d project with cocos2d.

As you are no doubt aware, the box2d physics engine is a wonderful tool for creating a virtual world filled with objects that interact in a way analogous to the real world. In this world, gravity, friction (including rotational friction), inertia and momentum are all simulated.

But what of the concept of cruise control? You know, you set a speed and then the target continues to try to match that speed. You might control the vector direction by some other means, but the speed is intended to remain constant. How do you do that? That’s what we’re going to take a look at now.

Conceptually, what we want to do is measure our current speed on each update cycle and then fire an impulse of the appropriate size and direction in order to nudge us up to (or down to) speed. What we specifically do not want to do is simply call SetLinearVelocity(). Why not, you may ask. The problem is that doing so essentially tells the box2d engine “Hey, ignore whatever you *think* should be happening to that body. Here’s the actual velocity.” Instead, what we want to do is tell the box2d engine, “See that body over there? I want you to add this new impulse to it and factor that in along with everything else.” This lets the box2d engine take the entire model and any ongoing interactions into account rather than dropping everything and running with the new values.
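To make the distinction concrete, here is a minimal sketch of the two approaches (assuming you already have a b2Body* called body and a desired b2Vec2 called targetVelocity; both names are just placeholders for this illustration). The impulse form is the one we will build on below.

// The blunt approach: overwrite whatever the solver thinks should be happening.
// body->SetLinearVelocity(targetVelocity);

// The cooperative approach: figure out the velocity we are missing and apply
// an impulse (mass * change in velocity) so box2d can blend it with everything else.
b2Vec2 deltaV = targetVelocity - body->GetLinearVelocity();
b2Vec2 impulse = body->GetMass() * deltaV;
body->ApplyLinearImpulse(impulse, body->GetWorldCenter()); // at the center of mass so we don't add spin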

Because I’m taking code out of my game Centripetal, I don’t have a full project with a demo set up to show you what I’m talking about. But I will pull out the pertinent bits and provide some illumination on what I’m doing.

Before we get started, remember that box2d is a physics simulator only. It does not display graphics. b2Body objects do have a UserData attribute which is a void* and which can therefore store a pointer to, for example, a CCSprite. Likewise, you can also create a CCSprite subclass which has a b2Body* member and thus the two could refer to one another. I will leave the pointer management concerns to you based on your own implementation.
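For example, if you go the UserData route, the wiring is a one-liner in each direction. Here is a rough sketch, assuming a b2Body* named body, a retained CCSprite* named sprite, and the usual PTM_RATIO points-to-meters constant from the cocos2d box2d template:

// Stash the sprite pointer on the body; box2d just holds the void* for you.
body->SetUserData(sprite);

// Later, typically after stepping the world, pull it back out and sync the sprite.
CCSprite* s = (CCSprite*)body->GetUserData();
s.position = ccp(body->GetPosition().x * PTM_RATIO, body->GetPosition().y * PTM_RATIO);
s.rotation = -1 * CC_RADIANS_TO_DEGREES(body->GetAngle());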

In my case, I have a CCNode subclass which has member pointers to both the b2World and CCSprite objects.

@interface BodyNode : CCNode
{
	...
	b2Body* body;
	CCSprite* sprite;
	...
}
@property (readonly, nonatomic) b2Body* body;
@property (readonly, nonatomic) CCSprite* sprite;
...
@end

There’s more to it, but that’s enough to get us going here. Now let’s focus on our CruiseControl object. We’re going to subclass BodyNode for this and add a little to it:

@interface CruiseControl : BodyNode
{
	...
	float speed;
	...
}
@property (nonatomic) float speed;
...
@end

We’ve got a BodyNode subclass to which we have added a speed member. Why speed? Why not a b2Vec2? We don’t want a steady direction, just a steady rate of movement. We want the box2d engine to bounce us around and change our direction, but we also want to know just how fast we should be moving so that we can nudge ourselves just enough, in the current direction, to achieve that. Let’s see how we do it:

-(id) init
{
	if ((self = [super init]))
	{
		...
		[self scheduleUpdate];
	}
	return self;
}

Okay, the first thing you’ll see is that, among other things in init, I’m scheduling an update callback. This doesn’t have to happen in init, but it’s a convenient place to do so. Note that this is a subclass of BodyNode which has a pointer to the CCSprite we will ultimately be moving around in cocos2d. The update will occur on our BodyNode subclass and not directly on the CCSprite we contain.

-(void) update:(ccTime)delta
{
	// Current velocity; its length is our current speed
	b2Vec2 curvel = body->GetLinearVelocity();
	// Only correct when we are below speed or more than the fudge factor above it
	if (curvel.Length() < self.speed || curvel.Length() > self.speed + 0.25f)
	{
		// Normalize() returns the previous length and leaves curvel as a unit direction
		float curspeed = curvel.Normalize();
		float velChange = self.speed - curspeed;
		// impulse magnitude = mass * change in speed; reuse the unit vector for direction
		float impulse = body->GetMass() * velChange;
		curvel *= impulse;
		body->ApplyLinearImpulse(curvel, body->GetPosition());
	}
}

There may be more going on inside your update method (there is in mine, in fact), but what you see here is the nuts and bolts of the cruise control concept. We first retrieve the current velocity, a vector whose length is our current speed. We check that speed against our desired speed. If it is too low or too high, we want to apply an impulse.

Note that I have a bit of a fudge factor. You are dealing with floating point numbers and the usual lack of precision that entails. You can play with your fudge factor as you like. Maybe you’re okay with being a little slower but no faster. Maybe you don’t mind a little wiggle room in either direction. You can alter that to your heart’s content.

So if we need to apply an impulse, we first normalize our current vector of movement. That gives us the direction with a length of 1.0f, which conveniently allows us to reuse the vector by multiplying it by the impulse magnitude to get what we need. We calculate the required change in speed by subtracting the current speed from our desired speed. Note that if we are moving too fast, this gives us a negative value. That matters in the next step, where we multiply by the body’s mass to get the impulse magnitude to apply to the unit vector. In the case of excessive speed, the magnitude is negative, which reverses the vector for purposes of applying the impulse. Finally, we apply the impulse to our body at its location, allowing the physics engine to nudge us enough to get us back to the correct speed.

Naturally, you can play around with this as much as you like. You can alter the scheduled update to call whichever method you prefer. You can alter the frequency of the scheduled callback too. Or if you prefer, you could conceivably eliminate the update callback on your BodyNode subclass by using the box2d processing loop to watch for your CruiseControl object and perform your check at that time. Regardless, you now have a simple method of setting up cruise control for your box2d objects.
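If you do go the route of the box2d processing loop, a rough sketch of what that might look like in your tick method follows. It assumes each body’s UserData points back at its owning node and, purely for illustration, treats any non-nil UserData as a CruiseControl; your own class design will dictate how you actually identify one.

// Somewhere in your tick, before or after world->Step(...):
for (b2Body* b = world->GetBodyList(); b != NULL; b = b->GetNext())
{
    CruiseControl* cc = (CruiseControl*)b->GetUserData(); // hypothetical wiring
    if (cc == nil) continue;
    b2Vec2 curvel = b->GetLinearVelocity();
    float curspeed = curvel.Normalize(); // Normalize() returns the old length
    if (curspeed < cc.speed || curspeed > cc.speed + 0.25f)
    {
        curvel *= b->GetMass() * (cc.speed - curspeed);
        b->ApplyLinearImpulse(curvel, b->GetPosition());
    }
}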

An additional note concerning gravity: When developing Centripetal, I set the simulator up with no gravity as I was simulating a top-down view of a frictionless surface. I didn’t need gravity. The problem you will face when adding gravity is that if the gravity is intense enough compared to your desired speed, the constant impulses pushing the object along won’t be enough to counteract the gravitational pull between steps. So your object might end up slinking around on the bottom of your simulation view rather than moving about freely. If the gravity is low enough relative to the desired speed, then the steady stream of impulses coming each step should be enough to let you fly.
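If you do want gravity in the rest of your world but not for your cruise-controlled body, one option (a sketch only, assuming you have the b2World* handy as world) is to cancel gravity on that one body each step with an equal and opposite force, leaving the impulses free to manage speed alone:

// Apply an anti-gravity force each step so the cruise-control impulses
// only maintain speed instead of also fighting the world's gravity.
b2Vec2 antiGravity = -body->GetMass() * world->GetGravity();
body->ApplyForce(antiGravity, body->GetWorldCenter());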

iPad Competition That May Stand a Chance

Techcrunch got their hands on a test version of the new Kindle and based on their report, it seems to give a glimpse of how a worthy competitor to the iPad might be fashioned.

Taken as a whole, it’s like most any other Android tablet. The form factor is an improvement and we’ll have to wait to see what the battery life is like. Let’s just say that much of the raw capability of the device will remain the same as any other Android tablet on offer. So what makes up the difference? Spit and polish plus price point.

First, consider Mac hardware in general: laptops and desktops, even displays. They are made primarily of common components that any manufacturer can get ahold of. There are no secrets here. That’s not to say there aren’t some serious hardware design chops being put to work to make that hardware hum, but in terms of the overall capabilities of the units in question, you can find similar quality from many other vendors if you’re willing to look for it. It’s when you boot it up that you see a huge difference. OS X has Apple stamped all over it. It’s a very consistent experience and one that Apple takes great pains to maintain.

Likewise, Amazon is putting their stamp on the Android tablet experience with this newest Kindle. You’ll apparently be getting their look and feel, their color scheme (by default anyway) as well as their app store (again, by default). They’ve even taken their version of Android and run with it rather than trying to keep up with the latest updates from Google. Essentially it appears they have forked their own copy of Android, tweaking it to maximize its effectiveness on their own hardware. That’s well and good, but lots of vendors do this, or at least put their own mark on it.

The difference here is going to be in execution, and while it remains to be seen how effective Amazon can really be at customizing the Android UI, they have the advantage that their device is being sold to customers with the express purpose of linking them to the Kindle reader and Kindle store. In essence, you’re buying the device specifically because you anticipate using it with Amazon’s services. So they will be more free to integrate their services into the end product without customers complaining that they can’t remove the apps. And that’s going to be one big difference. Other Android vendors have tended to go the same route as PC vendors, shoveling unwanted and unneeded applications onto the device in order to push customers toward additional purchases or as part of relationships with other vendors. Here, Amazon is the only vendor in question and the customers are buying the device because they want Amazon’s services.

The other thing that will help this be more competitive with the iPad is the price point. It’s low. It’s not HP TouchPad low, but at $299 it’s below even entry-level iPad prices. Plus, unlike previous Kindle devices, it’s intended to be a fully functional tablet, not merely an e-reader. Even if Amazon is selling at or just below cost, they are no doubt expecting to make it up with additional revenue down the road from new Kindle book sales. And this is the secret sauce for the price point. HP had no plan beyond selling the hardware. Sure, they would have loved to leverage those TouchPad sales into additional software sales down the road, but the fact is HP is not an Android developer. They don’t have anything that the typical customer links to tablet software. So the ridiculously low price HP offered their units at was unsustainable in the long run. Amazon does have that software, in addition to their own Android app store. It remains to be seen how popular their app store will be with developers and purchasing customers, but it’s definitely a plus. Gravy, really, since Amazon is going to be counting primarily on Kindle book sales, not app store revenue, to sustain those low device prices.

Microsoft Is Watching You

The Guardian is reporting that a lawsuit was filed last Wednesday claiming Microsoft is tracking users of Windows Phone 7 devices even in situations when location information was purportedly disabled. In the article, and in the ensuing discussion about the case, Apple’s name was inevitably dragged into the fray, focusing on the hubbub that was brought forth in April concerning the ‘consolidated.db’ file which stored timestamped latitude/longitude values, sometimes as far back as a year. As Josh Halliday at The Guardian puts it:

The lawsuit follows mounting concern about how technology giants, including Apple and Google, record users’ private data. Microsoft, Nokia, Apple and Google were called before the US Congress in April to explain their privacy policies after security researchers uncovered hidden location-tracking software in iPhones. Google Android phones were subsequently found to gather location data, but required users’ explicit permission.

There’s nothing inherently flawed in the quote above. Yes, there was concern about the possibility of tracking by several large companies. Yes, the aforementioned companies were called before Congress. But no further mention is made of how Apple closed things out. And I imagine things will be a bit different with Microsoft.

To begin with, Microsoft’s declaration in their letter to Congress reads similarly to Apple’s press release with regard to what each company states they collect. Essentially they both claim to only track approximate location in order to provide a better user experience. In both cases, a small portion of the entire database of known Wi-Fi and cell tower locations is sent to the phone in order to be prepared to quickly obtain a more precise GPS based location on demand. Both companies also state that they honor the disabling of location services by disallowing the dissemination of this information to apps on the device which make a location request.

The differences begin with how the outcry started in each case. For Apple, the existence of the database had long been known by those technically savvy enough to snoop around the iPhone’s internals and figure out what they were looking at. It wasn’t until Alasdair Allan and Pete Warden revealed an open source utility to fetch the database for your viewing pleasure that things were sent into damage control. Shortly thereafter, Apple issued their press release which stated, among other things:

7. When I turn off Location Services, why does my iPhone sometimes continue updating its Wi-Fi and cell tower data from Apple’s crowd-sourced database?  
It shouldn’t. This is a bug, which we plan to fix shortly (see Software Update section below).

It further added:

Software Update 
Sometime in the next few weeks Apple will release a free iOS software update that:

    • reduces the size of the crowd-sourced Wi-Fi hotspot and cell tower database cached on the iPhone,
    • ceases backing up this cache, and
    • deletes this cache entirely when Location Services is turned off.

In the next major iOS software release the cache will also be encrypted on the iPhone.

That was it for Apple. They would issue a free update that fixed the bug so the cache would no longer be updated when Location Services was disabled, reduced the amount of cached data retained, ceased backing that cache up, and deleted it entirely when Location Services was turned off. Moreover, the next major iOS release would encrypt the cache on the device itself. There were never any accusations of tomfoolery on Apple’s part.

In Microsoft’s case, the first sounding of the gong is the result of a lawsuit filed in Microsoft’s own backyard, so to speak. Not simply an indication of something a techie found that was subsequently addressed, but rather someone essentially throwing down with them. Of course, frivolous lawsuits are filed all the time, but I don’t see any advantage to be had here unless there is some truth to it. Even so, it’s an interesting distinction in terms of how the starting gun sounded.

So now we’re waiting to hear from Microsoft, to get their side of the story. Apple took 7 days to complete their response, and I imagine some of that time was spent with engineering, looking for the bug they spoke of. There was, I’m sure, time spent mulling over release dates, etc. We’re still within the same 7 day mark for Microsoft’s response, and they have at least indicated they will be responding though I figure that was a given. I wonder if they’ll admit it was a problem and indicate how they’ll be fixing it, or if they’ll take a more defensive posture. I’m guessing the latter. Regardless, I’ll be getting the popcorn and pulling up a chair. This ought to be interesting.

Virtual Goods Sales Evil?

If you play a game online these days, there is a more than even chance that you can obtain “things” in the game which can be found, earned, bought with in-game currency or traded for. Because all of these methods involve an investment of time, you, like many others, have probably wondered if perhaps you could pay someone else to just give you the “thing” in question. For many games the “thing” is in-game currency. World of Warcraft players have many opportunities to purchase gold for their characters in the game, allowing them to participate actively in the auction houses online and easily afford some of the game’s more extravagant purchases like special mounts or training.

Second Life, an online virtual world where players can create content, build up virtual real estate and freely buy and sell virtual goods with one another, actually has had a fairly robust economy as recently as 2011 Q2. League of Legends uses what is known as a freemium model. The core service is free while certain content, in the form of upgrades or reskins, is available through purchase. The list goes on; game developers have quickly latched onto the idea that they can charge customers for the flip of an electronic bit.

While there are many who think this is perfectly acceptable and even encourage the development of this model, there is a growing number of people who reject the idea of paying for virtual goods. They seem to accept that some games will require a subscription fee for ongoing access to the game content itself, but believe that it is unreasonable or illogical to purchase in-game content with real money. Typically it comes down to possession of real property. If I buy a book, I have physical possession of that book. I can read it, shelve it, burn it, throw it at a burglar or build a shrine to it. Whatever I choose to do with it, it doesn’t matter whether the company that sold me the book, or the book’s author for that matter, is still around. Nothing prevents me from continuing to possess the book.

Things change when you buy a virtual item. Since World of Warcraft came onto the scene, it has seemed to be an unstoppable juggernaut in the MMORPG space, chalking up record-breaking subscription numbers year after year. It has finally seen a decline in those numbers, which may or may not be indicative of the eventual end of the iconic franchise’s tenure at the top of the food chain. Regardless, it seems to be commonly held wisdom that it too will eventually succumb to something that comes after, even if that something is another Blizzard MMO. And when the servers for WoW are finally shut down, that epic Sword of Truthiness will no longer be available. And therein lies the argument of the naysayers opposing virtual goods purchases. You aren’t buying anything tangible. The company might close its doors with little to no notice, and that shiny electronic loot you had is now gone, along with any characters you might have built up.

The problem with this thinking is being locked into the idea that virtual goods are like their real world counterparts. They are quite different and should be treated as such. This doesn’t mean that virtual goods have no value, just that we can’t expect to be able to treat them like we would something we can go down and buy at the furniture store. If we start with the mindset that in fact the two are different, with virtual goods having the greater risk of possibly disappearing in spite of our best efforts, then we begin to frame the discussion more reasonably.

Even so, the thinking goes, why pay real, permanent money for something that is here today and gone tomorrow? To which I would respond by asking, “Did you enjoy your last movie?” Where is that movie now? Did you frame it and put it up in your living room? Did it drive you to the grocery store yesterday? Are you wearing it? Of course not. You paid for an experience. You paid to see the result of someone else’s work. How is this different from purchasing some item inside an online game? An artist created the model. A developer wrote the code. You are buying the experience. It is entertainment.

For now, at least. Perhaps some day we will conduct business in a virtual world like Second Life. Hold meetings, perform negotiations, interview prospective employees. When that happens, we may want to be able to build a virtual corporate office with virtual furniture. And we will likely pay for the privilege. In such a case, we still want that experience. We are decorating our world, virtual or otherwise, with the works of others.

That is what you are paying for, and that is really no different than buying a book. The book, the physical pages, are simply a medium. It presents perhaps greater value because it is a flexible medium that won’t just disappear, though of course it is not without its disadvantages, like requiring physical space and having mass, which becomes a problem when you are moving. But what you are really buying in that case is the story or the information contained within the book. You could have gotten it as an e-book, or perhaps on CD. Same story. Same content. Different medium.

But what about account theft? It is true that in some cases, perhaps many depending on the virtual world in question, account theft is the source of the virtual goods sold. Someone’s account is hacked, their currency transferred to a temporary account, which then transfers the gold to one or more other handling accounts and is then itself deleted. Quickly a buyer is found and pays for the in-game currency; the real-world money changes hands, the bandits send along the stolen currency and move on to the next “customer”. It is possible for such transactions to be rolled back, but the real-world currency exchange is already completed, often with little or no recourse for the person who paid. This is, however, symptomatic of other problems. There will always be those who try to game the system, real world or otherwise, taking advantage of their ability to lure unsuspecting victims into giving up their money. In many cases the buyers didn’t even realize that what they were doing was wrong or that they were dealing with criminals.

The problem is not that virtual goods transactions are responsible for these thefts. It is that these thefts are used as part of virtual goods transactions. As these virtual worlds move toward more secure operating models, or even handle the transactions themselves rather than leaving it up to some sort of black or gray market, the chances of theft will be drastically reduced. When the operators of these virtual worlds take an active hand in running this market, they can also avoid the inflationary effects which ofttimes occur as a result of currency purchasing.

All in all, there isn’t really anything inherently wrong with purchasing or selling virtual goods for real-world money. The philosophical opposition to the practice is rooted in mistaking physical and virtual attributes for one another, and in how things were done early on in these burgeoning marketplaces. While there are adults who have never lived a single day when the Internet wasn’t around and available to operate within, we as a society are still grappling with the changes that our online activities are creating in our lives.

Windows Phone 7 Will Not Steal (Much) From iOS

Techcrunch is reporting that Gartner and IDC predict a 20 percent market share for Windows Phone 7 by 2015. Said to be a conservative estimate, it appears to be based on several factors:

  • HP dropping webOS, leaving potential developers to jump ship to another mobile platform
  • Microsoft pushing new product in Europe
  • Microsoft marketing to women and youth

Let’s take a look at these. HP dumped webOS because of a change in direction by their new CEO. I won’t go into why he chose to do this, but yes, webOS is dead, or at least its twitching body is soon to be laid to rest, barring a last-minute rescue at any rate. But how popular was webOS really? Wonderful as it may be, webOS never gained much traction in terms of actual rubber-to-road users. And like it or not, no matter how good your platform is, developers go where the users are. Where are the users? Not on webOS. So how much of a bump would Microsoft get if every single webOS developer suddenly migrated to their platform? Not much. And that bump would be made smaller by the fact that developers on marginal platforms tend to cross-develop for multiple platforms. In other words, webOS developers are probably already developing for other platforms, including Windows Phone 7. While market share is a zero-sum game, developer share is another thing entirely. So I wouldn’t expect many more apps to be added to the WP7 platform by webOS developers, because many of them may already be there.

What about Microsoft’s sales efforts in Europe? Currently Windows Phone 7 barely registers a blip on the radar. They have their work cut out for them, and releasing “hundreds” of salesmen into the market to try to “better demonstrate the product” might not have the anticipated effect. Certainly it can’t hurt to have more professionals out pushing your product, but in reality you are far more likely to buy into a platform for one of two reasons: you are used to it already and are simply jumping to new hardware, or someone you already know and trust shows you how the new platform is better. Which is to say, word of mouth. And right now, word of mouth is working against Microsoft, not for it.

What about targeting women and young/first-time buyers? I can’t speak to how or why WP7 might appeal to women in particular, though the claim is made. Still, saying it is particularly appealing to women speaks to relative appeal between genders. It doesn’t mean women necessarily desire WP7 phones more than other platforms’ devices, just that women are more likely to want a WP7 than men are. Again, that’s assuming the assertion of Achim Berg, head of Windows Phone marketing, is true. As for first-time buyers, well, you can’t buy cool.

Am I saying iOS is unassailable? Absolutely not. Look at Android. It has its problems, certainly, but Android has grabbed its own slice of the pie by differentiating itself from iOS. App availability is there, although the app store experience is, in my opinion, of lower quality. But it is theoretically far more tweakable than any iOS device. What exactly will WP7 bring to the table that truly marks it as different enough from iOS to warrant grabbing market share there? Because that is what they are going to have to do no matter who they want market share from. They need to be different enough, and in a good way, if they want to be picked from a lineup that includes some of the most popular phones currently produced. Even carrier availability is disappearing as a viable means of differentiation as iOS devices begin to appear on more networks worldwide.

No, Windows Phone 7 is not going to steal much, if any, market share from iOS, certainly not based on the information available today from Mr. Berg and the analysts down the hall.

cocos2d: Using Tilt and Calibration/Bias

If you have created a tilt-based game using the iOS accelerometer, one of the complaints you might have heard was about having to hold the device flat, face up, in order to keep the tilt centered. You hear people explain they would like to be able to at least hold it at a slight angle and still have it be playable.

Intuitively it seems obvious that there should be a way to allow for this, and as it turns out there is. What you need to do is calibrate the device to account for the level of tilt the player is comfortable with.

First, if you have used the accelerometer (whether in cocos2D or elsewhere) you know that you provide a hook for the following method:

-(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration;

Then, when the accelerometer sends you a message, you will receive a UIAcceleration object which has four properties (the descriptions below assume the device is lying flat on its back):

x – acceleration along the x-axis or side to side
y – acceleration along the y-axis or top to bottom
z – acceleration along the z-axis or up and down
timestamp – a precise time of when the acceleration event occurred

The wording might suggest that the acceleration implies movement, but instead think of it as how much tug there is on an invisible rubber ball at the center of the phone, based on gravity as well as movement. In that way, as you tilt your phone, the x, y and z values will change depending on which direction the ball would try to roll.

For tilt alone, the values range from -1 to 1; each axis reads the component of gravity along it (sudden movement can push the readings beyond that, but for tilt purposes this is the range that matters). So if the phone is tilted so that it is standing vertically in portrait, the y value would be -1, and x and z would be 0, or neutral. Now tilt it to the left a bit and x starts going negative, with y rising toward 0 since you aren’t tilting it straight down anymore. Eventually, once the phone has been rotated all the way over to horizontal (landscape, still standing upright), x is -1 and now y and z are 0.

Well, we want to recalibrate the tilt. Imagine that you’ve tilted your phone down a little so that y is at -0.5. Suppose that’s the position your player wants to play in, so that at that tilt their little avatar stands stock still, center screen. What we want is for the tilting to be remapped. Tilting back up to lying flat on its back should now read as roughly +0.5, and reaching negative tilt values should require tilting further down than the new neutral position our player picked.

You probably see where we’re going with this. We’re going to let our player tilt their phone and then tell us when it’s at the new neutral position. We’re then going to record the current amount of tilt (i.e. acceleration) and from then on subtract that tilt from acceleration values when calculating new positions.
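In its simplest form, the remap is just a subtraction inside the accelerometer callback. A bare-bones sketch, assuming biasX and biasY hold the recorded neutral tilt (the full, reusable version follows below):

// biasX/biasY were captured when the player said "this is my neutral position"
float adjustedX = acceleration.x - biasX;
float adjustedY = acceleration.y - biasY;
// feed adjustedX/adjustedY into whatever velocity or position math you already use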

One more thing, though: we need to provide a calibration screen for our player. But once we have recorded the bias, it seems a bit silly to have to duplicate a bunch of code in order to build a new scene that utilizes the tilt. What I’m going to show you are two (well, technically three) classes which you can customize for your own use and which might prove useful in putting a calibration screen in your own tilt-based game.

Disclaimer: One of the classes, AcceleratableLayer, is based upon the GameScene class in the DoodleDrop example created by Steffen Itterheim and published in his book “Learn iPhone and iPad Cocos2D Game Development”. Just as he allowed his code to be built upon with no strings attached, so do I for the purposes of this tutorial.

To begin with, let’s look at the interface declaration for AcceleratableLayer:

@interface AcceleratableLayer : CCLayer {
    float biasX;
    float biasY;
    float lastAccelX;
    float lastAccelY;
    CGPoint playerVelocity;
    BOOL adjustForBias;
    float pctSlow;
}
@property (nonatomic) float biasX;
@property (nonatomic) float biasY;
@property (nonatomic) BOOL adjustForBias;
@property (nonatomic) float pctSlow;
-(CGPoint) adjustPositionByVelocity:(CGPoint)oldpos;
-(CGRect) allowableMovementArea;
+(float) biasX;
+(float) biasY;
+(void) setBiasX:(float)x;
+(void) setBiasY:(float)y;
@end

The biasX and biasY properties are what you expect. We could also do a Z bias easily enough but for cocos2D, we’re only doing the first two D’s 😉 These properties can be used to retrieve or set the bias.

The adjustForBias property is used to determine whether we want to turn off our tilt adjustment without actually zeroing out our stored bias.

The adjustPositionByVelocity function tells us the new position based on a combination of the old position, the amount of tilt, the recorded speed and bias adjustments.

The allowableMovementArea function will be used to fence in our movement.

The static bias methods are used to actually store and retrieve the bias values into and out of the NSUserDefaults.

Now let’s take a look at the AcceleratableLayer implementation:

#import "AcceleratableLayer.h"
// You can alter this to prevent someone from calibrating for too much tilt
#define MAX_ACCEL_BIAS (0.5f)
#pragma mark AcceleratableLayer
@implementation AcceleratableLayer
@synthesize biasX, biasY, adjustForBias;
static NSString* NSD_BIASX = @"biasX";
static NSString* NSD_BIASY = @"biasY";
+(float) biasX
{
    return [[NSUserDefaults standardUserDefaults] floatForKey:NSD_BIASX];
}
+(float) biasY
{
    return [[NSUserDefaults standardUserDefaults] floatForKey:NSD_BIASY];
}
+(void) setBiasX:(float)x
{
    [[NSUserDefaults standardUserDefaults] setFloat:x forKey:NSD_BIASX];
    [[NSUserDefaults standardUserDefaults] synchronize];
}
+(void) setBiasY:(float)y
{
    [[NSUserDefaults standardUserDefaults] setFloat:y forKey:NSD_BIASY];
    [[NSUserDefaults standardUserDefaults] synchronize];
}
-(id) init
{
    if ((self = [super init]))
    {
        biasX = [AcceleratableLayer biasX];
        biasY = [AcceleratableLayer biasY];
        self.adjustForBias = YES;
    }
    return self;
}
// We will require a subclass
-(CGRect) allowableMovementArea
{
    [[NSException exceptionWithName:@"MethodNotOverridden" reason:@"Must override this method" userInfo:nil] raise];
    return CGRectZero;
}
-(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // used for calibration
    lastAccelX = acceleration.x;
    lastAccelY = acceleration.y;
    lastAccelX = fmaxf(fminf(lastAccelX,MAX_ACCEL_BIAS),-MAX_ACCEL_BIAS);
    lastAccelY = fmaxf(fminf(lastAccelY,MAX_ACCEL_BIAS),-MAX_ACCEL_BIAS);
	// These three values control how the player is moved. I call such values "design parameters" as they
	// need to be tweaked a lot and are critical for the game to "feel right".
	// Sometimes, like in the case with deceleration and sensitivity, such values can affect one another.
	// For example if you increase deceleration, the velocity will reach maxSpeed faster while the effect
	// of sensitivity is reduced.
	// this controls how quickly the velocity decelerates (lower = quicker to change direction)
	float deceleration = 0.4f;
	// this determines how sensitive the accelerometer reacts (higher = more sensitive)
	float sensitivity = 6.0f;
	// how fast the velocity can be at most
	float maxVelocity = 10.0f;
	// adjust velocity based on current accelerometer acceleration (adjusting for bias)
    if (adjustForBias)
    {
        playerVelocity.x = playerVelocity.x * deceleration + (acceleration.x-biasX) * sensitivity;
        playerVelocity.y = playerVelocity.y * deceleration + (acceleration.y-biasY) * sensitivity;
    }
    else
    {
        playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity;
        playerVelocity.y = playerVelocity.y * deceleration + acceleration.y * sensitivity;
    }
    // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values)
    playerVelocity.x = fmaxf(fminf(playerVelocity.x,maxVelocity),-maxVelocity);
    playerVelocity.y = fmaxf(fminf(playerVelocity.y,maxVelocity),-maxVelocity);
}
-(CGPoint) adjustPositionByVelocity:(CGPoint)oldpos
{
    CGPoint pos = oldpos;
	pos.x += playerVelocity.x;
    pos.y += playerVelocity.y;
	// Alternatively you could collapse the above three lines into a single call. I find the above more readable however.
	// pos = CGPointMake(oldpos.x + playerVelocity.x, oldpos.y + playerVelocity.y);
	// Note that the seemingly obvious shortcut of assigning directly into a sprite's position property
	// won't work in Objective-C! It'll give you the following error:
	// ERROR: lvalue required as left operand of assignment
	// player.position.x += playerVelocity.x;
	// The Player should also be stopped from going outside the allowed area
    CGRect allowedRect = [self allowableMovementArea];
	// the left/right border check is performed against half the player image's size so that the sides of the actual
	// sprite are blocked from going outside the screen because the player sprite's position is at the center of the image
	if (pos.x < allowedRect.origin.x)
	{
		pos.x = allowedRect.origin.x;
		// also set velocity to zero because the player is still accelerating towards the border
		playerVelocity.x = 0;
	}
	else if (pos.x > (allowedRect.origin.x + allowedRect.size.width))
	{
		pos.x = allowedRect.origin.x + allowedRect.size.width;
		// also set velocity to zero because the player is still accelerating towards the border
		playerVelocity.x = 0;
	}
    if (pos.y < allowedRect.origin.y)
    {
        pos.y = allowedRect.origin.y;
        playerVelocity.y = 0;
    }
    else if (pos.y > (allowedRect.origin.y + allowedRect.size.height))
    {
        pos.y = allowedRect.origin.y + allowedRect.size.height;
        playerVelocity.y = 0;
    }
    return pos;
}
@end

That’s a bit of meat with those potatoes. Let’s chop it up a bit.

First, I want to point out the MAX_ACCEL_BIAS macro which is set to 0.5f. What this does for us is, as the name implies, lock the bias to a max of 0.5f in either direction. Consider for a moment what would happen if your user tilts their phone almost vertically. The y value will be near -1. Now you calibrate. They can’t tilt their phone any further down in order to move the avatar downward on the screen. Meanwhile, tilting forward will result in moving upward with rocket-like speed. Even letting the user go this far results in a little lagginess going downward. It becomes a matter of taste as to how much bias you want to let them introduce.

Next we have the static methods to set and retrieve bias via the NSUserDefaults system. If you feel like using a different method of stashing your player’s bias settings, feel free to replace them here. The usage is pretty straightforward.

Next we have the -(id)init method where, aside from the usual, we grab any stored bias settings, stash them in our local members, and default adjustForBias to YES.

The next method is -(CGRect)allowableMovementArea and you will immediately notice it does nothing of any use whatsoever. In fact, it raises an NSException right off the bat and returns CGRectZero just for good measure. What are we doing here? Unlike some other languages, Objective-C doesn’t have abstract classes or methods, so there is no way to force a subclass to provide an implementation at compile time; you simply have to agree to do so. Just to be a little forceful about it, if you do use an AcceleratableLayer object directly, it’s going to blow up on you mighty quickly. This means that in order to make use of AcceleratableLayer, you must subclass it and specifically override this method with valid functionality.

So what is this method supposed to do? It is supposed to return a CGRect that describes the boundary outside of which the controlled object is not allowed to travel. This allows the later code to know when to stop increasing velocity and altering movement because you have reached a border location. There are some assumptions built in here. We use a CGRect, so with this code as is, you can’t lock the movable object into another shaped area like a triangle or circle. Additionally, the code that adjusts movement and velocity does not take into account the dimensions of the CCSprite rectangle which represents the object being moved, so your CGRect should represent the area which the center of the moved object is bound within. That is, your CCSprite will likely overlap the edge of the returned CGRect so make sure it is small enough that the sprite is not clipped in a way you don’t wish it to be. Once again, it must be stressed that you must subclass AcceleratableLayer and override this method with your own code to return your own CGRect.

This brings us to the -(void)accelerometer:(UIAccelerometer*)accelerometer didAccelerate:(UIAcceleration*)acceleration method. This is mostly identical to the original code Steffen Itterheim provided, with a few alterations to adjust for bias. Adding code to accommodate the z value would be just as easy as it looks. Note the use of the playerVelocity CGPoint value. AcceleratableLayer will maintain a constant update of the new velocity based on accelerometer callbacks without needing any further prompting.

Finally we reach -(CGPoint)adjustPositionByVelocity:(CGPoint)oldpos. This method gets called by subclasses in order to retrieve an updated position based on a combination of the previous position and the playerVelocity value being tracked. In my case, I perform this in the update:(ccTime) method which I schedule, but of course you can update this however you wish.

Okay, so that provides some core functionality, but how do we use it to actually do calibration? Glad you asked! First off, we’re going to introduce a new CCScene subclass called CalibrationScene. For those keeping score at home, this is the second class I’m going to mention to you. Ready for the interface declaration?

@interface CalibrationScene : CCScene
{
}
+(id) scene;
@end

Exciting stuff, eh? The +(id)scene method, as you expect, creates a new CalibrationScene to be pushed onto the CCDirector. That’s it.

Okay, so let’s take a look at the implementation file for this guy:

#import "CalibrationScene.h"
#import "AcceleratableLayer.h"
#pragma mark CalibrationLayer
@interface CalibrationLayer : AcceleratableLayer {
    CCSprite* testsubject;
    CGPoint cenpt;
}
@end
@implementation CalibrationLayer
- (id) init
{
    if ((self = [super init]))
    {
        // We need the accelerometer
        self.isAccelerometerEnabled = YES;
        // You can create whatever CCSprite you want
        // For the purposes of this demo, I'm creating a simple red square on the fly
        GLubyte pixels[900][4]; // 30x30 square, RGBA
		int i;
		for(i = 0; i < 900; i++) {
			pixels[i][0] = 0xFF; /* Red channel */
			pixels[i][1] = 0x00; /* Green channel */
			pixels[i][2] = 0x00; /* Blue channel */
			pixels[i][3] = 0xFF; /* Alpha channel */
		}
		CCTexture2D* myTexture = [[CCTexture2D alloc] initWithData: (void*) pixels
                                          pixelFormat: kTexture2DPixelFormat_RGBA8888
                                           pixelsWide: 30
                                           pixelsHigh: 30
                                          contentSize: CGSizeMake(30,30)];
        testsubject = [CCSprite spriteWithTexture:myTexture];
        [myTexture release]; // the sprite retains its texture, so balance the alloc above
        // Let's start our test subject out in the center of the screen
        CGSize winSize = [[CCDirector sharedDirector] winSize];
        cenpt = ccp(winSize.width*0.5f,winSize.height*0.5f);
        testsubject.position = cenpt;
        [self addChild:testsubject];
        // And add a simple menu so we know whether we are Done, want to Calibrate to the current tilt, or Zero things
        // back out to normal
        CCMenuItemLabel* closeMenu = [CCMenuItemLabel
                                      itemWithLabel:[CCLabelTTF labelWithString:@"Done" fontName:@"Helvetica" fontSize:12.0f]
                                      target:self
                                      selector:@selector(closeScene)];
        CCMenuItemLabel* calibrateMenu = [CCMenuItemLabel
                                          itemWithLabel:[CCLabelTTF labelWithString:@"Calibrate" fontName:@"Helvetica" fontSize:12.0f]
                                          target:self
                                          selector:@selector(calibrate)];
        CCMenuItemLabel* zeroMenu = [CCMenuItemLabel
                                          itemWithLabel:[CCLabelTTF labelWithString:@"Zero" fontName:@"Helvetica" fontSize:12.0f]
                                          target:self
                                          selector:@selector(zero)];
        CCMenu* menu = [CCMenu menuWithItems:closeMenu, calibrateMenu, zeroMenu, nil];
        [menu alignItemsHorizontallyWithPadding:50];
        menu.position = ccp(winSize.width*0.5f, 25 /* arbitrary */);
        [self addChild:menu];
        // And finally, schedule an update callback
        [self scheduleUpdate];
    }
    return self;
}
// Remember, ANY class inheriting from AcceleratableLayer is going to have to
// override this method, which defines the allowable portion of the screen that
// a target is allowed to move into
-(CGRect) allowableMovementArea
{
	CGSize screenSize = [[CCDirector sharedDirector] winSize];
	float imageWidthHalved = [testsubject contentSize].width * 0.5f;
	float leftBorderLimit = imageWidthHalved;
	float rightBorderLimit = screenSize.width - imageWidthHalved;
    float imageHeightHalved = [testsubject contentSize].height * 0.5f;
    float topBorderLimit = screenSize.height - imageHeightHalved;
    float bottomBorderLimit = imageHeightHalved;
    return CGRectMake(leftBorderLimit, bottomBorderLimit, rightBorderLimit-leftBorderLimit, topBorderLimit-bottomBorderLimit);
}
// We just use the AcceleratableLayer method -(void)adjustPositionByVelocity: to
// set our test subject's new position
-(void) update:(ccTime)delta
{
    testsubject.position = [self adjustPositionByVelocity:testsubject.position];
}
// Only really useful if we're not the only scene in the app, which normally we won't be.
-(void) closeScene
{
    CCLOG(@"close the calibration scene");
    // Uncomment the following line ONLY if this scene is not the only remaining scene
    //[[CCDirector sharedDirector] popScene];
}
// Calibration is fairly straightforward... we adjust the bias based on how much
// we are currently tilted. Here we are saving it to NSUserDefaults via the
// AcceleratableLayer methods. We also push the test subject back to the center
// of the screen for further refinement if needed
-(void) calibrate
{
    float x = lastAccelX;
    float y = lastAccelY;
    self.biasX = x;
    self.biasY = y;
    // reposition test item to center
    testsubject.position = cenpt;
    // now save the prefs, which also sets the bias in the inputlayer if it needs it
    [AcceleratableLayer setBiasX:x];
    [AcceleratableLayer setBiasY:y];
}
// Zeroing means forcing the bias back to zero.
-(void) zero
{
    float x = 0;
    float y = 0;
    self.biasX = x;
    self.biasY = y;
    // reposition test item to center
    testsubject.position = cenpt;
    // now save the prefs, which also sets the bias in the inputlayer if it needs it
    [AcceleratableLayer setBiasX:x];
    [AcceleratableLayer setBiasY:y];
}
@end
#pragma mark CalibrationScene
@implementation CalibrationScene
+(id) scene
{
    return [[[self alloc] init] autorelease];
}
- (id)init
{
    self = [super init];
    if (self) {
        // Initialization code here.
        [self addChild:[CalibrationLayer node]];
    }
    return self;
}
- (void)dealloc
{
    [super dealloc];
}
@end

Whoa! That’s a lot more than what you would expect from that tiny little interface right? Well, that’s because there’s actually an extra class defined and implemented in there, CalibrationLayer. That would be the third class I mentioned.

But let’s start by pointing out that right there at the bottom of the implementation is the full implementation of CalibrationScene. And all it does is create and add as a child one CalibrationLayer. So in reality, the meat here is all in CalibrationLayer. Let’s dig in!

To start with, CalibrationLayer has two members: a CCSprite we lovingly call testsubject, and a CGPoint called cenpt. testsubject is the sprite that we will move around via tilt. cenpt is just a stashed copy of the center of the screen. No surprises here.

Opening up -(id)init, we start off by enabling the accelerometer. Note that we actually didn’t do that in the AcceleratableLayer init method. We probably could have, but it’s fine either way. Just remember to enable it somewhere.

The next bit of code may look odd and it is the result of my wanting to not have to include an image in this tutorial or the project which will be available for download. I’m going to create a 30×30 red square as a sprite. I’ll repeat the relevant code below:

        // You can create whatever CCSprite you want
        // For the purposes of this demo, I'm creating a simple red square on the fly
        GLubyte pixels[900][4]; // 30x30 square, RGBA
		int i;
		for(i = 0; i < 900; i++) {
			pixels[i][0] = 0xFF; /* Red channel */
			pixels[i][1] = 0x00; /* Green channel */
			pixels[i][2] = 0x00; /* Blue channel */
			pixels[i][3] = 0xFF; /* Alpha channel */
		}
		CCTexture2D* myTexture = [[CCTexture2D alloc] initWithData: (void*) pixels
                                          pixelFormat: kTexture2DPixelFormat_RGBA8888
                                           pixelsWide: 30
                                           pixelsHigh: 30
                                          contentSize: CGSizeMake(30,30)];
        testsubject = [CCSprite spriteWithTexture:myTexture];
        [myTexture release]; // the sprite retains its texture, so balance the alloc above

This isn’t really the most pertinent code for this tutorial but I did want to make mention of it. Essentially, we construct the bytes to represent a red 30×30 bitmap with alpha channel. Then we pass those bytes into a texture object. Finally we pass that texture object in to create a new sprite on the fly.

Moving on, we calculate the center point of the screen, stash the value and put the testsubject there by setting its position attribute. We also add the testsubject to the layer.

Next we set up a menu at the bottom of the screen to allow us to either say we are done calibrating, we want to accept the current calibration and store it, or we want to zero out the calibration and start over from scratch.

Finally we call scheduleUpdate so that our update method gets called each frame.

Next up is the implementation of our -(CGRect)allowableMovementArea method. Remember how I said that subclassing was required, as was overriding this method? Well, here’s a sample implementation meant to lock you to... anywhere on the screen. But note how I’m taking into account the size of the tracked sprite and using that to help define the CGRect.

The update method is pretty straightforward. We set the testsubject.position attribute to the result of calling adjustPositionByVelocity with the original testsubject.position. This calculates the new position based on how we’ve been tilting the device up until now.

The closeScene method is not terribly interesting. It gets invoked if the Done menu item is tapped. For now it doesn’t do anything because popping the only scene tends to provide for a dull experience. You could uncomment that line if it’s been pushed on top of another scene though and you would get the expected result.

The calibrate method is run when the user taps the Calibrate menu item. It grabs the previous acceleration (i.e. tilt) for x and y and pushes those values into the local bias members, sends the testsubject back to the center of the screen for possible further calibration, and then sets the user defaults with the new bias as well.

The zero method is identical to the calibrate method, but gets called when the Zero menu item is tapped and is hardcoded to store 0 for the bias values. This represents restoring the bias to the neutral state.

And that is it. Drop in the interface and implementation files for these classes and you could have your very own calibration scene along with a subclassable layer class that will provide this acceleration logic for you. Additionally by doing it this way, you guarantee that any tweaks you make in AcceleratableLayer to adjust your acceleration and deceleration curves, max velocities, max biases and other items will automatically be applied to any layer which subclasses it, making it easy to perform these adjustments in one place.

Have fun programming!

Also, if you’re interested, you can download the full project here: CalibrationDemo

Re: Terminal Here Plugin

I still receive traffic looking for some of my older projects including the Terminal Here Plugin. For those who don’t know, Terminal Here Plugin was a contextual menu plugin for the Finder which allowed you to control/right-click a folder in Finder and use that to open a Terminal window in that location.

As of Snow Leopard, OS X 10.6, third party contextual menu plugins were essentially disabled for the Finder. As a result, even my poor little Terminal Here Plugin was left broken and wasted. But not to worry! If you have upgraded to Lion, it turns out that Apple has baked this feature in as a service.

Go to System Preferences and open the Keyboard preferences. Choose the Keyboard Shortcuts tab and on the left choose Services. On the right, scroll down a bit to the Files and Folders section and look for New Terminal at Folder. Turn this option on. Optionally you may choose to turn on New Terminal Tab at Folder. You can also double-click in the white space to the right of either option and choose a keyboard shortcut which will also perform the service. Once your options are set and items are checked, go to a Finder window and right-click on a folder. Near the bottom you should see a Services menu item, and from there you should see either New Terminal at Folder or New Terminal Tab at Folder.

All done!

Centripetal: Free for a Day!

PNG is making Centripetal free for a day!

cocos2d Health Bar

Would you like to create a cool health meter complete with a grilled effect and a background gradient? Something that looks a little like this:

Sample colored grilled health meter

I’m going to show you how to set this up in your own game. I’m assuming you are using cocos2d as your graphics framework. First, design your grill. This is going to be an image with the same background color as where you intend to put your health meter, but with transparent chunks removed to create the grill effect. Mine looks like this:

Note that it is 200 pixels wide by 40 pixels tall. More importantly because it uses transparency, I have to use an image format that explicitly supports transparency. In my case I used the PNG format, though GIF would also have worked. JPG, since it does not support transparency, would not work in this case.

Next, add the following member to the layer or scene to which you are adding your health meter:

    CCLayerColor* healthHiderLayer;

You’re going to need that later. Next, in your setup code for the same class (typically in your init method), add the following:

        float healthWidth = 200;
        float healthHeight = 39;
        CCLayerGradient* baseHealthLayer = [CCLayerGradient layerWithColor:ccc4(255, 0, 0, 255)
               fadingTo:ccc4(0, 255, 0, 255) alongVector:ccp(1.0f,0.0f)];
        baseHealthLayer.contentSize = CGSizeMake(healthWidth, healthHeight);
        [self addChild:baseHealthLayer];
        healthHiderLayer = [CCLayerColor layerWithColor:ccc4(0, 0, 0, 255) width:healthWidth
               height:healthHeight];
        healthHiderLayer.position = CGPointZero;
        healthHiderLayer.anchorPoint = ccp(1.0f,0.0f);
        [baseHealthLayer addChild:healthHiderLayer z:3];
        CCSprite* healthGrill = [CCSprite spriteWithFile:@"healthGrill.png"];
        healthGrill.position = CGPointZero;
        healthGrill.anchorPoint = CGPointZero;
        [baseHealthLayer addChild:healthGrill z:4];

Let’s break that down. I create two local variables, healthWidth and healthHeight, which are set to the width and one less than the height of the health grill image. I then create a gradient layer with those dimensions. I then add baseHealthLayer to whatever CCNode that I’m placing the health meter into. The rest of the health meter components are children of baseHealthLayer.

I then create a solid black layer on the fly, the same size as the gradient layer, which will be used to conceal it. Note that we anchor the healthHiderLayer at its right end. That will be important later.

Then, we grab our grill and put it into place. Note also that we set up our z-order here with the grill on top, the concealing layer below it and the gradient on bottom.

The next bit of code should be a method defined in the object we put the health meter into:

-(void) setHealthPercent:(float)pct
{
    // pct is the remaining health as a fraction from 0.0 to 1.0
    healthHiderLayer.scaleX = 1.0f - pct;
}

This gives you the ability to set your health to a specific percentage (expressed as a fraction from 0.0 to 1.0). It in turn alters the scaleX property of the healthHiderLayer. Because we anchored the hider at its right end, it shrinks and grows from the right end of the health bar. As the health percentage decreases, the hider’s scale increases and it covers more and more of the gradient. The grill is just there as additional eye candy but can be made to look however you want the individual health bars to look.
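Then, whenever the health value changes in your game, just pass in the current fraction. A quick usage sketch (currentHP and maxHP are hypothetical stand-ins for whatever your game actually tracks):

float currentHP = 25.0f; // hypothetical values pulled from your own game state
float maxHP = 100.0f;
[self setHealthPercent:currentHP / maxHP]; // shows 25% of the gradient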

Malware .. Plague of the Internet

So you think you may have become infected with malware (that is, a virus, a trojan, a keylogger, a rootkit or any other number of bits of malevolent software). First off, realize that many types of malware can be cleanly removed. The counterpoint to that is that not only are other types extremely hard to get rid of, they can even confound anti-malware kits you might have installed or are considering installing to clean things up. Sometimes the safest approach might even be to physically remove your infected hard drive and connect it as a passive drive on another clean machine with cleanup tools which can then work with the infected drive without actually fighting against the malware installed on it.

Here are some sites with some apps that can help:

http://www.gmer.net – this site contains three tools:

  • GMER itself does very thorough scans and can attempt to clean some types of malware (they recommend that you rename the executable to something random before running it, so that malware on your system cannot recognize it and halt its execution)
  • catchme, which tries to detect whether you have a rootkit running
  • mbr, which tries to detect whether you have a MBR (Master Boot Record) infection, one of the thorniest types of malware to clean because you can’t (normally) clean it while booted up from the infected drive

http://support.kaspersky.com/viruses/solutions?qid=208280684 – this site links to Kaspersky’s TDSSKiller application which can disinfect certain rootkits

http://www.bleepingcomputer.com/download/anti-virus/combofix – this site links to ComboFix, an application that is updated regularly to find and eliminate a variety of malware infections. The warnings indicate you should only run it when you are told to do so by the helpers at bleepingcomputer.com so take it with a grain of salt

http://www.malwarebytes.org/ – this site links to MalwareBytes’ Anti-Malware (aka MBAM) which with the free version can do after-the-infection cleanup in some cases, but they also have a paid version ($25/yr) which tries to actively prevent infections.

http://download.bleepingcomputer.com/grinler/unhide.exe – this application is used to unhide your start menu and folders after certain applications hide them in an attempt to make you think your machine was damaged and that the malware can fix it if you provide a credit card number.

This list is by no means exhaustive. There are other tools available as well. More importantly these tools are explicitly NOT antivirus tools along the lines of Symantec, Security Essentials, Sophos, Avast and others. With the exception of MBAM, they don’t have a resident mode to monitor your machine to try to prevent outbreaks or instantly clean up infections in real time. They are mostly intended to clean things up when requested. And nothing replaces contacting an actual computer support technician to have a look. Additionally these tools are typically updated frequently to respond to the most recent outbreaks. This means you shouldn’t just download a copy and expect it to be equally effective six months down the road.