Katy ISD Air Quality Alert

Ordinarily, I reserve my blog for tech-related content, but I just got this from KISD and wanted to share it for those who may have kids in the district but don’t receive these messages.

Updated @ 12pm:

This morning, the concentration of smoke from wildfires to our north has significantly increased and is not expected to subside during the day. Therefore, the district is taking the following steps to maintain the health and wellbeing of students and staff:

  • There will be no outdoor PE or recess activities for the remainder of the day.
  • There will be no outdoor athletic or fine arts practices. Athletic and fine arts after-school practices will be held indoors if possible. Students should check with their coaches/sponsors for exact details as to the status/location of their practice.
  • Campuses will work to load students on buses quickly to minimize time outdoors.
District officials will continue to monitor the situation closely for possible impact on operations for Friday. By noon Friday, district officials will make an announcement as to what impact, if any, the air quality may have on Friday night athletic events.

Please continue to monitor the Katy ISD website for more information about this situation as it becomes available.


The Katy ISD Office of Emergency Management has been monitoring the situation regarding the wildfires north of our area. These fires, located in the Montgomery, Grimes and Waller County area, continue to burn and produce smoke that continues to cover the Katy area.

This morning, the concentration of smoke in our area has significantly increased. Therefore, the district is limiting outdoor activity for all students until noon. In addition, we are closing the outside air intakes on our HVAC systems to help maintain the air quality in the buildings. District officials will continue to monitor the situation and will make a decision later this morning as to whether or not to restrict after school outdoor activities, such as athletic and band practices.

Teachers and school nurses will continue to monitor those students who are asthmatic and those with other respiratory conditions. According to the Texas Department of State Health Services, “Common symptoms of smoke exposure include coughing, scratchy throat, irritated sinuses, shortness of breath, chest pain, headaches, stinging eyes and runny nose.” If a teacher sees a student with an increase in these symptoms, the student will be directed to the school nurse.

Please continue to monitor the Katy ISD website for more information about this situation as it becomes available.


Motorola Facebook Phone?

I have to say that a Motorola Facebook phone (as reported by TechCrunch) was not something I expected to see. Given Google’s recent purchase of Motorola Mobility, their launch of Google+ as a competitor to Facebook, and the ongoing battle for ad revenue, I don’t see a bright future for this device. Given the development cycle of these sorts of devices, it was likely in the pipeline well before Google’s purchase, which explains why it’s even seeing what little light of day it is getting at this point. But I foresee a minimal effort to push these devices, assuming they remain as currently designed.


Google, Red Handed and Red Faced

Now making the rounds are comments surrounding the internal Google documents revealed in the Oracle v. Google case. The bits that have aroused everyone’s interest concern two things. The pertinent bullet points from the discovered document are as follows:

  • Do not develop in the open. Instead, make source code available after innovation is complete
  • Lead device concept: Give early access to the software to partners who build and distribute devices to our specification (ie, Motorola and Verizon). They get a non-contractual time to market advantage and in return they align to our standard.

A lot of time and attention is being given to these statements, making Google out to be hypocritical or as having less than savory business practices. The fact is that there is absolutely nothing wrong with either of these practices.

The first bullet point, concerning not developing in the open, is innocuous. There’s nothing evil going on here. There’s nothing about open source or openness in general that requires your code repository be visible 24×7 for all the world to see. If you want to stay in the coding cave and do your development (or innovate as Google puts it), only releasing the fruits of your labor when they are ripe and bursting with innovative goodness, that is your prerogative. I suspect it’s the word choice that has folks up in arms here but conceptually this is the same thing as the guy working on a new device driver to tack onto the Linux kernel, not releasing it until it’s feature complete. Really, it’s just not a big deal.

The second bullet point, concerning early access to partners who abide by Google’s standards, is a bit more interesting, but nonetheless does not rise to the level of being ‘wrong’. Perhaps a bit embarrassing and maybe even damaging now that it’s out in the open, but not wrong. If Google is developing a codebase and wants partners to adhere to their standards as a precondition of early access, isn’t that their right? Is there anything indicating they absolutely must provide access to everyone, equally? Most open source licenses simply indicate what is involved in the relationship between the developer and the recipient of the code. It has nothing to do with any other relationships. Should Google have done this? Probably not from where they’re sitting now, but at the time it was a calculated risk. Were they able to offer something in exchange for partners abiding by the vision Google had for the Android platform? What were the chances of it becoming public knowledge? How damaging would it be? Unfortunately for Google they may be about to find out. For my part? *meh*


HP’s Future Cloudy

HP has announced a ‘private Cloud beta’ to introduce developers to their new HP Cloud Services. This is broken into two actual named services, HP Cloud Compute and HP Cloud Object Storage. This breaks the cloud functionality up into the two traditional bits of cloud computing: putting stuff in the cloud (storage) and doing stuff in the cloud (computing). I just wonder if it’s the right move, at the right time, done right.

With this announcement, HP is further declaring their shift to software services under CEO Apotheker. Moreover, these services are targeted squarely at developers and companies who will build upon them and deliver their own products with HP as the underpinning, again a shift away from HP as the provider of an end product for consumers. This isn’t a bad thing, but it presents a problem: product differentiation. There are already cloud storage solutions available, Amazon S3 being the best known, with Google Storage still in Labs and other, lesser-known competitors besides. Why is HP entering what looks to be a pretty crowded field? If they have a long-term plan, given the turmoil they’ve been suffering through, wouldn’t it be a good idea to be as transparent as possible right about now?

If HP Cloud Object Storage and HP Cloud Compute are meant to integrate tightly, it makes a certain amount of sense. If you truly have a need for heavy computing, it’s reasonable to leverage HP’s servers to get the job done, perhaps at a fraction of the cost it would take to buy the hardware yourself. And their storage is right there waiting for you to use too. But one gets the impression that Object Storage is intended to be leveraged as a separate entity altogether. And as evidenced by yesterday’s Google Docs vanishing act, albeit a temporary one, the further your data is from you, the more easily it can slip out of reach when you need it the most.

Still, kudos to HP for announcing something they are actually going to offer. It must be nice not to have to constantly remind folks about the products they are killing off.


Google’s Cloud Evaporating

I’ve written before, elsewhere, about cloud computing as the latest trend (though that’s not to say it’s new, just that it is trending… again). At the time I laid out pros and cons from the point of view of putting the entire computing experience into the cloud. But of course, that’s only one way to do it. Currently there are two major companies pushing their own views of how cloud computing should be done: Google and Apple. And Google just stumbled.

Google suffered an outage today with their Google Docs service. Google Docs, if you are not familiar with it, allows you to import, create, edit and share documents using only your Google account and a modern browser. These documents are Office-like, with the ability to import Word, Excel and PowerPoint documents as well as to create native Google Docs documents. All of the storage is tucked away on Google’s servers. All you need to do is launch a browser, direct it to Google, and you’re good to go. Equally convenient is the ability to share these documents with other Google users, making them immediately available for viewing or even editing, including collaborative editing should you so choose. That is, convenient until it stops working.

Apple on the other hand is nearing the release of the much anticipated iCloud service, enabling the cross-device sharing of documents and settings between thick client apps on a per user basis. As information is altered, it is marked for synchronization. Presumably if the service is unavailable, the synchronization step is simply delayed until the service is available once more. This could be because the network connectivity has dropped or because Apple’s servers are dead. It doesn’t really matter. The cloud connection becomes a mere background task while for the end user life goes on as usual. And that’s the way cloud computing should be.

Google’s entire platform is centered around its own services run on its own servers. Apple is about their hardware. The services are an aside, or perhaps a funnel, showing potential buyers the extra goodies they get by joining the Apple camp. As a result, Apple doesn’t need to create a web-enabled version of iWork or iLife that works in a browser. They don’t want to. They want apps that run on your iron, in your own home or office. Namely, the iron you bought in the form of your MacBook or iMac or Mac Pro. Google, on the other hand, is platform agnostic. You could be using a Dell, an Asus, an HP (well, for a little while longer anyway). It only matters that you are using their services.

I should correct myself. Google does actually have hardware for sale… the Chromebook. Running their OS, targeted at their services and software. So in fact, insofar as Google is playing in the hardware space, they are actually working the exact reverse route from Apple, using hardware to sell their services. As their flagship hardware product, I don’t expect them to drop it, but I also don’t expect it to take off. Especially with the possibility that one little network outage could leave you unable to work with any of your documents.

Which brings us back to today’s outage. Google hasn’t misstepped very often, but they’ve doubled down on software as a service and committed fully to cloud computing, pushing everything off of the user’s PC and into the cloud. As a result, if they lose this bet, it’s going to hurt very badly. And that’s before even mentioning Microsoft’s burgeoning efforts in this space. Google is taking their stand in the cloud, but if they’re not careful, they’ll find themselves taking a big fall.


What Went Wrong At Yahoo?

Yahoo is hurting. It’s been hurting for some time now. Its growth has been stunted; in fact, ad revenue has shrunk, particularly in the face of fierce competition from Google and subsequently Facebook. According to the Wall Street Journal, an insider source reports the company is willing to consider selling to the right bidder. Taken with the rest of the financial news about the company’s woes, it signals a pretty sharp descent from what were once lofty heights. What went wrong?

It is easy to say “Google and Facebook” and leave it at that, but that leaves a lot unsaid. Google, for instance, nominally offered precisely what Yahoo was offering. As has been stated repeatedly of late, the real client of Google (as well as Yahoo) is advertisers. Their real product for sale is the attention of the users of their various services. But in order to create that product, they still have to provide something attractive for those visitors. In other words, a better experience. That is where Yahoo failed against Google. Whether or not Google has strayed from its “Don’t Be Evil” mantra, it started by focusing on providing quality search results for those using their search engine. The algorithm has been tinkered with and improved over time, but even from the start, users were treated to a very simple and very effective interface. One text field, one button. This stood in contrast to Yahoo’s interface, which enticed you to drill down through their hierarchy of categories. Of course Yahoo also had a search field, but on top of the relative clutter of their home page, they had yet to really provide a similarly effective search experience. As a result, the core reason people visited either site was better served by going to Google rather than to Yahoo.

Yahoo began providing peripheral services to their users before Google did, like Yahoo Mail, still one of the most widely used web-based email services. But monetizing those services was still problematic.

So what of Facebook? Where Google stepped in and did Yahoo’s services but better, Facebook offered a completely different service, but one that ultimately competed for the same advertisers. Yahoo had social services in place well before Facebook. But again, they weren’t capitalized on, and thus lost ground. Moreover, services like Yahoo Groups, Yahoo Personals and Yahoo 360 were all too disparate and never provided a single cohesive experience for the end user. Facebook provided all of the social aspects in one location and has continued to add to them. More importantly, Facebook provided third party developers the opportunity to tap into their ecosystem and make money. This not only increased direct revenue to Facebook, it also allowed additional compelling content to come into Facebook for visitors without Facebook having to lift a finger to create it. Yahoo, in contrast, focused primarily on user-created content and again didn’t seem to effectively turn what services they did have into revenue centers.

What Yahoo lacked was a focused vision of not only where to go but how to get there. It seems like any time a new feature was to be added, it would be bolted on rather than integrated. And with the revolving door policy that is developing in their top spot and now the leadership-by-committee approach that also looks to be developing, Yahoo doesn’t look to be breaking out of their slump anytime soon. Still, they are big. They do get traffic. They may not be top dog, but they aren’t to be ignored. If the right person is brought in with the right mandate, a lot could be done to turn the company’s fortunes around.

I’m just not holding my breath.


Set Up Constant Speed in box2d

This brief tutorial will show you how to set up dynamic objects with a constant speed in box2d. It assumes you have a basic understanding of how to set up a box2d project with cocos2d.

As you are no doubt aware, the box2d physics engine is a wonderful tool for creating a virtual world filled with objects that interact in a way analogous to the real world. In this world gravity, friction (including rotational friction), inertia and momentum are all simulated.

But what of the concept of cruise control? You know: you set a speed, and then the target continues to try to match that speed. You might control the vector direction by some other method, but the speed is intended to remain constant. How do you do that? That’s what we’re going to take a look at now.

Conceptually, what we want to do is measure our current speed on each update cycle and then fire an impulse of the appropriate size and direction in order to nudge us up to (or down to) speed. What we specifically do not want to do is simply call SetLinearVelocity(). Why not, you may ask. The problem is that doing so essentially tells the box2d engine “Hey, ignore whatever you *think* should be happening to that body. Here’s the actual velocity.” Instead, what we want to do is tell the box2d engine, “See that body over there? I want you to add this new impulse to it and factor that in along with everything else.” This lets the box2d engine take the entire model and any ongoing interactions into account rather than dropping everything and running with the new values.

Because I’m taking code out of my game Centripetal, I don’t have a full project with a demo set up to show you what I’m talking about. But I will pull out the pertinent bits and provide some illumination on what I’m doing.

Before we get started, remember that box2d is a physics simulator only. It does not display graphics. b2Body objects do have a UserData attribute which is a void* and which can therefore store a pointer to, for example, a CCSprite. Likewise, you can also create a CCSprite subclass which has a b2Body* member and thus the two could refer to one another. I will leave the pointer management concerns to you based on your own implementation.

In my case, I have a CCNode subclass which has member pointers to both the b2World and CCSprite objects.

@interface BodyNode : CCNode {
	b2Body* body;
	CCSprite* sprite;
}
@property (readonly, nonatomic) b2Body* body;
@property (readonly, nonatomic) CCSprite* sprite;
@end

There’s more to it, but that’s enough to get us going here. Now let’s focus on our CruiseControl object. We’re going to subclass BodyNode for this and add a little to it:

@interface CruiseControl : BodyNode {
	float speed;
}
@property (nonatomic) float speed;
@end

We’ve got a BodyNode subclass to which we have added a speed member. Why speed? Why not a b2Vec2? We don’t want a steady direction, just a steady rate of movement. We want the box2d engine to bounce us around and change our direction, but we want to know just how fast we should be moving and try to nudge ourselves just enough, in the current direction, to achieve that. Let’s see how we do it:

-(id) init
{
	if ((self = [super init])) {
		// Schedule the per-frame update callback where we do our speed check.
		[self scheduleUpdate];
	}
	return self;
}

Okay, the first thing you’ll see is that, among other things in init, I’m scheduling an update callback. This doesn’t have to happen in init, but it’s a convenient place to do so. Note that this is a subclass of BodyNode which has a pointer to the CCSprite we will ultimately be moving around in cocos2d. The update will occur on our BodyNode subclass and not directly on the CCSprite we contain.

-(void) update:(ccTime)delta
{
	// Current velocity; its length is our current speed.
	b2Vec2 curvel = body->GetLinearVelocity();
	// Nudge only if we're under the target, or over it by more than the fudge factor.
	if (curvel.Length() < self.speed || curvel.Length() > self.speed + 0.25f) {
		float curspeed = curvel.Normalize();      // normalizes curvel in place, returns old length
		float velChange = self.speed - curspeed;  // negative if we're too fast
		float impulse = body->GetMass() * velChange;
		curvel *= impulse;                        // unit direction scaled to the impulse
		body->ApplyLinearImpulse(curvel, body->GetPosition());
	}
}

There may be more going on inside your update method (there is in mine in fact), but what you see here is the nuts and bolts of the cruise control concept. We first retrieve the current velocity which is a vector with scale equal to current speed. We check that speed against our desired speed. If it is too low or if it is too high, we want to apply an impulse.

Note that I have a bit of a fudge factor. You are dealing with floating point numbers and the usual lack of precision that entails. You can play with your fudge factor as you like. Maybe you’re okay with being a little slower but no faster. Maybe you don’t mind a little wiggle room in either direction. You can alter that to your heart’s content.

So if we need to apply an impulse, we first normalize our current vector of movement. That gives us a direction vector with a length of 1.0f, which conveniently lets us reuse it by scaling it to what we need. We calculate the required change in speed by simply subtracting the current speed from our desired speed. Note that if we are moving too fast, this gives us a negative value. This matters in the next step, where we multiply by the body’s mass to get the magnitude of the impulse to apply along the vector. In the case of excessive speed, that magnitude is negative, which reverses the vector for purposes of applying the impulse. Finally, we apply the impulse to our body at its location, allowing the physics engine to nudge us enough to return us to the correct speed.

Naturally, you can play around with this as much as you like. You can alter the scheduled update to call whichever method you prefer. You can alter the frequency of the scheduled callback too. Or if you prefer, you could conceivably eliminate the update callback on your BodyNode subclass by using the box2d processing loop to watch for your CruiseControl object and perform your check at that time. Regardless, you now have a simple method of setting up cruise control for your box2d objects.

An additional note concerning gravity: When developing Centripetal, I set the simulator up with no gravity as I was simulating a top down view of a frictionless surface. I didn’t need gravity. The problem you will face when adding gravity is that if the gravity is intense enough compared to your desired speed, even with constant impulses to push the object along it won’t be enough to counteract the gravitational pull between steps. So your object might end up slinking around on the bottom of your simulation view rather than moving about freely. If the gravity is low enough relative to the desired speed, then the steady stream of impulses coming each step should be enough to let you fly.


iPad Competition That May Stand a Chance

TechCrunch got their hands on a test version of the new Kindle, and based on their report, it gives a glimpse of how a worthy competitor to the iPad might be fashioned.

Taken as a whole, it’s like most any other Android tablet. The form factor is an improvement and we’ll have to wait to see what the battery life is like. Let’s just say that much of the raw capability of the device will remain the same as any other Android tablet on offer. So what makes up the difference? Spit and polish plus price point.

First, consider Mac hardware in general: laptops, desktops, even displays. They are made primarily of common components that any manufacturer can get ahold of. There are no secrets here. That’s not to say there aren’t some serious hardware design chops being put to work to make that hardware hum, but in terms of the overall capabilities of the units in question, you can find similar quality from many other vendors if you’re willing to look for it. It’s when you boot it up that you see a huge difference. OS X has Apple stamped all over it. It’s a very consistent experience and one that Apple takes great pains to maintain.

Likewise Amazon is putting their stamp on the Android tablet experience with this newest Kindle. You’ll apparently be getting their look and feel, their color scheme (by default anyway) as well as their app store (again, by default). They’ve even taken their version of Android and run with it rather than trying to stay current with the latest updates from Google. Essentially it appears they have forked their own copy of Android, tweaking it to maximize its effectiveness on their own hardware. That’s well and good, but lots of vendors do this, or at least put their own mark on it. The difference here is going to be in execution, and while it remains to be seen how effective Amazon can really be at customizing the Android UI, they have the advantage that their device is being sold to customers with the express purpose of linking them to the Kindle reader and Kindle store. In essence, you’re buying the device specifically because you anticipate using it with Amazon’s services. So they will be more free to integrate their services into the end product without customers complaining that they can’t remove the apps.

And that’s going to be one big difference. Other Android vendors have tended to go the same route as PC vendors, shoveling unwanted and unneeded applications onto the device in order to push customers toward additional purchases or as part of relationships with other vendors. Here, Amazon is the only vendor in question, and the customers are buying the device because they want Amazon’s services.

The other thing that will help this be more competitive with the iPad is the price point. It’s low. It’s not HP TouchPad low, but at $299 it’s below even entry level iPad prices. Plus, unlike previous Kindle devices, it’s intended to be a fully functional tablet, not merely an e-Reader. Even if Amazon is selling at or just below cost, they are no doubt expecting to make it up with additional revenue down the road from new Kindle book sales. And this is the secret sauce for the price point. HP had no plan beyond selling the hardware. Sure, they would have loved to have leveraged those TouchPad sales into additional software sales down the road, but the fact is HP is not an Android developer. They don’t have anything that the typical customer links to tablet software. So the ridiculously low price HP is offering their units for is unsustainable in the long run. Amazon does have that software in addition to their own Android app store. It remains to be seen how popular their app store will be with developers and purchasing customers, but it’s definitely a plus. Gravy, really, since Amazon is going to be primarily counting on Kindle sales, not app store revenue, to sustain Kindle purchases.


Microsoft Is Watching You

The Guardian is reporting that a lawsuit was filed last Wednesday claiming Microsoft is tracking users of Windows Phone 7 devices even in situations when location information was purportedly disabled. In the article, and in the ensuing discussion about the case, Apple’s name was inevitably dragged into the fray, focusing on the hubbub that was brought forth in April concerning the ‘consolidated.db’ file which stored timestamped latitude/longitude values, sometimes as far back as a year. As Josh Halliday at The Guardian puts it:

The lawsuit follows mounting concern about how technology giants, including Apple and Google, record users’ private data. Microsoft, Nokia, Apple and Google were called before the US Congress in April to explain their privacy policies after security researchers uncovered hidden location-tracking software in iPhones. Google Android phones were subsequently found to gather location data, but required users’ explicit permission.

There’s nothing inherently flawed with the quote above. Yes, there was concern about the possibility of tracking by several large companies. Yes the aforementioned companies were called before Congress. But no further mention is made of how Apple closed things out. And I imagine things will be a bit different with Microsoft.

To begin with, Microsoft’s declaration in their letter to Congress reads similarly to Apple’s press release with regard to what each company states they collect. Essentially they both claim to only track approximate location in order to provide a better user experience. In both cases, a small portion of the entire database of known Wi-Fi and cell tower locations is sent to the phone in order to be prepared to quickly obtain a more precise GPS based location on demand. Both companies also state that they honor the disabling of location services by disallowing the dissemination of this information to apps on the device which make a location request.

The differences begin with how the outcry started in each case. For Apple, the existence of the database had long been known by those technically savvy enough to snoop around the iPhone’s internals and figure out what they were looking at. It wasn’t until Alasdair Allan and Pete Warden revealed an open source utility to fetch the database for your viewing pleasure that things were sent into damage control. Shortly thereafter, Apple issued their press release which stated, among other things:

7. When I turn off Location Services, why does my iPhone sometimes continue updating its Wi-Fi and cell tower data from Apple’s crowd-sourced database?  
It shouldn’t. This is a bug, which we plan to fix shortly (see Software Update section below).

It further added:

Software Update 
Sometime in the next few weeks Apple will release a free iOS software update that:

    • reduces the size of the crowd-sourced Wi-Fi hotspot and cell tower database cached on the iPhone,
    • ceases backing up this cache, and
    • deletes this cache entirely when Location Services is turned off.

In the next major iOS software release the cache will also be encrypted on the iPhone.

That was it for Apple. They would issue a free update that would cease even grabbing the cached data if you disabled location tracking, reduce the amount of cached data retained, cease backing it up if you did retain it, and delete the cache entirely if you disabled location tracking. Moreover, the next major iOS release encrypted it locally when it was stored. There were never any accusations of tomfoolery on Apple’s part.

In Microsoft’s case, the first sounding of the gong is the result of a lawsuit filed in Microsoft’s own backyard, so to speak. Not simply an indication of something a techie found that was subsequently addressed, but rather someone essentially throwing down with them. Of course, frivolous lawsuits are filed all the time, but I don’t see any advantage to be had here unless there is some truth to it. Even so, it’s an interesting distinction in terms of how the starting gun sounded.

So now we’re waiting to hear from Microsoft, to get their side of the story. Apple took 7 days to complete their response, and I imagine some of that time was spent with engineering, looking for the bug they spoke of. There was, I’m sure, time spent mulling over release dates, etc. We’re still within the same 7 day mark for Microsoft’s response, and they have at least indicated they will be responding though I figure that was a given. I wonder if they’ll admit it was a problem and indicate how they’ll be fixing it, or if they’ll take a more defensive posture. I’m guessing the latter. Regardless, I’ll be getting the popcorn and pulling up a chair. This ought to be interesting.


Virtual Goods Sales Evil?

If you play a game online these days, there is a more than even chance that you can obtain “things” in the game which can be found, earned, bought with in-game currency or traded for. Because any of these methods involve an investment of time, you, like many others, have probably wondered if perhaps you could pay someone else to just give you the “thing” in question. For many games the “thing” is in game currency. World of Warcraft players have many opportunities to purchase gold for their characters in the game, allowing them to participate actively in the auction houses online and easily afford some of the game’s more extravagant purchases like special mounts or training.

Second Life, an online virtual world where players can create content, build up virtual real estate and freely buy and sell virtual goods with one another, actually had a fairly robust economy as recently as Q2 2011. League of Legends uses what is known as a freemium model: the core service is free, while certain content, in the form of upgrades or reskins, is available through purchase. The list goes on; game developers have quickly latched onto the idea that they can charge customers for the flip of an electronic bit.

While there are many who think this is perfectly acceptable and even encourage the development of this model, there is a growing number of people who reject the idea of paying for virtual goods. They seem to accept that some games will require a subscription fee for ongoing access to the game content itself, but believe that it is unreasonable or illogical to purchase in-game content with real money. Typically it comes down to possession of real property. If I buy a book, I have physical possession of that book. I can read it, shelve it, burn it, throw it at a burglar or build a shrine to it. Whatever I choose to do with it, it doesn’t matter whether the company that sold me the book, or the book’s author for that matter, is still around. Nothing prevents me from continuing to possess the book.

Things change when you buy a virtual item. Since World of Warcraft came onto the scene, it has seemed an unstoppable juggernaut in the MMORPG space, chalking up record-breaking subscription numbers year after year. It has finally seen a decline in those numbers, which may or may not be indicative of the eventual end of the iconic franchise’s tenure at the top of the food chain. Regardless, it seems commonly held wisdom that it too will eventually succumb to something that comes after, even if that is another Blizzard MMO. And when the servers for WoW are finally shut down, that epic Sword of Truthiness will no longer be available. Therein lies the argument of the naysayers opposing virtual goods purchases: you aren’t buying anything tangible. The company might close its doors with little to no notice, and that shiny electronic loot is gone, along with any characters you might have built up.

The problem with this thinking is that it is locked into the idea that virtual goods are like their real-world counterparts. They are quite different and should be treated as such. This doesn’t mean that virtual goods have no value, just that we can’t expect to treat them like something we can go down and buy at the furniture store. If we start with the mindset that the two are in fact different, with virtual goods carrying the greater risk of disappearing in spite of our best efforts, then we begin to frame the discussion more reasonably.

Even so, the thinking goes, why pay real, permanent money for something that is here today and gone tomorrow? To which I would respond by asking, “Did you enjoy your last movie?” Where is that movie now? Did you frame it and put it up in your living room? Did it drive you to the grocery store yesterday? Are you wearing it? Of course not. You paid for an experience. You paid to see the result of someone else’s work. How is this different from purchasing some item inside an online game? An artist created the model. A developer wrote the code. You are buying the experience. It is entertainment. For now, at least.

Perhaps some day we will conduct business in a virtual world like Second Life: hold meetings, negotiate deals, interview prospective employees. When that happens, we may want to build a virtual corporate office with virtual furniture, and we will likely pay for the privilege. In such a case, we still want that experience. We are decorating our world, virtual or otherwise, with the works of others.

That is what you are paying for, and that is really no different from buying a book. The book, the physical pages, is simply a medium. It perhaps presents greater value because it is a flexible medium that won’t just disappear, though it is not without its disadvantages, like requiring physical space and having mass, which becomes a problem when you are moving. But what you are really buying is the story or the information contained within the book. You could have gotten it as an e-book, or perhaps on CD. Same story. Same content. Different medium.

But what about account theft? It is true that in some cases, perhaps many depending on the virtual world in question, account theft is the source of the virtual goods sold. Someone’s account is hacked and their currency transferred to a temporary account, which then passes the gold through one or more handling accounts before being deleted itself. A buyer is quickly found and pays for the in-game currency; once the real-world money changes hands, the bandits send along the stolen currency and move on to the next “customer”. It is possible for such transactions to be rolled back later, but the real-world currency exchange is already complete, often with little or no recourse for the person who paid.

This is, however, symptomatic of other problems. There will always be those who try to game the system, real world or otherwise, taking advantage of their ability to lure unsuspecting victims into giving up their money. In many cases the buyers didn’t even realize that what they were doing was wrong or that they were dealing with criminals. The problem is not that virtual goods transactions are responsible for these thefts; it is that these thefts are used as part of virtual goods transactions. As these virtual worlds move toward more secure operating models, or even handle the transactions themselves rather than leaving them to some sort of black or gray market, the chances of theft will be drastically reduced. When the operators of these virtual worlds take an active hand in running this market, they can also avoid the inflationary effects that often occur as a result of currency purchasing.

All in all, there isn’t really anything inherently wrong with purchasing or selling virtual goods for real-world money. The philosophical opposition to the practice is rooted in mistaking physical and virtual attributes for one another, and in how things were done early on in these burgeoning marketplaces. Even though there are now adults who have never lived a single day without the Internet, we are, as a society, still grappling with the changes that our online activities are creating in our lives.