Sunday, May 31, 2009

DigitalChalk at ASTD

DigitalChalk is going to be giving away $300 over Twitter at the ASTD 2009 Conference in Washington DC this year.  If you are at the conference and have a Twitter account, you can play.  Watch crazy man Josh in this video...



I heard that they were at the White House yesterday and President Obama asked how he could play, but they sadly had to turn him down since he isn't going to be present at the conference.  Oh well, maybe next time.

The goal of the game will be to figure out a word or phrase printed on the backs of shirts being worn around the conference.  If you want to be a live participant in the game, go see Josh and Tony at booth 1519 and make sure that you follow DigitalChalk's Twitter account at http://twitter.com/digitalchalk.  More information is available on Tony's blog.

Wednesday, May 27, 2009

In the Works at DigitalChalk

There are a lot of exciting things happening at DigitalChalk right now.  The development, operations, and quality teams are hard at work on a new release of the product coming very soon.  Code-named "Einstein", this release is packed full of feature requests from our customers.  While we wrap up the open tickets and complete QA, I thought I would start giving a sneak peek at some of the features that will be included by writing about them here in my blog over the next couple of weeks.

We have done a lot of work on the interface of DigitalChalk in the Einstein release.  Immediately you will notice that we have changed the look to be much more streamlined.

We had a couple of goals in moving this direction.  First of all, we really wanted to provide much more information on a single page to the instructor or student.  This is a real challenge because we had to strike a balance between a page that feels cluttered with too much information and one that is "too clean," where the information you need cannot be found without navigating away.

It is also important to us that Einstein is compatible with a wide range of browsers.  Those of you who have done any development at all on the web know what a pain that can be.  Something that works in Internet Explorer will not work in Safari, something that works in Firefox will not work in Internet Explorer, and that isn't even taking into account all of the different versions of each browser.  Sometimes this feels like a losing battle for a complicated site.

Page rendering time has also been an area where we have devoted significant resources.  We are now seeing improvements of over 500% on some pages!  I will be including more screenshots of various parts of the site as I talk about specific changes.  We are all very excited about the changes and can't wait to push them out to you.

Monday, May 25, 2009

Apple to Build Data Center in NC?


News hit the street this weekend that Apple could be considering North Carolina as its next data center location. Just a couple of years ago Google selected Lenoir, North Carolina as the location for a $600 million data center, and Apple may be joining them in the Tar Heel State. The story is that the North Carolina legislature is offering large tax breaks to Apple in order to attract them to the area. I am happy to see that we are starting to think a little more about the types of jobs that will sustain the economy in the future. Computing power will always be needed, and the demand for it is ever growing. Technology will continue to drive much of the innovation occurring today, and I welcome more support of that here in my home state. It has been sad to see so much job loss and heartache as the textile and furniture industries have moved elsewhere and overseas, but it is time for us to look to the future and continue to reinvent ourselves. Another data center will drive more need for bandwidth and reliable power and will continue to draw more technology jobs this direction. It would thrill me to see North Carolina, especially Western North Carolina, become the Silicon Valley of the East. We are a long way from that now, but let's push forward and look ahead. Come on Apple, we are ready for you.

Thursday, May 21, 2009

Amazon Import/Export

After a recent talk that I gave on cloud computing, one of the attendees contacted me with some questions about the "safety" of the data and also wanted to talk about vendor lock-in.  It is no secret that I am a fan of the Amazon Web Services cloud platform, and so it followed that these questions all had to do with the way Amazon stores the data.  While these are typical questions, today the second question became much easier to answer.  Amazon Web Services announced the availability of AWS Import/Export.  Quite simply, Amazon is offering its customers a very easy way to ship a disk of data to them: they will push it into S3 to your specifications, or grab your data out of S3 and put it on the disk for you.  This is very attractive because it can take days to move hundreds of GB into or out of S3 over an office network, simply because of the limitations of bandwidth at the average workplace.

I am not a fan of vendor lock-in and have thought long and hard about how to avoid it.  Any code that we write storing to and from Amazon's infrastructure is isolated enough that we can switch to another provider by reimplementing that single area.  But the fact still remains that it would take weeks to move all of our data, and it would have to be done over an extended period of time.  One strategy is to move data directly between two cloud providers instead of pulling it down locally and pushing it back up; while moving over a backbone is faster, it is still not optimal.  With AWS Import/Export you are one step closer.  For $80.00 per device plus $2.49 per data-loading hour, you can currently have all of the data on a disk pushed into Amazon's cloud, and the export facility is coming soon.  I commend Amazon for providing so many tools and conveniences.
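To put the bandwidth limitation in concrete terms, here is a back-of-the-envelope calculation. The 500 GB data size, 10 Mbps uplink, and 80% sustained-throughput figure are illustrative assumptions, not our actual numbers:

```python
# Rough estimate of how long it takes to upload a dataset over a
# typical office connection, to compare against shipping a disk.

def upload_days(data_gb, uplink_mbps, efficiency=0.8):
    """Days to push data_gb gigabytes over an uplink_mbps link,
    assuming we only sustain `efficiency` of the rated bandwidth."""
    bits = data_gb * 8 * 1000**3                     # decimal GB -> bits
    seconds = bits / (uplink_mbps * 1e6 * efficiency)
    return seconds / 86400

# Example: 500 GB over a 10 Mbps office uplink
print(round(upload_days(500, 10), 1))  # roughly 5.8 days
```

Nearly six days of saturating the office uplink for half a terabyte, which is exactly why dropping a disk in the mail starts to look attractive.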
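The isolation idea above is simple to sketch. In this toy example, application code only ever talks to a narrow storage interface, and a single class knows about the provider; swapping S3 for something else means reimplementing that one class. All of the names here are hypothetical, not our actual code:

```python
# A minimal sketch of isolating cloud storage behind one interface.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The single seam between the application and any storage provider."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in implementation for the sketch.  A real S3Store would sit
    behind this same interface and be the only code touching the AWS SDK."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def save_course_media(store: BlobStore, course_id: str, media: bytes):
    # Application code never names a provider, only the interface.
    store.put(f"courses/{course_id}/media", media)

store = InMemoryStore()
save_course_media(store, "101", b"video-bytes")
print(store.get("courses/101/media"))  # b'video-bytes'
```

Moving the data itself is still the slow part, but keeping the provider behind one seam means the code change is a single reimplementation rather than a rewrite.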

Monday, May 18, 2009

Amazon Releases New Cloud Computing Services

One of the big draws to the cloud is its ability to scale with your application.  That became much easier with Amazon Web Services today.  Early this morning the Amazon Web Services team launched three new services: CloudWatch, Auto Scaling, and Elastic Load Balancing.  Combining these three services allows a user to configure and scale their application based upon information gathered by CloudWatch.  These are important additions to the Amazon Web Services offerings because they take more of the coding and configuration work off of developers and system administrators.  This is a key benefit of cloud computing that Google App Engine and Microsoft Azure have built in and keep completely transparent to the developer.  While Amazon has not made it completely transparent with these services, it is a great step and may be exactly the middle ground that is needed.
CloudWatch allows you to monitor CPU utilization, data transfer, disk usage, request rate, and traffic to your EC2 instances.  Based upon the information that CloudWatch gathers, you can set triggers that evaluate that data over a time period and use Auto Scaling to automatically add or remove EC2 instances in the group of machines working on a particular task.  Finally, Elastic Load Balancing helps you distribute incoming traffic across your EC2 instances.  This is a welcome addition because it gives us fault-tolerant load balancing without the cost of setting up several HAProxy instances.  So even though we incur the cost of the new Elastic Load Balancing service, it quickly pays for itself because we can remove our own load-balancing configuration on EC2.  I am excited to see these new services finally go into public beta, and I am looking forward to more.
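The trigger logic is easy to picture. This toy sketch shows the kind of decision the services now automate; the thresholds and sample window are made-up values, and in practice CloudWatch and Auto Scaling do this for you rather than any hand-rolled code:

```python
# Toy illustration of an auto-scaling decision: look at CPU samples
# over a window and decide whether to add or remove instances.
# Thresholds and window size are invented for the example.

def scaling_decision(cpu_samples, high=75.0, low=25.0):
    """Return +1 (add an instance), -1 (remove one), or 0 (hold),
    based on average CPU utilization over the sample window."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return +1
    if avg < low:
        return -1
    return 0

print(scaling_decision([80, 90, 85]))  # 1  -> scale out
print(scaling_decision([10, 15, 20]))  # -1 -> scale in
print(scaling_decision([50, 55, 45]))  # 0  -> hold steady
```

Writing, deploying, and babysitting exactly this kind of loop yourself is the work these services remove.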

Monday, May 4, 2009

SpringSource Acquires Hyperic


It certainly isn't as big a story as Sun being acquired by Oracle, but it is worth noting.  SpringSource has announced that they have acquired Hyperic.  SpringSource is the company that is the driving force behind the Spring Framework, arguably the most widely used open source Java framework in enterprise software today.  Hyperic provides a software suite for monitoring applications and servers, and they have recently been dabbling in providing some of these services for the cloud.  The pairing of these two vendors is especially interesting because both offer great services and tools through open source.  This partnership could allow even more granular visibility into the Spring stack for monitoring.  It will be interesting to see whether a move toward OSGi component monitoring emerges, especially in the context of the SpringSource dm Server.  I hope to see great things come out of this and expect that there will be a lot of value there for the open source community.  We should watch this pair for more tools for monitoring applications in the cloud as well.