Cloud66 vs Opsworks, 1 year later

Cloud 66 is a managed, BYOS (bring your own server) PaaS that runs on many cloud platforms. We have been using them for our flagship product for about 2 years now.

OpsWorks is a service provided by AWS that can manage EC2 or bare metal servers, with some handy prebuilt Chef recipes. We have hosted our time-keeping software with OpsWorks for about 1 year now.


Both services share a few core features:

  • Manage servers outside of their domain; OpsWorks can manage bare metal, and so can Cloud 66
  • Provide 90% of devops for you
  • Provide handy management interfaces and monitoring of servers
  • Click to deploy

Cloud 66

Has been great for us. Minus a major security issue early in their life (delete all managed servers, yeah, it was bad), they have performed very well. 99% of Rails just simply works on their platform, be it setting up the whenever gem for cron tasks or sharding memcached between your servers. They tend to be a week or two behind bleeding-edge releases, which I feel is a very good balance between the latest and greatest and maintaining a stable production environment.

Their service is built around “Stacks”, a set of servers built in the same manner with various add-ons. These stacks are meant to be replaced if you upgrade Ruby or do anything major, and Cloud 66 provides methods to cleanly migrate and upgrade your stacks, IF you are using their managed backup.

All in all our time with them has been great, with our only major complaint being the “nickel and diming”: most of the features are free, but things like backup cost money even if you host it yourself. I think that is pretty poor practice; a customer losing data is VERY bad, and encouraging people to back up their unmanaged server to AWS or something similar should not cost extra.
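For reference, rolling your own offsite backup from an unmanaged server is only a few lines of shell. A minimal sketch, with a hypothetical database name and bucket; DRY_RUN=1 (the default) just prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: nightly Postgres dump pushed to S3 (names are placeholders).
# With DRY_RUN=1 (the default) commands are printed, not executed.
DB_NAME="${DB_NAME:-myapp_production}"
BUCKET="${BUCKET:-s3://myapp-backups}"
STAMP="$(date +%Y-%m-%d)"
FILE="/tmp/${DB_NAME}-${STAMP}.sql.gz"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else eval "$*"; fi
}

run "pg_dump ${DB_NAME} | gzip > ${FILE}"
run "aws s3 cp ${FILE} ${BUCKET}/$(basename "${FILE}")"
```

Dropped into cron, that is essentially the whole “managed backup” feature for a simple stack.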


OpsWorks

Is a whole different beast. It presents a surface of being easy to manage, but whoa nelly, the learning curve is high. To use OpsWorks for anything beyond a Rails getting-started app you must know Chef. We use OpsWorks to manage EC2 instances: 2 app servers, 1 ELB, and 1 RDS instance; memcached is installed on both app servers and shared between the two. Performance is great. Managing and deploying are not; very, very little just works. Any type of background jobs, cron scripts, ImageMagick, gs, or anything beyond extremely basic will require custom Chef scripts.

The service is also built around “Stacks”, very similarly to Cloud 66: you add various layers for your application (e.g. Rails, DB, memcached) and OpsWorks will provision them on new or existing servers. Building servers tends to be a slow process, about 20-25 minutes from the outgoing request, so don’t expect to scale that fast. It does, though, provide methods to autoscale on load or on a time schedule, which is very nice.
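The time-based scaling is also exposed through the API. As a sketch, a call like the following attaches a weekly schedule to an instance; the instance ID and hours are hypothetical, and DRY_RUN=1 (the default) only prints the command:

```shell
#!/bin/sh
# Sketch: turn an OpsWorks instance on for a few hours on Mondays.
# DRY_RUN=1 (the default) prints the command instead of running it.
aws() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo aws "$@"; else command aws "$@"; fi
}

aws opsworks set-time-based-auto-scaling \
  --instance-id 11111111-2222-3333-4444-555555555555 \
  --auto-scaling-schedule '{"monday":{"9":"on","10":"on","11":"on"}}'
```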

OpsWorks is a big boys’ tool; you need to know what you are doing, but it can be a good blend between a fully managed PaaS and rolling your own. My overall experience has been good, but painful at times; we are not a Chef shop and have struggled with weird deployment issues. Most of our gripes are beyond the OpsWorks platform itself, like ELB returning a white page with no error during a 503 event, or deploys failing and bringing down the service.


Cloud 66 is a fantastic service that uses my favorite support tool and is clearly here to make developers’ lives easier. Providing a PaaS on many different cloud platforms, you get the performance you need and the service that keeps you coming back. OpsWorks is hosted Chef, and that’s the end of the story; it will do very little for you if you don’t understand devops. There is no “15 minute” getting started guide on this one. Once it works it’s a solid tool, but you will save very little time in the devops arena.


Running Microsoft AD Primary Domain Controller on AWS

Everyone has aging servers; it seems they are old by the time they turn on. Coupled with Microsoft’s complex licensing for Windows Server, when it came around that we needed some of the newer features, namely custom certificates from Microsoft Certificate Authority, we chose to spin up a simple EC2 server to make our lives easier vs jumping through hoops to upgrade our old server or purchase new hardware.

This configuration requires a bit more than just an EC2 instance:

  • A new Amazon VPC; if you run AD exposed to the public, you are insane.
  • A direct VPN connection to our office.
  • A NAT instance for the private subnet to connect to the internet without having to pipe through the VPN.
  • A VPN connection, or a “bastion” instance, to connect to the VPC if the primary VPN is down.
  • A security device to act as a VPN endpoint on site (a SonicWall TZ 210 in our case).
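As a rough sketch, the pieces above map onto a handful of AWS CLI calls. The CIDRs and IDs below are placeholders, and DRY_RUN=1 (the default) prints the commands rather than creating anything:

```shell
#!/bin/sh
# Sketch of the layout: one VPC, a public subnet for the NAT box and a
# private subnet for the domain controller, plus a VPN gateway.
# DRY_RUN=1 (the default) prints the commands instead of running them.
aws() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo aws "$@"; else command aws "$@"; fi
}

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24  # public / NAT
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24  # private / AD
aws ec2 create-vpn-gateway --type ipsec.1
```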

Below is what we came up with (network diagram: scilucent-ad).

Both the ELB and the NAT instance sit in a “public” VPC subnet, while CA-1 sits in a private one. Only the private subnet has access to the local network link in the direct VPN connection.

The SonicWall provides monitoring via ping over the dual VPN connections to AWS and will attempt to rectify any issues by renegotiating the tunnel. In reality this has been about 80% reliable.
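Given that reliability, a belt-and-suspenders check is easy to cron on a box of your own. A sketch, where the peer IP and the renegotiation hook are both hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: ping a host on the far side of the tunnel; if it is down,
# run a hook that kicks the VPN (the hook command is a placeholder).
TUNNEL_PEER="${TUNNEL_PEER:-10.0.2.10}"   # e.g. the AD box in the private subnet
RENEGOTIATE_CMD="${RENEGOTIATE_CMD:-echo would renegotiate tunnel}"

tunnel_up() {
  ping -c 1 -W 2 "$1" >/dev/null 2>&1
}

if tunnel_up "$TUNNEL_PEER"; then
  echo "tunnel ok"
else
  $RENEGOTIATE_CMD
fi
```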

For the AWS Windows instance you must use an EBS-backed instance, or you will have a bad time. Treat this like any other Windows server you have: ensure you are backing it up properly, security is well configured, and you have a disaster plan in place.



Migrating to Heroku: WordPress Edition

Update: We have since migrated our systems to Pressable.

Starting the new year we decided to do some spring cleaning. We noticed we had a few instances running on Rackspace that were doing next to nothing, basically only hosting this site. Well, it’s time to find a simpler host and close down some older (Ubuntu 10.10) servers.

1st: Finding a solution

After a quick look around I found this. I have played with it before for some clients, but they had a few needs that could not be met. This site, though, is VERY simple; everything should work great.

2nd: Moving your site

For the purpose of this walkthrough I am going to assume a few things:

  • You have an active S3 account with Access and Secret keys handy
  • You have followed the installation steps for wordpress-heroku
  • You have local installs of MySQL and Postgres

What do we need to do?

  1. Migrate our site’s DB to Postgres
  2. Migrate the theme
  3. Upload all of wp-content/uploads to S3

DB Migration

Heroku has a fantastic guide here. I cheated a bit and migrated directly to the Heroku DB. Here is my config file (not the real passwords, but this is what it should look like; my local environment is MAMP).
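The original config is not reproduced here, but a mysql2psql.yml for a MAMP-to-Heroku migration generally looks something like the following sketch; every hostname, credential, and database name below is a placeholder:

```yaml
mysql:
  hostname: localhost
  port: 3306
  socket: /Applications/MAMP/tmp/mysql/mysql.sock   # MAMP's default socket
  username: root
  password: root
  database: wordpress

destination:
  # if file is given, output goes to a dump file; otherwise straight to postgres
  file:
  postgres:
    hostname: your-heroku-pg-host.example.com
    port: 5432
    username: heroku_user
    password: heroku_password
    database: heroku_database
```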

Once you have your config file set up for mysql2psql run from the directory containing your mysql2psql.yml file.
$ mysql2psql


Copy your theme to the wp-content directory of wordpress-heroku. Add it to version control and deploy.
$ git commit -a -m "Adding Theme"

$ git push heroku production:master

At this point the site should be functional; assets should be broken but the admin should work.
FYI: Double-check that the URL is staying on the Heroku URL. WordPress may try to redirect you to the WordPress URL saved in the DB. You can add the below code to your wp-config.php to have WordPress respond to any incoming host and ignore the setting in the DB.
define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);


  1. Configure WPRO at Settings -> WPRO.
  2. Log in to the AWS console and navigate to S3.
  3. Navigate to your bucket.
  4. Ensure permissions are set so Everyone can List.
  5. Upload the contents of wp-content/uploads to the bucket.
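One way to make the uploaded assets publicly readable, beyond the ACL in step 4, is a bucket policy along these lines; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadUploads",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-wp-bucket/*"
    }
  ]
}
```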


A few caveats so far:

  • I have had some issues uploading media from the new post window.
  • Using apex domains on Heroku is dangerous and can interfere with mail delivery.

I will update this as I test this more.