Deploying Granny to Azure showed plenty of rough edges. Azure's Eclipse plug-in didn't work; in fact, it directed us to an EXE file, which obviously wasn't going to work on Linux. The Linux SDK also did not work. On Windows, deploying the application on Azure was only partially PaaS-like. Instructions for deploying the example "Hello, world" Web application include pointing the setup wizard to the local copy of your favorite application server and JDK. The app server is then merely copied to a Windows Server 2008 VM. After that, you can fairly easily have your application use Azure's SQL Server instance.
Conclusions. If your legacy apps aren't based on .Net, Azure probably won't be your first choice. However, Microsoft's hands-on approach to both customer service and security could take it a long way.
We honestly expected a bit more from the Linux SDK and the Eclipse plug-in. Despite the talk of interoperability and all of the tweeting from OpenAtMicrosoft, Microsoft didn't shine here. Certainly, Microsoft has the wrong messaging on PaaS lock-in for our taste. That said, if you have a mixed infrastructure of .Net, Java, Ruby, Python, and PHP and can do some tweaking but prefer not to rewrite, Azure may be the best choice.
Red Hat OpenShift
Red Hat's PaaS offering, called OpenShift, is aimed at Node.js, Ruby, Python, PHP, Perl, and Java developers. OpenShift combines the full Java EE stack with newer technologies such as Node.js and MongoDB.
Differentiators. OpenShift runs Java applications on the JBoss Enterprise Application Platform (JBoss EAP), Red Hat's commercial distribution of JBoss. Red Hat considers Java Enterprise Edition 6 (Java EE 6) to be a compelling differentiator, along with allowing developers to choose the best tool for the job, whether it's Java EE 6, Ruby, Python, PHP, Node.js, Perl, or even a custom language runtime.
In the coming months, Red Hat will be launching the first commercial, paid, supported tier of the OpenShift service. Red Hat said it will also release an on-premises version for enterprises that can't run in the public cloud due to security, governance, and compliance restrictions.
Lock-in. "No lock-in" was one of the foundational principles used in the design and development of OpenShift, according to Red Hat. The company noted that OpenShift uses no proprietary APIs, languages, data stores, infrastructure, or databases, but is built with pure vanilla open source language runtimes and frameworks. This means, for example, that an application built with Python and MySQL on OpenShift will seamlessly port to Python and MySQL running on a stand-alone server or in another cloud (assuming the language versions are the same). Likewise, a JBoss Java EE 6 application running on OpenShift can be moved to any JBoss server.
Security. Red Hat publicly lists OpenShift's security compliance information. The company said that Red Hat's Security Response Team (the same team that continuously monitors Linux for vulnerabilities) is involved in the design and implementation of OpenShift, and that the OpenShift Online PaaS service is continuously patched and updated by the OpenShift operations team at the direction of the security team. Red Hat also noted that OpenShift runs SELinux, the security subsystem originally developed by the NSA.
Who's using it? Red Hat said a wide cross-section of companies is using OpenShift today, ranging from hobbyist developers and technology startups building their businesses in the cloud to systems integrators, service providers, and Fortune 500 enterprises. The company noted that classic legacy applications running on mainframes or other legacy platforms are not great candidates for migration to a PaaS.
Because OpenShift is considered a "developer preview" -- Red Hat's term for beta or alpha -- the company didn't feel comfortable releasing any information about existing deployments.
How did it do? It was a lot more work than we expected to get Granny deployed to OpenShift. Swapping between the command-line deployment tool and the Web-based provisioning and management console lacked the user-friendliness of CloudBees or Cloud Foundry. The Red Hat Developer Studio plug-ins didn't work with our application out of the box. Ultimately, we had to edit a lot more descriptor files both inside and outside of the application than we did with other solutions.
Had we deployed a Java EE-compliant app, we're sure OpenShift would have been friendlier. But when the command-line tool told us to run a command, then warned us that the command was deprecated, it left a bad taste in our mouths. This is truly a "developer preview," and it's rough around the edges.
Conclusions. If you're already developing JBoss applications, OpenShift may be a good fit. It's worth a preview now, but if you're looking to deploy to a PaaS today, it's not ready. Red Hat should continue to trumpet Java EE compliance as a differentiating factor. However, even by 2006, when Andrew worked at JBoss, he noticed that most applications deployed on JBoss were written to the Spring Framework. Supporting Red Hat's existing customer base is all well and good, but greatness and business success will come from seamless deployment of applications developed by people who are not already in the Red Hat camp.
VMware Cloud Foundry
VMware bought SpringSource in 2009, so it isn't surprising that our "legacy" application, which was already based on the Spring Framework, worked seamlessly on Cloud Foundry. Although Cloud Foundry is still in beta, it was very polished and worked well.
Differentiators. A key differentiator is native support for the Spring Framework. According to VMware, Cloud Foundry was built in collaboration with the SpringSource engineering team to ensure a seamless development, deployment, and management experience for Java developers. VMware also noted that Cloud Foundry is "unique in its multicloud approach," allowing developers to deploy the same application, without code or architectural changes, to multiple infrastructures, both public and private. In fact, this isn't unique -- OpenShift is similar -- but VMware is uniquely positioned to deliver it: unlike CloudBees, Heroku, and Red Hat, VMware has built its own cloud rather than building on Amazon Web Services.
Lock-in. VMware addressed the question of lock-in to my satisfaction. Because the platform is open source and there's a broad ecosystem of compatible providers (examples include CloudFoundry.com, Micro Cloud Foundry, AppFog, and Tier3), developers can easily move applications between Cloud Foundry instances, whether on public clouds or private infrastructure. VMware noted that in addition to the multicloud flexibility, this open source flexibility ensures that developers and customers aren't locked into one cloud or one platform. As proof, the company pointed me to a blog post on extracting data using the Cloud Foundry data tunneling service, which goes well beyond "You can dump it to CSV and port it yourself."
Security. We were unable to find any published documentation on security certifications (PCI, SAE, and so on) for Cloud Foundry. VMware pointed me to its User Authentication and Authorization service, which appears to be a single sign-on scheme based on OAuth2. This could be a helpful service for application developers, but government organizations and large companies will require VMware to document its security certifications before they migrate to its cloud.
Who's using it? Cloud Foundry is well positioned to meet the needs of companies that want a combination of public and private PaaS. Its focus on an ecosystem of Cloud Foundry providers is a strong point, especially with regard to lock-in. Cloud Foundry is clearly aimed at Ruby, Node.js, and JVM-based languages. If you have a more diverse technology base, this may not be your first choice.
VMware pointed me to several published case studies, including Intel, Diebold, AppFog, and Cloud Fuji.
How did it do? We installed the Eclipse plug-in, deployed the WAR, and changed nothing. In fact, the first time we deployed Granny, it was still configured with CloudBees' JDBC settings. Cloud Foundry automatically detected our Spring configuration and reconfigured the database settings for our Cloud Foundry database. This kind of magic may make some people nervous, but it worked seamlessly.
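For the curious, here's a minimal sketch of the kind of local datasource definition our Spring app carried; the bean name, driver class, and URL are hypothetical. As we understand it, Cloud Foundry's Java auto-reconfiguration looks for a single DataSource bean like this and substitutes the connection settings of the database service bound to the application, which is why we didn't have to touch the code.

// Sketch of a typical local Spring datasource (hypothetical names and URL).
// On a laptop this points at local MySQL; pushed to Cloud Foundry with a
// database service bound, the same WAR gets the cloud database instead.
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class LocalDatabaseConfig {
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");    // local Tomcat setup
        ds.setUrl("jdbc:mysql://localhost:3306/grannydb"); // replaced in the cloud
        ds.setUsername("granny");
        ds.setPassword("secret");
        return ds;
    }
}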
Conclusions. Cloud Foundry "just worked" -- we did nothing to the application but install an Eclipse plug-in. What's not to love? For ops teams, there's also a command-line interface. Once this PaaS launches, depending on pricing and the like, it will certainly be a viable choice for Java developers. We assume the experience would be similar for Ruby, the language Cloud Foundry itself is written in. (We also tested the Node.js interface, which was a little trickier but still very workable.)
Cloud Foundry worked great and was the most straightforward of the services we tested. We were so successful with the Eclipse plug-in that we didn't try the command-line interface. Of course, the test wasn't perfectly "fair" in that the app was a Spring app in the first place, but the app was written to run on a local Tomcat instance, yet it deployed seamlessly to the Cloud Foundry cloud. Considering that much of the legacy code moving to the cloud is Java and most existing Java apps are written with Spring, we're excited to see Cloud Foundry launch.
This article, "Which freaking PaaS should I use?," originally appeared at InfoWorld.com. Follow the latest developments in cloud computing at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.