Continuuity Loom 0.9.7: Extensible cluster management

Jun 10 2014, 11:12 am

Derek Wood is focused on DevOps at Continuuity, where he is building tools to manage and operate the next generation of Big Data applications. Prior to Continuuity, Derek ran large-scale distributed systems at Wells Fargo and at Yahoo!, where he was the senior engineering lead for the CORE content personalization platform.

In March, we open sourced Continuuity Loom, a system for templatizing and materializing complex, multi-tiered application reference architectures in public or private clouds. It is designed from the ground up to support different parts of your organization, from developers, operations teams, and system administrators to large service providers.

Since our first release, we have heard a lot of great things about how people are using Continuuity Loom, as well as about features that have been missing. After taking in all the feedback, we are excited to announce the next version of Continuuity Loom, codenamed Vela. The theme for this release is “Extensibility”: we have been working on making Continuuity Loom integrate with your standard workflows for provisioning and managing multi-tiered applications, and on making it easier to extend Continuuity Loom itself to fit your needs.

Highlights of Loom 0.9.7

  • Plugin registration, which surfaces plugin-defined fields for configuring providers and automators

  • Support for finer-grained service dependencies

  • Cluster life-cycle callback hooks for integrating with external systems such as metering and metrics

  • The ability to extend a cluster by adding new services to it or reconfiguring existing services

  • Support for starting, stopping, and restarting services, with dependent services automatically included in the operation

  • Personalized UI skins

  • More out-of-the-box Chef cookbooks, including Apache Hive™ recipes and recipes for Kerberos-enabled clusters

Detailed overview of Loom 0.9.7 features

Cluster Reconfiguration

How many times have you changed a configuration setting for a service on a cluster and then had to remember to restart the right services in the right order for the change to take effect? When making changes in multi-tiered application clusters, there is a lot to remember. Wouldn't it be simpler if you could change the configuration you want and let the system figure out everything else, ensuring the change is active without any hassle? With this release of Continuuity Loom, you no longer have to worry about which services need to be restarted. Continuuity Loom automatically figures out the service stop and start order based on your service dependencies. You can find more information about how this is done here.
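
As an illustration of the general technique, here is a minimal, self-contained Java sketch (not Loom's actual implementation, and with hypothetical service names): the start order comes from a topological sort of declared service dependencies, and the stop order is simply the reverse.

```java
import java.util.*;

// Illustrative sketch only: a topological sort over declared service
// dependencies, similar in spirit to how a cluster manager can derive
// restart order. Not Continuuity Loom's actual implementation.
public class RestartOrder {

    // Map each service to the services it depends on (hypothetical example).
    static final Map<String, List<String>> DEPENDS_ON = Map.of(
            "zookeeper", List.of(),
            "hdfs", List.of(),
            "yarn", List.of("hdfs"),
            "hbase", List.of("hdfs", "zookeeper"));

    // Depth-first topological sort: dependencies come before dependents.
    static List<String> startOrder() {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String service : DEPENDS_ON.keySet()) {
            visit(service, visited, order);
        }
        return order;
    }

    static void visit(String service, Set<String> visited, List<String> order) {
        if (!visited.add(service)) {
            return;                                // already handled
        }
        for (String dependency : DEPENDS_ON.get(service)) {
            visit(dependency, visited, order);
        }
        order.add(service);
    }

    public static void main(String[] args) {
        List<String> start = startOrder();
        List<String> stop = new ArrayList<>(start);
        Collections.reverse(stop);                 // dependents stop first
        System.out.println("stop order:  " + stop);
        System.out.println("start order: " + start);
    }
}
```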

Add and manage cluster services easily

Let's take a concrete use case: say your administrator has configured a template that creates a Hadoop cluster (HDFS, YARN, HBase, ZooKeeper) with Kerberos security enabled. As a new user, you would like to start with a basic cluster containing just HDFS and YARN until you are ready to add more. Continuuity Loom provides an easy way to remove services you don't need at creation time and then add them back to the live cluster later with just a few clicks. With this release, users can also stop, start, and restart services without having to worry about which additional or dependent services need to be restarted.

Plugin Registration

In line with our theme of extensibility, we wanted to make sure developers can write custom Automator and Provider plugins. As part of this, plugins now define the fields they require, which are surfaced in the UI and passed to the plugin during task execution. For Provider plugins in particular, this lets you offer different options for provisioning clusters. For example, Rackspace requires a username and API key, while Joyent requires a username, key name, and private key file. It is now possible to write your own plugins and declare the fields they require. With this feature, you can also add support at the API level for any container (like Docker), OS, or cloud provider. You can learn more about this feature here.
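
To illustrate the idea, the Java sketch below models a provider plugin declaring the fields it needs so that a generic UI layer could render them. The field names mirror the Rackspace and Joyent examples above, but the classes and structure are hypothetical and are not Loom's plugin API.

```java
import java.util.List;

// Hypothetical sketch of plugin-declared fields; not Loom's actual plugin API.
public class ProviderFields {

    // A field the plugin needs from the user, with enough metadata for a
    // generic UI to render it (label, whether the value is sensitive, etc.).
    record Field(String name, String label, boolean sensitive) {}

    record ProviderPlugin(String name, List<Field> requiredFields) {}

    public static void main(String[] args) {
        ProviderPlugin rackspace = new ProviderPlugin("rackspace", List.of(
                new Field("username", "Rackspace username", false),
                new Field("api_key", "Rackspace API key", true)));

        ProviderPlugin joyent = new ProviderPlugin("joyent", List.of(
                new Field("username", "Joyent username", false),
                new Field("key_name", "SSH key name", false),
                new Field("private_key_file", "Path to private key file", false)));

        // A generic UI or API layer surfaces whatever each plugin declares,
        // then hands the collected values back to the plugin at task execution time.
        for (ProviderPlugin plugin : List.of(rackspace, joyent)) {
            System.out.println(plugin.name() + " needs:");
            plugin.requiredFields().forEach(f ->
                    System.out.println("  " + f.name() + " (" + f.label() + ")"));
        }
    }
}
```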

Finer-Grained Dependencies

Prior to this release, all service dependencies were applied at every phase of cluster creation. This forced unnecessary exploration of the solution space and execution of dependencies in phases where they didn't apply. This release adds fine-grained dependency management for services: Continuuity Loom administrators can now specify required or optional service dependencies that apply at runtime or at install time (applied only when the service is installed and available on the machine). These are specified when the service is defined, so users don't have to worry about them. This gives granular control over how services are deployed and opens up support for HA Hadoop clusters and secure Hadoop clusters, which require external Kerberos Key Distribution Centers (KDCs). You can learn more about this feature here.
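
As a rough sketch of the distinction (hypothetical types and service names, not Loom's actual service definition format), a service's dependencies can be split by phase and by whether they are required:

```java
import java.util.Set;

// Hypothetical model of the distinction described above; not Loom's schema.
// A service can declare dependencies that matter only at install time,
// dependencies that matter at runtime, and mark either kind as optional.
public class ServiceDependencies {

    record Dependencies(Set<String> required, Set<String> optional) {}

    record Service(String name, Dependencies installTime, Dependencies runtime) {}

    public static void main(String[] args) {
        // Example: a Kerberos-secured datanode might need a Kerberos client
        // installed on the same machine, while at runtime it needs the namenode
        // and, optionally, an external KDC that the cluster itself never provisions.
        Service datanode = new Service(
                "hdfs-datanode",
                new Dependencies(Set.of("kerberos-client"), Set.of()),
                new Dependencies(Set.of("hdfs-namenode"), Set.of("kerberos-kdc")));

        System.out.println(datanode);
    }
}
```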

Cluster life-cycle callback hooks

Often, you need to integrate Continuuity Loom with other workflows that exist in your organization. This feature lets the Continuuity Loom administrator configure a callback class that inserts custom logic before and after cluster operations. Out of the box, Continuuity Loom provides an HTTP callback implementation of the ‘ClusterCallback’ interface. You can use this feature to integrate the cluster life-cycle with monitoring systems, metering systems, or even your favorite chat application, alerting when clusters are created or deleted, or when an operation fails. You can learn more about this feature here.
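
The exact ‘ClusterCallback’ method signatures aren't covered here, so the following Java sketch uses hypothetical hook names (onStart/onFinish) and a hypothetical chat webhook URL purely to illustrate the kind of integration a callback class could perform; consult the Loom documentation for the real interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch only. The method names and parameters below are
// hypothetical, not Loom's actual ClusterCallback interface.
public class ChatNotifierCallback {

    private final HttpClient client = HttpClient.newHttpClient();
    private final URI webhook =
            URI.create("https://chat.example.com/hooks/clusters"); // hypothetical endpoint

    // Called before a cluster operation begins (hypothetical hook).
    public void onStart(String clusterName, String operation) {
        post("Starting " + operation + " on cluster " + clusterName);
    }

    // Called after a cluster operation completes or fails (hypothetical hook).
    public void onFinish(String clusterName, String operation, boolean succeeded) {
        post((succeeded ? "Finished " : "FAILED ") + operation + " on cluster " + clusterName);
    }

    private void post(String message) {
        HttpRequest request = HttpRequest.newBuilder(webhook)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"text\":\"" + message + "\"}"))
                .build();
        try {
            client.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            // In this sketch, notification failures shouldn't break cluster operations.
            System.err.println("callback failed: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        ChatNotifierCallback callback = new ChatNotifierCallback();
        callback.onStart("demo-cluster", "CLUSTER_CREATE");
        callback.onFinish("demo-cluster", "CLUSTER_CREATE", true);
    }
}
```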

Personalized UI skins

When you install Continuuity Loom on premises, wouldn't you like to change the color scheme and logo so it fits in with the other tools in your organization? With this release, you can change the color, skin, and logo of your Continuuity Loom installation.

This has been an exciting release for us. Check out the Release Notes for more details. Give it a spin by downloading the standalone version for your laptop, and visit the quickstart guide to spin up clusters in the cloud.

Help us make Continuuity Loom better by contributing, reporting any issues or bugs, and sharing your ideas.

Coming soon - Continuuity Loom for free in the Cloud

Be sure to sign up at tryloom.io to be among the first to know when the free, cloud-based version of Continuuity Loom is available.

Continuuity & AT&T Labs to Open Source Real-Time Data Processing Framework

Jun 3 2014, 10:32 am

Nitin Motgi is Co-founder of Continuuity, where he leads engineering. Prior to Continuuity, Nitin was at Yahoo! working on a large-scale content optimization system externally known as C.O.R.E. He previously held senior engineering roles at Altera and FedEx.

Today we announced an exciting collaboration with AT&T Labs to integrate Continuuity BigFlow, our distributed framework for building durable, high-throughput real-time data processing applications, with AT&T's streaming analytics tool, an extremely fast, low-latency streaming analytics database originally built out of the necessity of managing AT&T's network at scale. The outcome of this joint effort is a new project, codenamed jetStream, which will be released as an Apache-licensed open source project with general availability in the third quarter of 2014.

Why are we combining our technologies?

We decided to bring together the complementary functionality of BigFlow and AT&T's streaming analytics tool to create a unified real-time framework that combines in-memory stream processing with model-based event processing, including direct integration with a variety of existing data systems such as Apache HBase™ and HDFS. By combining AT&T's low-latency and declarative language support with BigFlow's durable, high-throughput computing capabilities and procedural language support, jetStream gives developers a new way to take in and store vast quantities of data, build massively scalable applications, and update those applications in real time as new data is ingested.

Moving to real-time data applications

Given the wealth of data being generated and processed, and the opportunity within that data, it is critical to give more organizations the ability to make informed, real-time decisions with data. We believe that the next commercial opportunity in big data is moving beyond ad-hoc, batch analysis to a real-time model where applications serve relevant data continuously to business users and consumers.

Open sourcing jetStream and making it available within Continuuity Reactor will enable enterprises and developers to create a wide range of big data analytics and streaming applications that address a broad set of business use cases. Examples of these include network intrusion detection and network analytics, real-time analysis for spam filtering, social media market analysis, location analytics, and real-time recommendation engines that match relevant content to the right users at the right time.

New developer features

jetStream will give developers the following:

  • Direct integration of real-time data ingestion and processing applications with Hadoop and HBase, with YARN used for deployment and resource management

  • Framework-level guarantees of correctness, fault tolerance, and application-logic scalability that reduce friction, errors, and bugs during development

  • A transaction engine that provides delivery, isolation, and consistency guarantees enabling exactly-once processing semantics

  • Scalability without increasing the operational cost of building and maintaining applications

  • The ability to develop pipelines that combine in-memory continuous query semantics with persistent, procedural event processing through simple Java APIs (a rough conceptual sketch follows this list)
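
jetStream's APIs aren't publicly available yet, so the following is a rough conceptual sketch in plain JDK Java (hypothetical class names, not the jetStream API): each event is durably appended to a log, standing in for persistent, procedural event processing, while an in-memory sliding-window count plays the role of a continuous query.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch only; plain JDK, not the jetStream API. Each event is
// appended durably (procedural event processing) and also folded into an
// in-memory sliding-window count (continuous-query-style semantics).
public class MiniPipeline {

    private final Path log;                                  // durable event log
    private final Deque<Long> window = new ArrayDeque<>();   // timestamps in the window
    private final long windowMillis;

    MiniPipeline(Path log, long windowMillis) {
        this.log = log;
        this.windowMillis = windowMillis;
    }

    // Process one event: persist it, then update the in-memory aggregate.
    void process(String event) throws IOException {
        Files.writeString(log, event + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        long now = System.currentTimeMillis();
        window.addLast(now);
        while (!window.isEmpty() && now - window.peekFirst() > windowMillis) {
            window.removeFirst();                            // expire old events
        }
    }

    int eventsInWindow() {
        return window.size();
    }

    public static void main(String[] args) throws IOException {
        MiniPipeline pipeline = new MiniPipeline(Path.of("events.log"), 60_000);
        for (String event : new String[] {"click", "view", "purchase"}) {
            pipeline.process(event);
        }
        System.out.println("events in the last minute: " + pipeline.eventsInWindow());
    }
}
```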

For more information, please visit jetStream.io.

What do you do at Continuuity, again?

Nov 20 2013, 12:10 pm

As Continuuity gets more traction, my friends ask me what I do at Continuuity. The short answer is: we've created a platform that makes building Big Data applications easier. Let me give you more detail with a short example. Let's imagine you need to implement an app.

The app

This example app is very simple, and I won't try to justify why one would build it. The point of the example is to walk you through the developer experience of implementing a Big Data app using Continuuity Reactor.

Strata + Hadoop World 2013: My Perspective

Nov 6 2013, 5:34 pm

“Tech Geeks. Your chariot awaits: SFO->NYC”. As I drove past this billboard on the 101, I felt very excited. The entire team had been working very hard for the past 2 months. This was the moment of truth. We were going to announce the general availability of Continuuity Reactor 2.0 and our strategic relationship with Rackspace. The team would get to meet actual customers and developers, and see Reactor 2.0 in action in the real world.

My Story: From Streamy to Continuuity

Oct 22 2013, 9:01 am

It was the summer of 2006. Facebook was just starting to take off. Twitter launched. I had just graduated from Carnegie Mellon. My friend Don and I both had great prospects to start our tech careers. The road ahead of us presented two paths: the safe route (entering the corporate world) or the risky route (doing something of our own). We chose the latter. We packed our bags, loaded a U-haul and drove to California.
