It was the summer of 2006. Facebook was just starting to take off. Twitter launched. I had just graduated from Carnegie Mellon. My friend Don and I both had great prospects to start our tech careers. The road ahead presented two paths: the safe route (entering the corporate world) or the risky route (doing something of our own). We chose the latter. We packed our bags, loaded a U-Haul, and drove to California.
We were both very much drawn to the social explosion happening on the web. After kicking around a bunch of ideas we came up with the idea of Streamy. The idea was to build a platform for content search and discovery with integrated social features like chat, likes and comments. We raised some seed money from friends and family, and started Streamy in our small apartment in Hermosa Beach.
SQL to HBase
Don worked on the frontend while I focused on the backend. Streamy was initially built on PostgreSQL, powered by a master node with 16 cores, 64 GB of RAM, and 15 x 146 GB 15k RPM drives. We soon ran into the costs and limitations of a relational database management system: write speeds degraded as tables grew, and the high-end hardware it required was insanely expensive. We saw what Google was doing with MapReduce and Bigtable, and that encouraged us to move our data platform to Apache Hadoop and HBase. Streamy 2.0 was built entirely on Hadoop and HBase, with custom caches, query engines, and a data API. Our Hadoop cluster was powered by 10 low-spec nodes, each with 4 cores, 4 GB of RAM, and 2 x 1 TB 7.2k RPM drives. It was fast, scalable, and dirt cheap.
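Moving from relational tables to HBase largely comes down to row-key design, since HBase stores rows sorted lexicographically by key rather than indexed by SQL columns. As an illustrative sketch only (the schema and names here are hypothetical, not Streamy's actual design), a composite key of user ID plus a reverse-ordered timestamp keeps each user's newest items first without touching the HBase client at all:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeySketch {
    // HBase sorts rows lexicographically by key, so encoding
    // (userId, Long.MAX_VALUE - timestamp) makes each user's newest
    // activity sort first - the access pattern a feed needs.
    static byte[] rowKey(String userId, long timestampMillis) {
        byte[] user = userId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(user.length + 1 + Long.BYTES);
        buf.put(user);
        buf.put((byte) 0);                             // separator byte
        buf.putLong(Long.MAX_VALUE - timestampMillis); // reverse-order timestamp
        return buf.array();
    }

    // Unsigned lexicographic byte comparison, matching how HBase orders keys.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] newer = rowKey("don", 2_000L); // later event
        byte[] older = rowKey("don", 1_000L); // earlier event
        System.out.println(compareUnsigned(newer, older) < 0); // true: newer sorts first
    }
}
```

The reversed timestamp is what lets a simple scan from the start of a user's key range return the most recent activity first, in place of the `ORDER BY ... DESC` a relational database would run at query time.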
We were doing some pretty cool things technologically: OLTP and OLAP on the same cluster; crawling, analytics, and serving on the same database. We soon realized none of it was easy. By 2008 we had six employees, and most of the team was doing database development of some kind. The time we spent on HBase ultimately distracted us from Streamy itself. Then, in late 2008, the recession hit us hard. After a few more months of making a go of it, we decided to exit. The rest is history.
In 2010, Facebook hired me to help build their real-time messaging platform on HBase. Facebook invests heavily in smart people and infrastructure to solve its problems with Hadoop, and is very successful in doing so. Most startups, however, don’t have such resources, and many large companies simply don’t make the investment. This triggered the idea for Continuuity. I started Continuuity in 2012 with the goal of making the power of Hadoop, HBase, and Big Data technology accessible to all developers.
Hadoop is a tremendously powerful piece of technology, but it’s low-level infrastructure and not targeted at application developers. Not only is the development itself difficult, but debugging, deploying, and managing an app on Hadoop is complex, with much responsibility left to the developer. This is where Continuuity comes in. Continuuity Reactor lets any Java developer easily build Big Data applications on Hadoop that can be instantly deployed on-premise or to the cloud.
Building for the Continuuity Platform
What would have happened if the Continuuity platform had existed when we started Streamy? I used to ask this question all the time. The whole purpose of starting Continuuity was to solve the problems of developers like you and me, so we decided to find out. Don (Co-Founder of Streamy), Nitin (Co-Founder of Continuuity), and I built Streamy Lite part-time over the course of just two weeks. We had a great time building it and look forward to sharing it with everyone. Stay tuned as we add new features and discuss the implementation.
We believe all Java developers can build cool, data-intensive apps using Continuuity Reactor. Download the free beta version of our SDK and take it for a spin - we look forward to hearing your feedback.
CEO and Co-Founder, Continuuity