CASSANDRA 3.3 RELEASED
Apache Cassandra 3.3 was released last week. Per the Tick Tock release schedule, this release focuses on bug fixes; no new features were introduced. For practical purposes, consider it a bug fix release for Cassandra 3.2. All told, almost 50 bugs were fixed in this release. Many of the fixes also landed in Cassandra 3.0.3, which was released the same week. As with any Cassandra release, it’s a good idea to read the Changelog and News before upgrading.
CASSANDRA SECONDARY INDEX PREVIEW #1
If you’ve looked into using Cassandra at all, you’ve probably heard plenty of warnings about its secondary indexes. If you come from a relational background, you may have been surprised when you were told to create multiple tables (materialized views) instead of relying on indexes. This is because Cassandra is a distributed database, and a query that has to hit your entire cluster costs you your linear scalability.
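To make that advice concrete, here is a minimal sketch of the denormalization pattern being referred to, using the DataStax Python driver. The keyspace and table names are hypothetical, and the keyspace is assumed to already exist; the point is that queries by email get their own table instead of a secondary index on users.

```python
# Hypothetical schema: query-specific tables instead of a secondary index.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# Base table: users looked up by id.
session.execute("""
    CREATE TABLE IF NOT EXISTS my_ks.users (
        user_id uuid PRIMARY KEY,
        email   text,
        name    text
    )
""")

# Instead of CREATE INDEX ON users (email), which would fan the query out
# across the cluster, denormalize into a table keyed by the field you query on.
session.execute("""
    CREATE TABLE IF NOT EXISTS my_ks.users_by_email (
        email   text PRIMARY KEY,
        user_id uuid,
        name    text
    )
""")

# Reads now hit a single partition on a known set of replicas.
row = session.execute(
    "SELECT user_id, name FROM my_ks.users_by_email WHERE email = %s",
    ["jon@example.com"],
).one()
```

Writes have to maintain both tables, trading extra storage and write work for reads that stay on one partition.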
ASYNC PYTHON AND CASSANDRA WITH GEVENT
Introduction Building a web app relying on database calls with CPython (the standard Python distribution) is pretty easy, but it can suffer from performance problems. Python itself isn’t particularly fast, and in 2.x its concurrency story is especially weak. For starters, there’s the dreaded GIL. The GIL prevents us from taking advantage of multi-core systems, so even if we try to use threads, we’re missing out on their main performance benefit, which is parallel computation.
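As a rough illustration of the approach the post builds toward, here is a minimal gevent sketch (placeholder URLs, Python 2 style to match the post): monkey-patching the standard library so blocking I/O yields to other greenlets instead of stalling the whole process.

```python
# Monkey-patch the standard library so blocking socket I/O cooperatively
# yields to other greenlets.
from gevent import monkey
monkey.patch_all()  # must run before other modules that use sockets are imported

import gevent
import urllib2  # Python 2.x, matching the post's context

URLS = [
    "http://example.com/",  # placeholder URLs
    "http://example.org/",
]

def fetch(url):
    # Under the patched socket module this blocks only the current greenlet;
    # gevent switches to another greenlet while we wait on the network.
    body = urllib2.urlopen(url).read()
    return url, len(body)

jobs = [gevent.spawn(fetch, url) for url in URLS]
gevent.joinall(jobs, timeout=10)
print([job.value for job in jobs])
```

All the greenlets wait on the network concurrently, which is where the throughput win comes from even though no Python bytecode runs in parallel.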
CASSANDRA 3.2 OVERVIEW
The 3.0 release of Apache Cassandra marked an important milestone. One of the biggest updates was CASSANDRA-8099, the JIRA to modernize the storage engine. It was also the first release in the new Tick Tock cycle, which lands a new release of Cassandra every month. Even .x numbers (such as 3.2) are feature releases, and odd .x numbers (such as 3.1) are bug fix releases. Cassandra 3.2, released about a week ago, is the first feature release following 3.0.
FRANKDUX RPC PREVIEW #1
In my previous post, I briefly mentioned FrankDux, a new project I’m working on. FrankDux is a framework for quickly building RPC microservices in Python. This is a preview of its functionality and is subject to change. A goal of FrankDux is to provide a means of building stateless microservices that’s as easy as working with Flask or Bottle, while also offering the conveniences of Cap’n Proto, of which I’m a huge fan.
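FrankDux’s own API isn’t shown in this excerpt and is still in flux, so for reference, this is the kind of Flask baseline being compared against (a hypothetical endpoint, not FrankDux code):

```python
# The Flask baseline for "easy": a service is a module, an endpoint is a
# decorated function.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/add/<int:x>/<int:y>")
def add(x, y):
    return jsonify(result=x + y)

if __name__ == "__main__":
    app.run(port=5000)
```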
KILLRANSWERS STATUS UPDATE, AND INTRODUCING FRANK DUX
In a previous post, I introduced a new project, KillrAnswers. I had originally planned on writing KillrAnswers in Rust, leveraging the Cap’n Proto library for RPC and object serialization. I’ve had some time to think about this, and decided to switch back to Python. I also started my own RPC project, FrankDux, built on ZeroMQ, with MessagePack for object serialization instead of Cap’n Proto. Let’s get the obvious question out of the way - why not use Rust?
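As a rough sketch of those two building blocks (not FrankDux’s actual wire protocol), here is a ZeroMQ REQ/REP round trip carrying MessagePack-encoded payloads, with both ends in one process purely for illustration:

```python
# A ZeroMQ REQ/REP pair exchanging MessagePack-encoded messages.
import msgpack
import zmq

context = zmq.Context()

# Server side (would normally live in its own process).
server = context.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

# Client side.
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

# Client sends a MessagePack-encoded request over the REQ socket.
client.send(msgpack.packb({"method": "add", "args": [1, 2]}))

# Server decodes it, does the work, and replies the same way.
request = msgpack.unpackb(server.recv(), raw=False)
server.send(msgpack.packb({"result": sum(request["args"])}))

print(msgpack.unpackb(client.recv(), raw=False))  # {'result': 3}
```

ZeroMQ handles the transport and request/reply framing; MessagePack handles turning Python objects into compact bytes and back.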
USELESSDB
I have built a completely useless database. I had a couple of flights across the country this week, so I decided to test some ideas in Rust. If you’re not yet familiar with Rust, it’s a systems language focused on performance, safety, and concurrency. I’ve really enjoyed using it so far, and it feels more natural every day. I’ve been thinking about database internals a lot recently and decided to see what it would be like to implement a typed database in Rust using byte buffers.
RAMP MADE EASY - PART 2
Introduction In my previous post I introduced RAMP, a family of algorithms for managing atomicity on reads across distributed database partitions. The first algorithm discussed was RAMP-Fast, which is designed to use as few network round trips as possible at the cost of storing a significant amount of metadata. I suggest reading the first post if you aren’t familiar with RAMP, as I’ll be referring to some of its concepts throughout this post.
RAMP MADE EASY
Introduction In this post I’ll introduce RAMP, a family of algorithms for performing atomic reads across partitions in distributed databases. The original paper, Scalable Atomic Visibility with RAMP Transactions, was written by Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica of UC Berkeley and the University of Sydney. Peter has graciously reviewed this blog post to ensure its accuracy. As part of the overview, I’ll explain RAMP-Fast, the first of the three algorithms covered in the paper.
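To motivate what “atomic reads across partitions” means, here is a small illustrative sketch (not the paper’s pseudocode; the names and structures are made up) of the fractured read problem RAMP is designed to prevent:

```python
# Two "partitions", each holding one key of a multi-key write.
partition_x = {"x": 0}
partition_y = {"y": 0}

def reader():
    # Reads each partition independently; nothing ties the two reads together.
    return partition_x["x"], partition_y["y"]

# A transaction intends to set x and y to 1 atomically, but the writes land
# on different partitions at different times.
partition_x["x"] = 1
snapshot = reader()   # the reader runs between the two writes
partition_y["y"] = 1

print(snapshot)       # (1, 0) -- a fractured read: x's update is visible,
                      # y's is not. RAMP attaches enough metadata to reads to
                      # detect and repair this without blocking writers.
```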
INTRODUCING KILLRANSWERS
The last few months have been a non-stop whirlwind of traveling and speaking. I’ve been very fortunate to speak at Strata New York, give a couple of sessions at the Cassandra Summit, and even spend a few minutes on stage during the Cassandra Summit keynote (I’m at minute 22 with Luke Tillman). When I have time, I end up hacking on random projects. For example, a couple of months ago I was working on a recommendation engine for KillrVideo.