Showing posts from October, 2019

ChatGPT - How Long Till They Realize I’m a Robot?

I tried it first on December 2nd... and slowly the meaning of it started to sink in. It's January 1st, and as the new year begins, my future has never felt so hazy. It helps me write code. At my new company I'm writing golang, which is new for me, and one day on a whim I think "hmmm, maybe ChatGPT will give me some ideas about the library I need to use." Lo and behold, it knew the library. It wrote example code. It explained each section in just enough detail. I'm excited... It assists my users. I got a question about Dockerfiles in my team's on-call channel. "Hmmm, I don't know the answer to this either"... ChatGPT did. It knew the commands to run. It knew the details of how it worked. It explained it better and faster than I could have. Now I'm nervous... It writes my code for me. Now I'm hearing how great GitHub Copilot is - and it's powered by OpenAI's models too... OK, I guess I should give it a shot. I install it, and within minutes it'

How to Frame Metric Collection

Depending on the type of software development you’re doing, it can be tough to figure out which metrics you need to collect. An iterative process (aka fancy words for trial and error) will eventually get you where you need to be, but along the way the MTTR of your outages will suffer and you might lose users or revenue. There’s a simple way to think about metrics that will help you build an intuition for what to monitor and what to measure. It’s this: Measure the business, measure the software. But don’t conflate the two. Measuring the business is critical to being able to notify and escalate to the right people when there is an outage. Business metrics include things that impact your bottom line - user sign-ins per minute, items added to the cart per second, items sold per day. Anything that directly and immediately affects customers is a business metric. Software metrics are signals about how your software is running. There are 3 categories of software metrics: OS me
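The framing above can be sketched in a few lines of Go: tag each metric as business or software, and let only business metrics page a human. All the names here are illustrative assumptions, not from any real metrics library.

```go
package main

import "fmt"

type Kind string

const (
	Business Kind = "business" // directly and immediately affects customers
	Software Kind = "software" // signals about how the software is running
)

type Metric struct {
	Name string
	Kind Kind
}

// pagesOnCall reflects the framing: outages are detected and escalated on
// business metrics; software metrics are for diagnosing what went wrong.
func pagesOnCall(m Metric) bool {
	return m.Kind == Business
}

func main() {
	metrics := []Metric{
		{"user_signins_per_minute", Business},
		{"items_sold_per_day", Business},
		{"cpu_utilization_percent", Software},
		{"gc_pause_ms", Software},
	}
	for _, m := range metrics {
		fmt.Printf("%-8s pages=%v %s\n", m.Kind, pagesOnCall(m), m.Name)
	}
}
```

The point of the split is operational: a drop in sign-ins per minute wakes someone up; CPU utilization only tells them where to look once they're awake.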

Cassandra’s Data Model

I did a previous post on Cassandra, but that one focused on its fault tolerance, network architecture and scalability. This one focuses on the structure of data stored in Cassandra. Cassandra is a wide-columnar data store. Logically, you can think of data stored in Cassandra like a compound index in a conventional SQL data store. If you know the row key and the column names you want, you can get the data you need. If you are OK searching through ALL the columns, then you just need the row key. And if you don’t have either but your data is stored somewhere in a Cassandra table, you’re in for an expensive full table scan. Unlike SQL data stores, however, you can have an essentially unlimited number of columns, and each row can have whatever columns it wants. That’s why it’s called a wide-columnar data store - you could have millions of columns if you wanted to! Within each row, the columns are stored in sorted order, so finding a specific column can be d
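The wide-row idea can be sketched as a toy in-memory model in Go (purely illustrative - not the gocql driver or Cassandra's actual storage format): each row keeps its column names in sorted order, so looking up one column is a cheap binary search rather than a scan.

```go
package main

import (
	"fmt"
	"sort"
)

// WideRow is a toy model of one Cassandra row: an arbitrary set of columns,
// with names kept in sorted order (as Cassandra stores them on disk).
type WideRow struct {
	names  []string // column names, always sorted
	values map[string]string
}

// Put inserts or updates a column, keeping names sorted.
func (r *WideRow) Put(col, val string) {
	if _, ok := r.values[col]; !ok {
		i := sort.SearchStrings(r.names, col)
		r.names = append(r.names, "")
		copy(r.names[i+1:], r.names[i:])
		r.names[i] = col
	}
	r.values[col] = val
}

// Get binary-searches the sorted names - this is why finding a specific
// column within a row is fast even when the row is very wide.
func (r *WideRow) Get(col string) (string, bool) {
	i := sort.SearchStrings(r.names, col)
	if i < len(r.names) && r.names[i] == col {
		return r.values[col], true
	}
	return "", false
}

func main() {
	row := &WideRow{values: map[string]string{}}
	row.Put("email", "a@example.com")
	row.Put("age", "30")
	v, _ := row.Get("email")
	fmt.Println(v) // a@example.com
}
```

Note that each row has its own independent column set - a second WideRow could hold entirely different columns, which is exactly what "each row can have whatever columns it wants" means.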

The case for caching

Though the concept of caching seems quite simple to most engineers, there is actually a lot of intriguing nuance to it. The choices for caching and the reasons to use it or not are varied, but let’s try to simplify. The first thing to ask is ‘what for?’ Caching is useful when you want to decrease latency and/or decrease load on components of your system. You can use it in places where there is a separate, durable source of truth and it’s not terrible if the data in the cache expires or is lost some other way. Caches are not good tools for request buffering or for source-of-truth data - data will be lost from time to time. The second thing to ask is ‘where should we put it?’ There are essentially 4 options: the end user’s client/browser, a CDN, a reverse proxy in front of your own web servers, or on your own web servers. If the data you’re caching is specific to the user and not too large, it can be put into the client/browser - this is the most effective approach for latency and for reducin

Cassandra: A Case Study

Cassandra was developed at Facebook, and some would say it's an intersection between Amazon's Dynamo and Google's BigTable. It's an open-source, distributed, active-active, NoSQL, column-oriented data store with tuning capabilities that optimize for write-heavy workloads. It uses quorum reads and writes to balance consistency with availability and automatically manages replication of data - if a server fails, there is no availability loss, assuming you've configured the right number of replicas. And when a new server is brought in to replace it, all Cassandra needs to know is the IP of the server it's replacing, and it'll manage getting the new server up to speed. Since Cassandra uses append-only writes, one of the tradeoffs is that it doesn't allow for fast deletions - deletions can actually increase the size of the data until compaction time. Cassandra is different from what we were taught active-active databases look like. Most active-active setu
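The quorum arithmetic is worth making concrete: with N replicas, a read that contacts R replicas is guaranteed to overlap with a write that was acknowledged by W replicas whenever R + W > N, which is how quorum reads and writes balance consistency with availability. A minimal Go sketch:

```go
package main

import "fmt"

// consistent reports whether a read of size r is guaranteed to see the
// latest write of size w across n replicas: the two sets must overlap in
// at least one replica, i.e. r + w > n. Purely illustrative arithmetic.
func consistent(n, r, w int) bool {
	return r+w > n
}

func main() {
	n := 3 // a common replication factor
	fmt.Println(consistent(n, 2, 2)) // true: QUORUM reads + QUORUM writes
	fmt.Println(consistent(n, 1, 1)) // false: ONE + ONE can read stale data
}
```

This is also why the tuning knob matters for write-heavy workloads: you can drop W to 1 for faster writes, as long as you accept either raising R or reading stale data.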