
Uber's Michelangelo vs. Netflix's Metaflow

Michelangelo

Pain point: Without Michelangelo, each team at Uber that uses ML (that's all of them - every interaction with the rides or Eats app involves ML) would need to build its own data pipelines, feature stores, training clusters, model storage, and so on. It would take each team copious amounts of time to maintain and improve those systems, and common patterns and best practices would be hard to learn. In addition, the highest-priority use cases (business-critical ones, e.g. rider/driver matching) would each need to ensure they have enough compute, storage, and engineering resources to operate through outages, traffic peaks, etc., which would result in organizational complexity and constant prioritization battles between managers, directors, and so on.

Solution: Michelangelo provides a single platform that makes the most common and most business-critical ML use cases simple and intuitive for builders to use, while still allowing self-serve extensibi...
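
To make the pain point concrete, here is a minimal, hypothetical sketch in Python of the kind of surface a centralized platform covers - shared feature storage, managed training, and a model registry. None of this is Michelangelo's actual API; the class, method, and feature names are invented purely for illustration of what every team would otherwise have to build on its own.

# Hypothetical sketch of a shared ML platform surface (not Michelangelo's real API).
from dataclasses import dataclass, field


@dataclass
class MLPlatform:
    """Illustrative stand-in for a centralized ML platform."""
    feature_store: dict = field(default_factory=dict)   # shared, reusable features
    model_registry: dict = field(default_factory=dict)  # shared model storage

    def register_features(self, entity: str, features: dict) -> None:
        # One pipeline writes features; every team can reuse them.
        self.feature_store.setdefault(entity, {}).update(features)

    def train(self, name: str, train_fn, entity: str):
        # Managed training: the platform owns data access and compute,
        # the team only supplies its training function.
        model = train_fn(self.feature_store.get(entity, {}))
        self.model_registry[name] = model
        return model

    def predict(self, name: str, entity: str):
        # Managed serving against shared features and stored models.
        return self.model_registry[name](self.feature_store.get(entity, {}))


platform = MLPlatform()
platform.register_features("rider:42", {"trips_30d": 18, "avg_rating": 4.9})

def toy_train(features):
    # Toy "training" step: derive a constant prediction from shared features.
    mean_rating = features.get("avg_rating", 0.0)
    return lambda feats: mean_rating  # the "model" is just a callable

platform.train("rating_model", toy_train, "rider:42")
print(platform.predict("rating_model", "rider:42"))  # prints 4.9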

CAP Theorem Explained

When building large-scale software systems today, you have to make tradeoffs.  You can't have an ACID compliant data store with infinite storage/throughput/connections that's always available in any part of the world with super low latency where clients can read/write concurrently without any risk of inconsistencies that's free.  If you could, the problem would be solved and our industry could go build spaceships at SpaceX or retire and make sourdough every Sunday. Instead, we need to make tradeoffs.  Does our product/system need ACID semantics?  Is latency more important?  Can we allow certain types of data inconsistencies for a short time in favor of availability?  How much are we able to spend so that we don't have to sacrifice as much? These are some questions that everyone building a large-scale software system has to grapple with in the design phase.  A great way to begin your thinking is using CAP Theorem - or at least what it's slowly be...