Uber's Michelangelo vs. Netflix's Metaflow

Michelangelo

Pain point

Without Michelangelo, each team at Uber that uses ML (that's all of them - every interaction with the Rides or Eats app involves ML) would need to build its own data pipelines, feature stores, training clusters, model storage, etc. It would take each team copious amounts of time to maintain and improve their systems, and common patterns/best practices would be hard to learn. In addition, the highest-priority use cases (business critical, e.g. rider/driver matching) would themselves need to ensure they have enough compute/storage/engineering resources to operate (outages, scale peaks, etc.), which would result in organizational complexity and constant prioritization battles between managers/directors/etc.

Solution

Michelangelo provides a single platform that makes the most common and most business-critical ML use cases simple and intuitive for builders to use, while still allowing self-serve extensibi...

Docker on RPi

Here are a few important commands I've learned about Docker so far. I've been working on getting Docker running on the RPi, which now has support for it. It's a little different on the RPi because RPi boards use the ARM architecture rather than the Intel x86 architecture that most other CPUs use. That means that although Docker containers are supposed to be able to run on any machine with Docker installed, it doesn't hold when an image built for Intel is run on ARM, or vice versa. So you have to look for the right type of images to build on top of. Anything on hub.docker.com that includes 'rpi' or 'arm' in its name should usually run on the RPi. For example: https://hub.docker.com/r/joaquindlz/rpi-docker-lamp/
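
A quick sanity check for this (a minimal sketch, assuming a Pi 2/3-class board and the arm32v7/alpine image from Docker Hub - that image is just my pick here, not anything special) is to run a container and print its architecture:

docker run --rm arm32v7/alpine uname -m

On the Pi that should print something like armv7l. Trying the same with an x86-only image would instead fail with an 'exec format error'.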

docker run - create and start a new container from an image
docker start - start an existing (stopped) container
docker stop - stop a running container
docker ps - list containers

Useful flags (see the example below):
-t tag an image when used with docker build (with docker run, -t allocates a terminal)
--name assign a name to the container (not the image)
-d run the container detached, as a daemon in the background
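
A minimal end-to-end sketch of those commands and flags (assuming the arm32v7/alpine image again, and 'demo' as a made-up container name):

docker run -d --name demo arm32v7/alpine sleep 300
docker ps
docker stop demo
docker start demo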

list running containers:
# docker ps

list all containers:
# docker ps -a

list all images:
# docker images

stop and remove all docker containers:
# docker stop $(docker ps -a -q)
# docker rm $(docker ps -a -q)

remove all docker images:
# docker rmi $(docker images -q)

Each container has a main process, and will die when that process ends. But you can run loads of containers at the same time.
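
To see that in action (another small sketch with arm32v7/alpine), run a container whose main process exits straight away, then list all containers - the new one shows up with an Exited status:

docker run --name hello arm32v7/alpine echo "hello"
docker ps -a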

backup a mysql db called 'wordpress' which is on the localhost:
mysqldump --add-drop-table -u <user> -p wordpress > blog.bak.sql

add current user to www-data group:
sudo gpasswd -a ${USER} www-data

run with privileged access to devices/memory (for GPIO stuff):
docker run --privileged ....
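
A slightly fuller sketch, assuming a hypothetical image called rpi-gpio-app. On Raspbian the GPIO registers are exposed through /dev/gpiomem, so passing just that device is a narrower alternative to full --privileged mode:

docker run --privileged --name gpio rpi-gpio-app
or, more narrowly:
docker run --device /dev/gpiomem --name gpio rpi-gpio-app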

run a shell within a running container:
docker exec -it <container> bash

stale apt package lists cached in image layers can break apt-get installs during a build; fix by:
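
The usual fix is to run apt-get update and the install in the same RUN layer of the Dockerfile (so a stale cached update can't be reused on its own), or to rebuild without the layer cache - <package> and <image> below are placeholders:

RUN apt-get update && apt-get install -y <package>
docker build --no-cache -t <image> .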


commit changes in a container to an image: https://www.liquidweb.com/kb/how-to-commit-changes-to-docker/
docker commit <container> <new_image_name>

backup a mysql db to a .sql file:
mysqldump -u <user> -p <database> > backup.sql

restore a mysql db from a .sql file:
echo "CREATE DATABASE <database>;" | mysql -u <user> -p
mysql -u <user> -p <database> < backup.sql

remove sensitive data (a file and its history) from a git repo's commit history:
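
One common approach (a sketch assuming the git filter-repo tool is installed; BFG Repo-Cleaner is another option):

git filter-repo --invert-paths --path path/to/secret-file

Then force-push the rewritten history to the remote. Note that filter-repo strips the origin remote as a safety measure, so it has to be re-added before pushing.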

Resetting a WordPress password through the DB (the "Through MySQL/MariaDB Command Line" method in the WordPress docs):
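
A minimal sketch of that approach, assuming the default wp_ table prefix and 'admin' as the login being reset:

mysql -u <user> -p wordpress
UPDATE wp_users SET user_pass = MD5('newpassword') WHERE user_login = 'admin';

WordPress accepts the MD5 hash and re-hashes it with its own scheme on the next login.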

copy a file from a container to the host:
docker cp <container>:/path/to/file /path/to/copy/to



