Emily is the developer toolkit that helps machine learning engineers and data scientists develop and deploy machine learning microservices. Put simply, Emily is a CLI tool that orchestrates everything from project initialization to final deployment in a production environment.
Creating an Emily microservice starts with selecting one of several machine learning templates, each containing a production-ready REST or gRPC API preconfigured to best-practice standards for scalability, performance, resilience, and transparency.
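To give a sense of what such a template provides, below is a minimal sketch of a REST prediction endpoint in the FastAPI style. The route names, payload shape, and service name are assumptions for illustration only, not Emily's actual template code.

```python
# Minimal sketch of a REST prediction endpoint, in the spirit of the
# templates described above. Route names, payload shape, and the scoring
# logic are illustrative assumptions, not Emily's template code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="my-emily-service")  # hypothetical service name


class PredictRequest(BaseModel):
    features: list[float]  # assumed input format


class PredictResponse(BaseModel):
    prediction: float


@app.get("/healthz")
def health() -> dict:
    # Liveness/readiness endpoint, useful for container orchestrators.
    return {"status": "ok"}


@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    # A real template would call a trained model here; this returns a
    # trivial placeholder score instead.
    score = sum(request.features) / max(len(request.features), 1)
    return PredictResponse(prediction=score)
```

Saved as `main.py`, such a service could be run locally with `uvicorn main:app --reload`.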
Emily attaches the developer’s code editor to a Docker container to ensure a fully containerized development setup. In fact, the developer writes, tests, and runs their service from inside the very Docker container that will later go into production.
Because the development environment is fully containerized, developer-setup conflicts and differences in OS and driver versions are a thing of the past: if the service runs locally, it will run in production!
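Conceptually, this workflow amounts to running the service's image with the local source tree mounted into it. The sketch below shows that idea using the Docker SDK for Python; the image tag, paths, and entrypoint are hypothetical, and Emily's actual mechanism may differ.

```python
# Conceptual sketch of containerized development: run the service image
# with the local source bind-mounted into it. The image tag, paths, and
# entrypoint are hypothetical; Emily automates this kind of setup.
import docker

client = docker.from_env()

container = client.containers.run(
    image="my-emily-service:dev",          # hypothetical image tag
    command="python -m my_service",        # hypothetical entrypoint
    volumes={
        "/home/dev/my-emily-service": {    # local project source
            "bind": "/workspace",          # mount point inside the container
            "mode": "rw",
        }
    },
    working_dir="/workspace",
    ports={"8080/tcp": 8080},              # expose the service port
    detach=True,
)
print(container.short_id)
```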
Getting to production is normally a hassle, but Emily takes care of it: the Emily CLI provides simple commands for deploying Emily microservices to arbitrary deployment targets in a safe, consistent, and reproducible manner.
Emily provides automated deployments to any SSH-accessible, self-hosted VM, as well as to Kubernetes clusters (with explicit support for Azure AKS and Azure Container Registry, if desired).
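For a sense of what the deployment step automates, the sketch below creates a Kubernetes Deployment for a containerized service using the official Kubernetes Python client. The image, names, and namespace are assumptions; the Emily CLI performs the equivalent work without requiring any of this code.

```python
# Rough sketch of the kind of Kubernetes deployment the Emily CLI
# automates. Image, names, and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig (e.g. an AKS context)

labels = {"app": "my-emily-service"}  # hypothetical service name

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-emily-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-emily-service",
                        # e.g. an image pushed to an Azure Container Registry
                        image="myregistry.azurecr.io/my-emily-service:1.0.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```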