MLflow: A Machine Learning Lifecycle Platform
MLflow is an open-source platform, purpose-built to assist machine learning practitioners and teams in handling the complexities of the machine learning process. MLflow focuses on the full lifecycle for machine learning projects, ensuring that each phase is manageable, traceable, and reproducible.
The core components of MLflow are:
- Experiment Tracking: A set of APIs to log models, params, and results in ML experiments and compare them using an interactive UI.
- Model Packaging: A standard format for packaging a model and its metadata, such as dependency versions, ensuring reliable deployment and strong reproducibility.
- Model Registry: A centralized model store, set of APIs, and UI to collaboratively manage the full lifecycle of MLflow Models.
- Serving: Tools for seamless model deployment for batch and real-time scoring on platforms like Docker, Kubernetes, Azure ML, and AWS SageMaker.
- Evaluation: A suite of automated model evaluation tools, seamlessly integrated with experiment tracking to record model performance and visually compare results across multiple models.
- Observability: Tracing integrations with various GenAI libraries and a Python SDK for manual instrumentation, offering a smoother debugging experience and supporting online monitoring.
Installation

To install the MLflow Python package, run the following command:

```shell
pip install mlflow
```

Alternatively, you can install MLflow from different package hosting platforms:
- PyPI
- conda-forge
- CRAN
- Maven Central
Documentation

The official MLflow documentation can be found here.

Running Anywhere

You can run MLflow in many different environments, including local development, Amazon SageMaker, Azure ML, and Databricks. See this guide for how to set up MLflow on your environment.

Usage

Experiment Tracking (Doc)

The following example trains a simple regression model with scikit-learn, while enabling MLflow's autologging feature for experiment tracking.
```python
import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable MLflow's automatic experiment tracking for scikit-learn
mlflow.sklearn.autolog()

# Load the training dataset
db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
# MLflow triggers logging automatically upon model fitting
rf.fit(X_train, y_train)
```
Once the above code finishes, run the following command in a separate terminal and access the MLflow UI via the printed URL. An MLflow Run should be automatically created, which tracks the training dataset, hyperparameters, performance metrics, the trained model, dependencies, and more.
```shell
mlflow ui
```
Serving Models (Doc)

You can deploy the logged model to a local inference server with a one-line command using the MLflow CLI. Visit the documentation for how to deploy models to other hosting platforms.
```shell
mlflow models serve --model-uri runs:/<run-id>/model
```
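The local server started by this command listens on port 5000 by default and accepts JSON on its /invocations endpoint. A minimal sketch of building such a request payload (the feature names and values below are hypothetical, not tied to any particular model):

```python
import json

# "dataframe_split" is one of the JSON input formats accepted by the
# MLflow scoring server; the columns and data here are hypothetical
payload = json.dumps(
    {
        "dataframe_split": {
            "columns": ["age", "bmi"],
            "data": [[0.05, 0.06]],
        }
    }
)
# POST this payload to http://127.0.0.1:5000/invocations with the
# header Content-Type: application/json to receive predictions
print(payload)
```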
Evaluating Models (Doc)

The following example runs automatic evaluation for a question-answering task with several built-in metrics.
```python
import mlflow
import pandas as pd

# Evaluation set contains (1) input questions (2) model outputs (3) ground truth
df = pd.DataFrame(
    {
        "inputs": ["What is MLflow?", "What is Spark?"],
        "outputs": [
            "MLflow is an innovative fully self-driving airship powered by AI.",
            "Sparks is an American pop and rock duo formed in Los Angeles.",
        ],
        "ground_truth": [
            "MLflow is an open-source platform for managing the end-to-end machine learning (ML) "
            "lifecycle.",
            "Apache Spark is an open-source, distributed computing system designed for big data "
            "processing and analytics.",
        ],
    }
)
eval_dataset = mlflow.data.from_pandas(
    df, predictions="outputs", targets="ground_truth"
)

# Start an MLflow Run to record the evaluation results to
with mlflow.start_run(run_name="evaluate_qa"):
    # Run automatic evaluation with a set of built-in metrics for question-answering models
    results = mlflow.evaluate(
        data=eval_dataset,
        model_type="question-answering",
    )

print(results.tables["eval_results_table"])
```
Observability (Doc)

MLflow Tracing provides LLM observability for various GenAI libraries such as OpenAI, LangChain, LlamaIndex, DSPy, and AutoGen. To enable auto-tracing, call mlflow.xyz.autolog() before running your models. Refer to the documentation for customization and manual instrumentation.
```python
import mlflow

from openai import OpenAI

# Enable tracing for OpenAI
mlflow.openai.autolog()

# Query an OpenAI LLM normally
response = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi!"}],
    temperature=0.1,
)
```
Then navigate to the "Traces" tab in the MLflow UI to find the trace records of your OpenAI queries.
Community

- For help or questions about MLflow usage (e.g. "How do I do X?"), visit the docs or Stack Overflow.
- Alternatively, you can ask our AI-powered chatbot. Visit the doc website and click the "Ask AI" button at the bottom right to start chatting with the bot.
- To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.
- For release announcements and other discussions, please subscribe to our mailing list (mlflow-users@googlegroups.com) or join us on Slack.
Contributing

We happily welcome contributions to MLflow! We are also seeking contributions to items on the MLflow Roadmap. Please see our contribution guide to learn more about contributing to MLflow.
Citation

If you use MLflow in your research, please cite it using the "Cite this repository" button at the top of the GitHub repository page, which provides citation formats including APA and BibTeX.
Core Members

MLflow is currently maintained by the following core members, with significant contributions from hundreds of exceptionally talented community members.
- Ben Wilson
- Corey Zumar
- Daniel Lok
- Gabriel Fu
- Harutaka Kawamura
- Serena Ruan
- Weichen Xu
- Yuki Watanabe
- Tomu Hirata
