
Open-source project name:

data-science-on-aws/data-science-on-aws

Open-source project URL:

https://github.com/data-science-on-aws/data-science-on-aws

Open-source language:

Jupyter Notebook 86.8%

Open-source project introduction:

Data Science on AWS Workshop

Based on this O'Reilly book:

Data Science on AWS

Workshop Description

In this hands-on workshop, we will build an end-to-end AI/ML pipeline for natural language processing with Amazon SageMaker. We will train and tune a text classifier to classify text-based product reviews using the state-of-the-art BERT model for language representation.

To build our BERT-based NLP model, we use the Amazon Customer Reviews Dataset, which contains more than 150 million customer reviews from Amazon.com spanning the 20-year period from 1995 to 2015. In particular, we train a classifier to predict the star_rating (1 is bad, 5 is good) from the review_body (free-form review text).
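The workshop builds this classifier with BERT on SageMaker; purely to illustrate the prediction task itself, here is a minimal bag-of-words baseline using scikit-learn. This is not the workshop's model, and the example reviews below are invented for the sketch:

```python
# Toy illustration of the star_rating-from-review_body prediction task.
# This is NOT the workshop's BERT model -- just a tiny TF-IDF + logistic
# regression baseline on invented example reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented (review_body, star_rating) pairs standing in for the real dataset.
reviews = [
    "terrible product, broke after one day",
    "awful experience, stopped working immediately",
    "works great, very happy with this purchase",
    "excellent quality, five stars",
]
star_ratings = [1, 1, 5, 5]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, star_ratings)

print(model.predict(["excellent quality, works great"])[0])
```

The workshop replaces this bag-of-words representation with BERT embeddings, which capture word order and context that TF-IDF discards.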

You will get hands-on with advanced model training and deployment techniques such as hyper-parameter tuning, A/B testing, and auto-scaling. You will also set up a real-time streaming analytics and data science pipeline to perform window-based aggregations and anomaly detection.
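The streaming pipeline itself is built later with Amazon Kinesis; the core idea of window-based aggregation with simple anomaly detection can be sketched in plain Python. The window size and z-score threshold below are arbitrary choices for the example:

```python
# Sketch of window-based aggregation with a simple z-score anomaly check.
# The real workshop does this with Amazon Kinesis; this is an in-memory
# illustration only. Window size and threshold are arbitrary.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window_size=5, z_threshold=3.0):
    """Yield (value, is_anomaly) using a sliding window of recent values."""
    window = deque(maxlen=window_size)
    for value in stream:
        if len(window) >= 2:
            mu, sigma = mean(window), stdev(window)
            is_anomaly = sigma > 0 and abs(value - mu) / sigma > z_threshold
        else:
            is_anomaly = False  # not enough history yet
        yield value, is_anomaly
        window.append(value)

values = [10, 11, 9, 10, 11, 10, 50, 10, 9]
flags = [v for v, a in detect_anomalies(values) if a]
print(flags)  # the spike at 50 stands out
```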

Attendees will learn how to do the following:

  • Ingest data into S3 using Amazon Athena and the Parquet data format
  • Visualize data with pandas, matplotlib on SageMaker notebooks
  • Perform feature engineering on a raw dataset using Scikit-Learn and SageMaker Processing Jobs
  • Store and share features using SageMaker Feature Store
  • Train and evaluate a custom BERT model using TensorFlow, Keras, and SageMaker Training Jobs
  • Evaluate the model using SageMaker Processing Jobs
  • Track model artifacts using Amazon SageMaker ML Lineage Tracking
  • Register and version models using SageMaker Model Registry
  • Deploy a model to a REST endpoint using SageMaker Hosting and SageMaker Endpoints
  • Automate ML workflow steps by building end-to-end model pipelines
  • Perform automated machine learning (AutoML) to find the best model from just your dataset, with minimal code
  • Find the best hyper-parameters for your custom model using SageMaker Hyper-parameter Tuning Jobs
  • Deploy multiple model variants into a live, production A/B test to compare online performance, live-shift prediction traffic, and autoscale the winning variant using SageMaker Hosting and SageMaker Endpoints
  • Set up a streaming analytics and continuous machine learning application using Amazon Kinesis and SageMaker
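The A/B deployment step above uses SageMaker Endpoints with weighted production variants; the underlying idea of splitting live traffic between two model variants can be sketched in plain Python. The variant names and weights here are invented for the example:

```python
# Sketch of weighted traffic splitting between two model variants,
# mimicking what SageMaker production variants do server-side.
# Variant names and weights are invented for illustration.
import random

def route_request(rng, variants):
    """Pick a variant name according to its traffic weight."""
    names = list(variants)
    weights = [variants[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Start with a cautious 90/10 split, as in a typical A/B rollout.
traffic = {"variant-a": 0.9, "variant-b": 0.1}
rng = random.Random(42)  # seeded for reproducibility

counts = {"variant-a": 0, "variant-b": 0}
for _ in range(10_000):
    counts[route_request(rng, traffic)] += 1
print(counts)  # roughly a 9:1 split
```

In the workshop, shifting traffic to the winning variant is a matter of updating the variant weights on the endpoint rather than redeploying.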

Workshop Instructions

Note: This workshop will create an ephemeral AWS account for each attendee. The ephemeral account is not accessible after the workshop ends. You can, of course, clone this GitHub repo and reproduce the entire workshop in your own AWS account.

0. Logout of All AWS Consoles Across All Browser Tabs

If you do not log out of existing AWS Console sessions, things will not work properly.

(Screenshot: AWS Account Logout)

Please log out of all AWS Console sessions in all browser tabs.

1. Log in to the Workshop Portal (aka Event Engine)

(Screenshots: Event Engine Terms and Conditions, Event Engine Login, Event Engine Dashboard)

2. Log in to the AWS Console

(Screenshot: Event Engine AWS Console)

Accept the defaults and click Open AWS Console. This opens the AWS Console in a new browser tab.

If you see this message, you need to log out of any previously used AWS accounts.

(Screenshot: AWS Account Logout)

Please log out of all AWS Console sessions in all browser tabs.

Double-check that your account name is similar to TeamRole/MasterKey as follows:

(Screenshot: IAM Role)

If not, please log out of your AWS Console in all browser tabs and re-run the steps above!

3. Launch SageMaker Studio

Open the AWS Management Console

(Screenshot: SageMaker in the search box)

In the AWS Console search bar, type SageMaker and select Amazon SageMaker to open the service console.

(Screenshots: SageMaker Studio, Open SageMaker Studio, Loading Studio)

4. Launch a New Terminal within Studio

Click File > New > Terminal to launch a terminal in your Jupyter instance.

(Screenshot: Terminal in Studio)

5. Clone this GitHub Repo in the Terminal

Within the Terminal, run the following:

cd ~ && git clone -b quickstart https://github.com/data-science-on-aws/data-science-on-aws

If you see an error like the following, just re-run the command until it succeeds:

fatal: Unable to create '.git/index.lock': File exists.

Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.

Note: Despite the "fatal:" prefix, the error above is harmless. Just re-run the command until it succeeds.

6. Start the Workshop!

Navigate to the first directory and start the workshop!

You may need to refresh your browser if you don't see the directories.
