
Repository name: gsingers/search_with_machine_learning_course

Repository URL: https://github.com/gsingers/search_with_machine_learning_course

Primary language: Python 93.4%

Welcome to Search with Machine Learning

Search with Machine Learning is a four-week class taught by Grant Ingersoll and Daniel Tunkelang. It focuses on helping students quickly get up to speed on search best practices by first teaching the basics of search and then extending those basics with machine learning.

Students will learn indexing, querying, aggregations and text analysis, as well as how to use machine learning for ranking, content classification and query understanding.

The class is a hands-on, project-driven course where students work with real data and the OpenSearch/Elasticsearch ecosystem along with libraries like FastText, XGBoost and OpenSearch Learning to Rank.
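
To give a concrete feel for that ecosystem, here is a minimal sketch of running a query against OpenSearch from Python with the opensearch-py client. It assumes a local instance on port 9200 secured with the default admin/admin credentials; the bbuy_products index and the name field are illustrative placeholders, not something this repository guarantees.

     # Minimal sketch: a basic match query from Python with opensearch-py.
     # Assumes an OpenSearch instance on localhost:9200 secured with the
     # default admin/admin credentials and a self-signed certificate.
     from opensearchpy import OpenSearch

     client = OpenSearch(
         hosts=[{"host": "localhost", "port": 9200}],
         http_auth=("admin", "admin"),
         use_ssl=True,
         verify_certs=False,
         ssl_show_warn=False,
     )

     # "bbuy_products" and the "name" field are illustrative placeholders;
     # use whatever index and mapping your weekly project creates.
     response = client.search(
         index="bbuy_products",
         body={"query": {"match": {"name": "ipad"}}},
     )
     for hit in response["hits"]["hits"]:
         print(hit["_score"], hit["_source"].get("name"))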

Class code layout (e.g. where the projects are)

For our class, we have four weekly projects. Each project is a standalone Python application that interacts with an OpenSearch server (and perhaps other services).

You will find these four projects in the directories below, organized in the following way:

  • Week 1:
    • week1 -- The unfinished template for the week's project, annotated with instructions.
  • Week 2:
    • week2 -- The unfinished template for the week's project, annotated with instructions.
  • Weeks 3 and 4: you get the picture

Our instructor-annotated results for each project will be provided during the class. Please note that these represent our way of doing the assignments and may differ from your results, as there is often more than one way of doing things in search.

You will also find several supporting directories and files for Logstash, Docker and Gitpod.

Prerequisites

  1. For this class, you will need a Kaggle account and a Kaggle API token.
  2. No prior search knowledge is required, but you should be able to code in Python or Java (all examples are in Python).
  3. You will need a Gitpod account.

Working in Gitpod (Officially Supported)

NOTE: The Gitpod free tier comes with 50 hours of use per month. We expect the coursework to take less time than that, but you may wish to conserve your hours by stopping your workspace when you are done with it. Gitpod will eventually time out an idle workspace (don't worry, your work will be saved), but that can take a while.

The following steps must be done each time you create a new Gitpod workspace (unfortunately, we can't automate this):

  1. Fork this repository.

  2. Launch a new Gitpod workspace based on this repository. This will automatically start OpenSearch and OpenSearch Dashboards.

    1. Note: it can take a few minutes for OpenSearch and the dashboards to launch.
  3. You should now have a running OpenSearch instance (port 9200) and a running OpenSearch Dashboards instance (port 5601). A sketch for checking this from Python follows this list.

  4. Log in to the dashboards at https://5601-<$GITPOD_URL>/ with the default username admin and password admin. Change your password to something you will remember, as these are public instances. The login page should pop up automatically in a new tab, unless you have blocked popups.

     $GITPOD_URL is a placeholder for your ephemeral Gitpod host name, e.g. silver-grasshopper-8czadqyn.ws-us25.gitpod.io     
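
As a quick sanity check of step 3, the following sketch confirms from inside the workspace that OpenSearch is answering on port 9200. It assumes the default admin/admin credentials and the self-signed certificate the setup ships with, hence verify=False.

     # Minimal sketch: confirm OpenSearch is answering on port 9200 from
     # inside the workspace. Assumes the default admin/admin credentials
     # and a self-signed certificate, hence verify=False.
     import urllib3
     import requests

     urllib3.disable_warnings()  # silence the self-signed-certificate warning

     resp = requests.get(
         "https://localhost:9200",
         auth=("admin", "admin"),
         verify=False,
     )
     print(resp.status_code)                  # expect 200
     print(resp.json()["version"]["number"])  # the OpenSearch version string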
    

Downloading the Best Buy Dataset

  1. Run the install Kaggle API token script and follow the instructions:

     ./install-kaggle-token.sh
    
  2. Accept all of the Kaggle competition rules, then run the download data script (a sketch for inspecting the downloaded files follows):

     ./download-data.sh
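
Once the download finishes, a quick way to confirm what you have is to list the dataset directory. This sketch assumes the data landed under /workspace/datasets, the location the rest of the course scripts expect (see the note under "Working locally" below); adjust the path if you put it elsewhere.

     # Minimal sketch: list what the download script fetched.
     # Assumes the data landed under /workspace/datasets; adjust if needed.
     from pathlib import Path

     data_dir = Path("/workspace/datasets")
     for path in sorted(data_dir.rglob("*")):
         if path.is_file():
             size_mb = path.stat().st_size / (1024 * 1024)
             print(f"{size_mb:8.1f} MB  {path.relative_to(data_dir)}")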
    

Exploring the OpenSearch Sample Dashboards and Data

  1. Log in to OpenSearch and point your browser at https://5601-<$GITPOD_URL>/app/opensearch_dashboards_overview#/
  2. Click the "Add sample data" link
  3. Click the "Add Data" link for any of the 3 projects listed. In the class, we chose the "Sample flight data", but any of the three are fine for exploration.

Running the Weekly Project

At the command line, do the following steps to run the example.

  1. Activate your Python virtual environment. We use pyenv (https://github.com/pyenv/pyenv) and pyenv-virtualenv (https://github.com/pyenv/pyenv-virtualenv).
    1. pyenv activate search_with_ml -- Activate the Virtualenv.
  2. Run Flask:
    1. export FLASK_ENV=development
    2. IMPORTANT Set the Flask App Environment Variable: export FLASK_APP=week2
    3. For week2, you may also choose to set export PRIOR_CLICKS_LOC=/workspace/ltr_output/train.csv after running the LTR end-to-end script.
    4. flask run --port 3000 (The default port of 5000 is already in use)
    5. Open the Flask app at https://3000-<$GITPOD_URL>/
  3. Or run ipython for interactive exploration (see the sketch below)
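
For example, a first ipython session might simply list which indices the cluster holds. This is a minimal sketch assuming the same local connection settings as the Gitpod setup above.

     # Minimal sketch of ad-hoc exploration in ipython: list the indices
     # the cluster currently holds. Assumes the same local connection
     # settings as the Gitpod setup above.
     from opensearchpy import OpenSearch

     client = OpenSearch(
         hosts=[{"host": "localhost", "port": 9200}],
         http_auth=("admin", "admin"),
         use_ssl=True,
         verify_certs=False,
         ssl_show_warn=False,
     )

     for entry in client.cat.indices(format="json"):
         print(entry["index"], entry["docs.count"])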

Working locally (Not supported, but may work for you. YMMV)

To run locally, you will need a few things:

  1. Pyenv and Pyenv-Virtualenv with Python 3.9.7 installed
  2. Docker
  3. A Git client

Note: these steps have only been tested on a Mac running macOS 12.2.1. YMMV. Much of what you will need to do will be similar to what's in .gitpod.Dockerfile.

  1. Install GraphViz
  2. pyenv install 3.9.7
  3. pip install all of the libraries you see in .gitpod.Dockerfile
  4. Set up your weekly Python environment per "Running the Weekly Project" above.
  5. Install Fasttext
  6. Run OpenSearch:
    1. cd docker
    2. docker-compose up
  7. Note: most of the scripts and projects assume the data is in /workspace/datasets, but they have overrides to specify your own directories (an illustrative sketch of the override pattern follows this list). You will need to download the data and plan accordingly.
  8. Do your work per the Weekly Project
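
As an illustration of the dataset-directory override mentioned in step 7, a course-style script usually exposes the data location as a command-line argument defaulting to /workspace/datasets. The flag name below is hypothetical; check each script's --help for its real options.

     # Illustrative sketch of a dataset-directory override; the actual
     # course scripts define their own flags, so check their --help.
     import argparse

     parser = argparse.ArgumentParser(description="Example dataset-directory override")
     parser.add_argument(
         "--input",                      # hypothetical flag name
         default="/workspace/datasets",  # the location the course assumes
         help="Directory containing the downloaded Best Buy data",
     )
     args = parser.parse_args()
     print(f"Reading data from {args.input}")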


