Open-source project name: h2oai/mli-resources
Open-source project URL: https://github.com/h2oai/mli-resources
Open-source language: Jupyter Notebook 99.5%

Machine Learning Interpretability (MLI)

Machine learning algorithms create potentially more accurate models than linear models, but any increase in accuracy over more traditional, better-understood, and more easily explainable techniques is of little practical value to those who must explain their models to regulators or customers. For many decades, the models created by machine learning algorithms were generally taken to be black boxes. However, a recent flurry of research has introduced credible techniques for interpreting complex, machine-learned models. Materials presented here illustrate applications or adaptations of these techniques for practicing data scientists.

Want to contribute your own content? Just make a pull request. Want to use the content in this repo? Just cite the H2O.ai machine learning interpretability team or the original author(s) as appropriate.

Contents
Practical MLI examples (a Dockerfile is provided that will construct a container with all necessary dependencies to run the examples here)
Installation of Examples

Dockerfile

A Dockerfile is provided to build a Docker container with all necessary packages and dependencies. This is the easiest way to use these examples if you are on Mac OS X, *nix, or Windows 10. To do so, build and run the container from a terminal, as sketched below.
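A hedged sketch of the build-and-run steps (the authoritative commands are in the repository itself); it assumes the Dockerfile sits at the repository root, the image tag mli is an arbitrary name chosen here, and the container serves Jupyter on its default port 8888:

```bash
# Clone the repository and move into it
git clone https://github.com/h2oai/mli-resources.git
cd mli-resources

# Build an image from the provided Dockerfile (the tag name is arbitrary)
docker build -t mli .

# Run the container and publish Jupyter's default port to the host
docker run -i -t -p 8888:8888 mli
```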
Manual Install
Anaconda Python, Java, Git, and GraphViz must be added to your system path. From a terminal:
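A minimal, illustrative sketch of the manual setup, assuming a local clone and an existing Anaconda environment; the package list below is a guess at typical notebook dependencies, not the repository's official requirements:

```bash
# Clone the repository and move into it
git clone https://github.com/h2oai/mli-resources.git
cd mli-resources

# Install typical notebook dependencies into the active environment
# (illustrative package set only; consult the repository for the authoritative list)
pip install numpy pandas scikit-learn xgboost h2o graphviz matplotlib

# Launch Jupyter and open the example notebooks
jupyter notebook
```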
Additional Code Examples

The notebooks in this repo have been revamped and refined many times. Other versions with different, and potentially interesting, details are available at these locations:

Testing Explanations

One way to test generated explanations for accuracy is with simulated data with known characteristics. For instance, models trained on totally random data, with no relationship between a number of input variables and a prediction target, should not give strong weight to any input variable nor generate compelling local explanations or reason codes. Conversely, you can use simulated data with a known signal-generating function to test that explanations accurately represent that known function. Detailed examples of testing explanations with simulated data are available here. A summary of these results is available here.
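As a hedged illustration of the random-data sanity check (not one of the repository's notebooks), the scikit-learn sketch below fits a random forest to pure noise and verifies that holdout accuracy and permutation importances all sit near zero:

```python
# Sanity check: a model fit to pure noise should not produce strong importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)

# Simulated data with NO relationship between the inputs and the target
X = rng.normal(size=(1000, 10))
y = rng.normal(size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)

# On held-out data, R^2 should be near (or below) zero and every permutation
# importance should hover around zero; a large, stable importance here would
# point to a problem in the explanation pipeline, not a real signal.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=42)
print("holdout R^2:", model.score(X_test, y_test))
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"x{i}: {mean:+.4f} +/- {std:.4f}")
```

The converse test works the same way: simulate data from a known signal-generating function and confirm that the generated explanations recover the variables and directions that function actually uses.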
Webinars/Videos

Booklets
Conference Presentations
Miscellaneous Resources
General References