Contributors: | Hall, Patrick, Gill, Navdeep |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Sebastopol, CA : O'Reilly Media, [2019] |
Edition: | Second edition. |
Subjects: | Machine learning, Artificial intelligence |
Links: | https://learning.oreilly.com/library/view/-/9781098115487/?ar |
Summary: | Innovation and competition are driving analysts and data scientists toward increasingly complex predictive modeling and machine learning algorithms. This complexity makes these models accurate, but it can also make their predictions difficult to understand. When accuracy outpaces interpretability, human trust suffers, affecting business adoption, model validation efforts, and regulatory oversight. In the updated edition of this ebook, Patrick Hall and Navdeep Gill from H2O.ai introduce the idea of machine learning interpretability and examine a set of machine learning techniques, algorithms, and models to help data scientists improve the accuracy of their predictive models while maintaining a high degree of interpretability. While some industries, such as banking, insurance, and healthcare, require model transparency, machine learning practitioners in almost any vertical will likely benefit from incorporating the discussed interpretable models and the debugging, explanation, and fairness approaches into their workflows. This second edition discusses new, exact model explanation techniques and de-emphasizes the trade-off between accuracy and interpretability. It also includes up-to-date information on cutting-edge interpretability techniques and new figures to illustrate the concepts of trust and understanding in machine learning models. Learn how machine learning and predictive modeling are applied in practice. Understand social and commercial motivations for machine learning interpretability, fairness, accountability, and transparency. Get a definition of interpretability and learn about the groups leading interpretability research. Examine a taxonomy for classifying and describing interpretable machine learning approaches. Gain familiarity with new and more traditional interpretable modeling approaches. See numerous techniques for understanding and explaining models and predictions. Read about methods to debug prediction errors, sociological bias, and security vulnerabilities in predictive models. Get a feel for the techniques in action with code examples. |
Description: | Includes bibliographical references. - Online resource; title from title page (viewed November 12, 2019) |
Extent: | 1 online resource (1 volume) : illustrations |
Internal format
MARC
LEADER | 00000cam a22000002c 4500 | ||
---|---|---|---|
001 | ZDB-30-ORH-048518344 | ||
003 | DE-627-1 | ||
005 | 20240228120915.0 | ||
007 | cr uuu---uuuuu | ||
008 | 191206s2019 xx |||||o 00| ||eng c | ||
035 | |a (DE-627-1)048518344 | ||
035 | |a (DE-599)KEP048518344 | ||
035 | |a (ORHE)9781098115487 | ||
035 | |a (DE-627-1)048518344 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
100 | 1 | |a Hall, Patrick |e VerfasserIn |4 aut | |
245 | 1 | 0 | |a An introduction to machine learning interpretability |b an applied perspective on fairness, accountability, transparency, and explainable AI |c Patrick Hall and Navdeep Gill |
246 | 3 | 3 | |a Applied perspective on fairness, accountability, transparency, and explainable AI |
250 | |a Second edition. | ||
264 | 1 | |a Sebastopol, CA |b O'Reilly Media |c [2019] | |
264 | 4 | |c ©2019 | |
300 | |a 1 online resource (1 volume) |b illustrations | ||
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Includes bibliographical references. - Online resource; title from title page (viewed November 12, 2019) | ||
520 | |a Innovation and competition are driving analysts and data scientists toward increasingly complex predictive modeling and machine learning algorithms. This complexity makes these models accurate, but it can also make their predictions difficult to understand. When accuracy outpaces interpretability, human trust suffers, affecting business adoption, model validation efforts, and regulatory oversight. In the updated edition of this ebook, Patrick Hall and Navdeep Gill from H2O.ai introduce the idea of machine learning interpretability and examine a set of machine learning techniques, algorithms, and models to help data scientists improve the accuracy of their predictive models while maintaining a high degree of interpretability. While some industries, such as banking, insurance, and healthcare, require model transparency, machine learning practitioners in almost any vertical will likely benefit from incorporating the discussed interpretable models and the debugging, explanation, and fairness approaches into their workflows. This second edition discusses new, exact model explanation techniques and de-emphasizes the trade-off between accuracy and interpretability. It also includes up-to-date information on cutting-edge interpretability techniques and new figures to illustrate the concepts of trust and understanding in machine learning models. Learn how machine learning and predictive modeling are applied in practice. Understand social and commercial motivations for machine learning interpretability, fairness, accountability, and transparency. Get a definition of interpretability and learn about the groups leading interpretability research. Examine a taxonomy for classifying and describing interpretable machine learning approaches. Gain familiarity with new and more traditional interpretable modeling approaches. See numerous techniques for understanding and explaining models and predictions. Read about methods to debug prediction errors, sociological bias, and security vulnerabilities in predictive models. Get a feel for the techniques in action with code examples. | ||
650 | 0 | |a Machine learning | |
650 | 0 | |a Artificial intelligence | |
650 | 2 | |a Artificial Intelligence | |
650 | 2 | |a Machine Learning | |
650 | 4 | |a Apprentissage automatique | |
650 | 4 | |a Intelligence artificielle | |
650 | 4 | |a artificial intelligence | |
650 | 4 | |a Artificial intelligence | |
650 | 4 | |a Machine learning | |
700 | 1 | |a Gill, Navdeep |e VerfasserIn |4 aut | |
966 | 4 | 0 | |l DE-91 |p ZDB-30-ORH |q TUM_PDA_ORH |u https://learning.oreilly.com/library/view/-/9781098115487/?ar |m X:ORHE |x Aggregator |z lizenzpflichtig |3 Volltext |
912 | |a ZDB-30-ORH | ||
912 | |a ZDB-30-ORH | ||
951 | |a BO | ||
912 | |a ZDB-30-ORH | ||
049 | |a DE-91 |
Record in the search index
DE-BY-TUM_katkey | ZDB-30-ORH-048518344 |
---|---|
_version_ | 1833357060556718080 |
adam_text | |
any_adam_object | |
author | Hall, Patrick Gill, Navdeep |
author_facet | Hall, Patrick Gill, Navdeep |
author_role | aut aut |
author_sort | Hall, Patrick |
author_variant | p h ph n g ng |
building | Verbundindex |
bvnumber | localTUM |
collection | ZDB-30-ORH |
ctrlnum | (DE-627-1)048518344 (DE-599)KEP048518344 (ORHE)9781098115487 |
edition | Second edition. |
format | Electronic eBook |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>03824cam a22004692c 4500</leader><controlfield tag="001">ZDB-30-ORH-048518344</controlfield><controlfield tag="003">DE-627-1</controlfield><controlfield tag="005">20240228120915.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">191206s2019 xx |||||o 00| ||eng c</controlfield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)048518344</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KEP048518344</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ORHE)9781098115487</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)048518344</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hall, Patrick</subfield><subfield code="e">VerfasserIn</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">An introduction to machine learning interpretability</subfield><subfield code="b">an applied perspective on fairness, accountability, transparency, and explainable AI</subfield><subfield code="c">Patrick Hall and Navdeep Gill</subfield></datafield><datafield tag="246" ind1="3" ind2="3"><subfield code="a">Applied perspective on fairness, accountability, transparency, and explainable AI</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">Second edition.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Sebastopol, CA</subfield><subfield code="b">O'Reilly Media</subfield><subfield code="c">[2019]</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©2019</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (1 volume)</subfield><subfield code="b">illustrations</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Includes bibliographical references. - Online resource; title from title page (viewed November 12, 2019)</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Innovation and competition are driving analysts and data scientists toward increasingly complex predictive modeling and machine learning algorithms. This complexity makes these models accurate, but can also make their predictions difficult to understand. When accuracy outpaces interpretability, human trust suffers, affecting business adoption, model validation efforts, and regulatory oversight. 
In the updated edition of this ebook, Patrick Hall and Navdeep Gill from H2O.ai introduce the idea of machine learning interpretability and examine a set of machine learning techniques, algorithms, and models to help data scientists improve the accuracy of their predictive models while maintaining a high degree of interpretability. While some industries require model transparency, such as banking, insurance, and healthcare, machine learning practitioners in almost any vertical will likely benefit from incorporating the discussed interpretable models, and debugging, explanation, and fairness approaches into their workflow. This second edition discusses new, exact model explanation techniques, and de-emphasizes the trade-off between accuracy and interpretability. This edition also includes up-to-date information on cutting-edge interpretability techniques and new figures to illustrate the concepts of trust and understanding in machine learning models. Learn how machine learning and predictive modeling are applied in practice Understand social and commercial motivations for machine learning interpretability, fairness, accountability, and transparency Get a definition of interpretability and learn about the groups leading interpretability research Examine a taxonomy for classifying and describing interpretable machine learning approaches Gain familiarity with new and more traditional interpretable modeling approaches See numerous techniques for understanding and explaining models and predictions Read about methods to debug prediction errors, sociological bias, and security vulnerabilities in predictive models Get a feel for the techniques in action with code examples.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Machine learning</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="2"><subfield code="a">Artificial Intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="2"><subfield code="a">Machine Learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Apprentissage automatique</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Intelligence artificielle</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine learning</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gill, Navdeep</subfield><subfield code="e">VerfasserIn</subfield><subfield code="4">aut</subfield></datafield><datafield tag="966" ind1="4" ind2="0"><subfield code="l">DE-91</subfield><subfield code="p">ZDB-30-ORH</subfield><subfield code="q">TUM_PDA_ORH</subfield><subfield code="u">https://learning.oreilly.com/library/view/-/9781098115487/?ar</subfield><subfield code="m">X:ORHE</subfield><subfield code="x">Aggregator</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">BO</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">ZDB-30-ORH</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-91</subfield></datafield></record></collection> |
id | ZDB-30-ORH-048518344 |
illustrated | Illustrated |
indexdate | 2025-05-28T09:45:41Z |
institution | BVB |
language | English |
open_access_boolean | |
owner | DE-91 DE-BY-TUM |
owner_facet | DE-91 DE-BY-TUM |
physical | 1 online resource (1 volume) illustrations |
psigel | ZDB-30-ORH TUM_PDA_ORH ZDB-30-ORH |
publishDate | 2019 |
publishDateSearch | 2019 |
publishDateSort | 2019 |
publisher | O'Reilly Media |
record_format | marc |
spelling | Hall, Patrick VerfasserIn aut An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI Patrick Hall and Navdeep Gill Applied perspective on fairness, accountability, transparency, and explainable AI Second edition. Sebastopol, CA O'Reilly Media [2019] ©2019 1 online resource (1 volume) illustrations Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier Includes bibliographical references. - Online resource; title from title page (viewed November 12, 2019) Innovation and competition are driving analysts and data scientists toward increasingly complex predictive modeling and machine learning algorithms. This complexity makes these models accurate, but can also make their predictions difficult to understand. When accuracy outpaces interpretability, human trust suffers, affecting business adoption, model validation efforts, and regulatory oversight. In the updated edition of this ebook, Patrick Hall and Navdeep Gill from H2O.ai introduce the idea of machine learning interpretability and examine a set of machine learning techniques, algorithms, and models to help data scientists improve the accuracy of their predictive models while maintaining a high degree of interpretability. While some industries require model transparency, such as banking, insurance, and healthcare, machine learning practitioners in almost any vertical will likely benefit from incorporating the discussed interpretable models, and debugging, explanation, and fairness approaches into their workflow. This second edition discusses new, exact model explanation techniques, and de-emphasizes the trade-off between accuracy and interpretability. This edition also includes up-to-date information on cutting-edge interpretability techniques and new figures to illustrate the concepts of trust and understanding in machine learning models. Learn how machine learning and predictive modeling are applied in practice Understand social and commercial motivations for machine learning interpretability, fairness, accountability, and transparency Get a definition of interpretability and learn about the groups leading interpretability research Examine a taxonomy for classifying and describing interpretable machine learning approaches Gain familiarity with new and more traditional interpretable modeling approaches See numerous techniques for understanding and explaining models and predictions Read about methods to debug prediction errors, sociological bias, and security vulnerabilities in predictive models Get a feel for the techniques in action with code examples. Machine learning Artificial intelligence Artificial Intelligence Machine Learning Apprentissage automatique Intelligence artificielle artificial intelligence Gill, Navdeep VerfasserIn aut |
spellingShingle | Hall, Patrick Gill, Navdeep An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI Machine learning Artificial intelligence Artificial Intelligence Machine Learning Apprentissage automatique Intelligence artificielle artificial intelligence |
title | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI |
title_alt | Applied perspective on fairness, accountability, transparency, and explainable AI |
title_auth | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI |
title_exact_search | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI |
title_full | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI Patrick Hall and Navdeep Gill |
title_fullStr | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI Patrick Hall and Navdeep Gill |
title_full_unstemmed | An introduction to machine learning interpretability an applied perspective on fairness, accountability, transparency, and explainable AI Patrick Hall and Navdeep Gill |
title_short | An introduction to machine learning interpretability |
title_sort | an introduction to machine learning interpretability an applied perspective on fairness accountability transparency and explainable ai |
title_sub | an applied perspective on fairness, accountability, transparency, and explainable AI |
topic | Machine learning Artificial intelligence Artificial Intelligence Machine Learning Apprentissage automatique Intelligence artificielle artificial intelligence |
topic_facet | Machine learning Artificial intelligence Artificial Intelligence Machine Learning Apprentissage automatique Intelligence artificielle artificial intelligence |
work_keys_str_mv | AT hallpatrick anintroductiontomachinelearninginterpretabilityanappliedperspectiveonfairnessaccountabilitytransparencyandexplainableai AT gillnavdeep anintroductiontomachinelearninginterpretabilityanappliedperspectiveonfairnessaccountabilitytransparencyandexplainableai AT hallpatrick appliedperspectiveonfairnessaccountabilitytransparencyandexplainableai AT gillnavdeep appliedperspectiveonfairnessaccountabilitytransparencyandexplainableai |