AI Fairness
Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that's not the case. In this report, business,...
Saved in:
Contributors: Mahoney, Trisha; Varshney, Kush; Hind, Michael
Corporate body: Safari, an O'Reilly Media Company
Format: Electronic eBook
Language: English
Published: [Place of publication not identifiable] : O'Reilly Media, Inc., 2020
Edition: 1st edition.
Links: https://learning.oreilly.com/library/view/-/9781492077664/?ar
Summary: Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that's not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline. Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You'll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias. In this report, you'll explore: legal, ethical, and trust factors you need to consider when defining fairness for your use case; different ways to measure and remove unfair bias, using the metrics most relevant to the particular use case; and how to define acceptable thresholds for model accuracy and unfair model bias.
Description: Online resource; title from title page (viewed April 25, 2020)
Extent: 1 online resource (34 pages)
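The summary above mentions measuring unfair bias with group-fairness metrics, as implemented in toolkits such as AI Fairness 360. As an illustration only (this sketch is not taken from the report, and the function name and toy data are invented for the example), one widely used metric is disparate impact: the rate of favorable outcomes for the unprivileged group divided by the rate for the privileged group. A value near 1.0 suggests parity; a common rule of thumb flags values below 0.8.

```python
def disparate_impact(outcomes, groups, favorable=1, privileged=1):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    outcomes[i] is the decision for person i (1 = favorable);
    groups[i] marks that person's group (privileged vs. not).
    """
    def favorable_rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in members if o == favorable) / len(members)

    unprivileged = next(g for g in groups if g != privileged)
    return favorable_rate(unprivileged) / favorable_rate(privileged)


# Toy hiring data: privileged group hired 3 of 4, unprivileged 1 of 4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact(outcomes, groups))  # 0.25 / 0.75 ≈ 0.333
```

A ratio of roughly 0.33 would fall well below the 0.8 threshold, signaling a potential fairness problem worth investigating with the kinds of mitigation techniques the report surveys.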
Internal format
MARC
LEADER  00000cam a22000002 4500
001     ZDB-30-ORH-053527216
003     DE-627-1
005     20240228121030.0
007     cr uuu---uuuuu
008     200625s2020 xx |||||o 00| ||eng c
035     |a (DE-627-1)053527216
035     |a (DE-599)KEP053527216
035     |a (ORHE)9781492077664
035     |a (DE-627-1)053527216
040     |a DE-627 |b ger |c DE-627 |e rda
041     |a eng
100 1   |a Mahoney, Trisha |e VerfasserIn |4 aut
245 1 0 |a AI Fairness |c Mahoney, Trisha
250     |a 1st edition.
264   1 |a [Erscheinungsort nicht ermittelbar] |b O'Reilly Media, Inc. |c 2020
300     |a 1 Online-Ressource (34 Seiten)
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
500     |a Online resource; Title from title page (viewed April 25, 2020)
520     |a Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that's not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline. Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You'll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias. In this report, you'll explore: Legal, ethical, and trust factors you need to consider when defining fairness for your use case Different ways to measure and remove unfair bias, using the most relevant metrics for the particular use case How to define acceptable thresholds for model accuracy and unfair model bias.
700 1   |a Varshney, Kush |e VerfasserIn |4 aut
700 1   |a Hind, Michael |e VerfasserIn |4 aut
710 2   |a Safari, an O'Reilly Media Company. |e MitwirkendeR |4 ctb
966 4 0 |l DE-91 |p ZDB-30-ORH |q TUM_PDA_ORH |u https://learning.oreilly.com/library/view/-/9781492077664/?ar |m X:ORHE |x Aggregator |z lizenzpflichtig |3 Volltext
912     |a ZDB-30-ORH
912     |a ZDB-30-ORH
951     |a BO
912     |a ZDB-30-ORH
049     |a DE-91
Record in the search index
DE-BY-TUM_katkey | ZDB-30-ORH-053527216 |
_version_ | 1821494840549965825 |
adam_text | |
any_adam_object | |
author | Mahoney, Trisha Varshney, Kush Hind, Michael |
author_corporate | Safari, an O'Reilly Media Company |
author_corporate_role | ctb |
author_facet | Mahoney, Trisha Varshney, Kush Hind, Michael Safari, an O'Reilly Media Company |
author_role | aut aut aut |
author_sort | Mahoney, Trisha |
author_variant | t m tm k v kv m h mh |
building | Verbundindex |
bvnumber | localTUM |
collection | ZDB-30-ORH |
ctrlnum | (DE-627-1)053527216 (DE-599)KEP053527216 (ORHE)9781492077664 |
edition | 1st edition. |
format | Electronic eBook |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02413cam a22003612 4500</leader><controlfield tag="001">ZDB-30-ORH-053527216</controlfield><controlfield tag="003">DE-627-1</controlfield><controlfield tag="005">20240228121030.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">200625s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)053527216</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KEP053527216</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ORHE)9781492077664</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)053527216</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Mahoney, Trisha</subfield><subfield code="e">VerfasserIn</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">AI Fairness</subfield><subfield code="c">Mahoney, Trisha</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">1st edition.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">[Erscheinungsort nicht ermittelbar]</subfield><subfield code="b">O'Reilly Media, Inc.</subfield><subfield code="c">2020</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 Online-Ressource (34 Seiten)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" 
ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Online resource; Title from title page (viewed April 25, 2020)</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that's not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline. Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You'll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias. 
In this report, you'll explore: Legal, ethical, and trust factors you need to consider when defining fairness for your use case Different ways to measure and remove unfair bias, using the most relevant metrics for the particular use case How to define acceptable thresholds for model accuracy and unfair model bias.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Varshney, Kush</subfield><subfield code="e">VerfasserIn</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hind, Michael</subfield><subfield code="e">VerfasserIn</subfield><subfield code="4">aut</subfield></datafield><datafield tag="710" ind1="2" ind2=" "><subfield code="a">Safari, an O'Reilly Media Company.</subfield><subfield code="e">MitwirkendeR</subfield><subfield code="4">ctb</subfield></datafield><datafield tag="966" ind1="4" ind2="0"><subfield code="l">DE-91</subfield><subfield code="p">ZDB-30-ORH</subfield><subfield code="q">TUM_PDA_ORH</subfield><subfield code="u">https://learning.oreilly.com/library/view/-/9781492077664/?ar</subfield><subfield code="m">X:ORHE</subfield><subfield code="x">Aggregator</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">BO</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-91</subfield></datafield></record></collection> |
id | ZDB-30-ORH-053527216 |
illustrated | Not Illustrated |
indexdate | 2025-01-17T11:20:46Z |
institution | BVB |
language | English |
open_access_boolean | |
owner | DE-91 DE-BY-TUM |
owner_facet | DE-91 DE-BY-TUM |
physical | 1 Online-Ressource (34 Seiten) |
psigel | ZDB-30-ORH TUM_PDA_ORH ZDB-30-ORH |
publishDate | 2020 |
publishDateSearch | 2020 |
publishDateSort | 2020 |
publisher | O'Reilly Media, Inc. |
record_format | marc |
spelling | Mahoney, Trisha VerfasserIn aut AI Fairness Mahoney, Trisha 1st edition. [Erscheinungsort nicht ermittelbar] O'Reilly Media, Inc. 2020 1 Online-Ressource (34 Seiten) Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier Online resource; Title from title page (viewed April 25, 2020) Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that's not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline. Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You'll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias. In this report, you'll explore: Legal, ethical, and trust factors you need to consider when defining fairness for your use case Different ways to measure and remove unfair bias, using the most relevant metrics for the particular use case How to define acceptable thresholds for model accuracy and unfair model bias. Varshney, Kush VerfasserIn aut Hind, Michael VerfasserIn aut Safari, an O'Reilly Media Company. MitwirkendeR ctb |
spellingShingle | Mahoney, Trisha Varshney, Kush Hind, Michael AI Fairness |
title | AI Fairness |
title_auth | AI Fairness |
title_exact_search | AI Fairness |
title_full | AI Fairness Mahoney, Trisha |
title_fullStr | AI Fairness Mahoney, Trisha |
title_full_unstemmed | AI Fairness Mahoney, Trisha |
title_short | AI Fairness |
title_sort | ai fairness |
work_keys_str_mv | AT mahoneytrisha aifairness AT varshneykush aifairness AT hindmichael aifairness AT safarianoreillymediacompany aifairness |