Programming Elastic MapReduce:
Although you don't need a large computing infrastructure to process massive amounts of data with Apache Hadoop, it can still be difficult to get started. This practical guide shows you how to quickly launch data analysis projects in the cloud by using Amazon Elastic MapReduce (EMR), the hosted Hadoop framework in Amazon Web Services (AWS).
Saved in:

Author: | Schmidt, Kevin J.
---|---
Contributors: | Phillips, Chris, 1971-
Format: | Electronic e-book
Language: | English
Published: | Sebastopol, CA : O'Reilly Media, 2014
Subjects: | Apache Hadoop; Electronic data processing (distributed processing); Big data; Web services; Internet programming
Links: | https://learning.oreilly.com/library/view/-/9781449364038/?ar
Summary: | Although you don't need a large computing infrastructure to process massive amounts of data with Apache Hadoop, it can still be difficult to get started. This practical guide shows you how to quickly launch data analysis projects in the cloud by using Amazon Elastic MapReduce (EMR), the hosted Hadoop framework in Amazon Web Services (AWS). Authors Kevin Schmidt and Christopher Phillips demonstrate best practices for using EMR and various AWS and Apache technologies by walking you through the construction of a sample MapReduce log analysis application. Using code samples and example configurations, you'll learn how to assemble the building blocks necessary to solve your biggest data analysis problems: get an overview of the AWS and Apache software tools used in large-scale data analysis; go through the process of executing a Job Flow with a simple log analyzer; discover useful MapReduce patterns for filtering and analyzing data sets; use Apache Hive and Pig instead of Java to build a MapReduce Job Flow; learn the basics for using Amazon EMR to run machine learning algorithms; and develop a project cost model for using Amazon EMR and other AWS tools.
Description: | Online resource; title from title page (Safari, viewed January 30, 2014)
Extent: | 1 online resource (1 volume), illustrations
ISBN: | 1449363628; 9781449363628; 9781449364045; 1449364047
Internal format
MARC
LEADER 00000cam a22000002 4500
001    ZDB-30-ORH-047586869
003    DE-627-1
005    20240228115452.0
007    cr uuu---uuuuu
008    191023s2014 xx |||||o 00| ||eng c
020    |a 1449363628 |9 1-4493-6362-8
020    |a 9781449363628 |9 978-1-4493-6362-8
020    |a 9781449364045 |9 978-1-4493-6404-5
020    |a 1449364047 |9 1-4493-6404-7
035    |a (DE-627-1)047586869
035    |a (DE-599)KEP047586869
035    |a (ORHE)9781449364038
035    |a (DE-627-1)047586869
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 0_ |a 004
100 1_ |a Schmidt, Kevin J. |e VerfasserIn |4 aut
245 10 |a Programming Elastic MapReduce |c Kevin Schmidt and Christopher Phillips
264 _1 |a Sebastopol, CA |b O'Reilly Media |c 2014
300    |a 1 Online-Ressource (1 volume) |b illustrations
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Online resource; title from title page (Safari, viewed January 30, 2014)
520    |a Although you don't need a large computing infrastructure to process massive amounts of data with Apache Hadoop, it can still be difficult to get started. This practical guide shows you how to quickly launch data analysis projects in the cloud by using Amazon Elastic MapReduce (EMR), the hosted Hadoop framework in Amazon Web Services (AWS). Authors Kevin Schmidt and Christopher Phillips demonstrate best practices for using EMR and various AWS and Apache technologies by walking you through the construction of a sample MapReduce log analysis application. Using code samples and example configurations, you'll learn how to assemble the building blocks necessary to solve your biggest data analysis problems. Get an overview of the AWS and Apache software tools used in large-scale data analysis Go through the process of executing a Job Flow with a simple log analyzer Discover useful MapReduce patterns for filtering and analyzing data sets Use Apache Hive and Pig instead of Java to build a MapReduce Job Flow Learn the basics for using Amazon EMR to run machine learning algorithms Develop a project cost model for using Amazon EMR and other AWS tools.
630 20 |a Apache Hadoop
650 _0 |a Electronic data processing |x Distributed processing
650 _0 |a Big data
650 _0 |a Web services
650 _0 |a Internet programming
650 _4 |a Apache Hadoop
650 _4 |a Apache Hadoop
650 _4 |a Traitement réparti
650 _4 |a Données volumineuses
650 _4 |a Services Web
650 _4 |a Programmation Internet
650 _4 |a Internet programming
650 _4 |a Big data
650 _4 |a Electronic data processing ; Distributed processing
650 _4 |a Internet programming
650 _4 |a Web services
700 1_ |a Phillips, Chris |d 1971- |e MitwirkendeR |4 ctb
966 40 |l DE-91 |p ZDB-30-ORH |q TUM_PDA_ORH |u https://learning.oreilly.com/library/view/-/9781449364038/?ar |m X:ORHE |x Aggregator |z lizenzpflichtig |3 Volltext
912    |a ZDB-30-ORH
912    |a ZDB-30-ORH
951    |a BO
912    |a ZDB-30-ORH
049    |a DE-91