Prompt engineering for generative AI: future-proof inputs for reliable AI outputs at scale
Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation.
Saved in:
Contributors: Phoenix, James; Taylor, Mike
Format: Electronic eBook
Language: English
Published: Sebastopol, CA : O'Reilly Media, Inc., 2024
Edition: First edition.
Subjects: Natural language generation (Computer science); Artificial intelligence -- Computer programs; Natural language processing (Computer science)
Links: https://learning.oreilly.com/library/view/-/9781098153427/?ar
Summary: Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation. With this book, you'll gain a solid foundation in generative AI, including how to apply these models in practice. When first integrating LLMs and diffusion models into their workflows, most developers struggle to coax reliable enough results from them to use in automated systems. Authors James Phoenix and Mike Taylor show you how a set of principles called prompt engineering can enable you to work effectively with AI.
Extent: 1 online resource (396 pages), illustrations
Internal format

MARC

LEADER 00000cam a22000002 4500
001    ZDB-30-ORH-100860095
003    DE-627-1
005    20240603113704.0
007    cr uuu---uuuuu
008    240227s2024 xx |||||o 00| ||eng c
035    |a (DE-627-1)100860095
035    |a (DE-599)KEP100860095
035    |a (ORHE)9781098153427
035    |a (DE-627-1)100860095
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 0  |a 006.3/5 |2 23/eng/20240206
100 1  |a Phoenix, James |e VerfasserIn |4 aut
245 10 |a Prompt engineering for generative AI |b future-proof inputs for reliable AI outputs at scale |c James Phoenix and Mike Taylor
250    |a First edition.
264  1 |a Sebastopol, CA |b O'Reilly Media, Inc. |c 2024
300    |a 1 Online-Ressource (396 Seiten) |b illustrations
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation. With this book, you'll gain a solid foundation in generative AI, including how to apply these models in practice. When first integrating LLMs and diffusion models into their workflows, most developers struggle to coax reliable enough results from them to use in automated systems. Authors James Phoenix and Mike Taylor show you how a set of principles called prompt engineering can enable you to work effectively with AI.
650  0 |a Natural language generation (Computer science)
650  0 |a Artificial intelligence |x Computer programs
650  0 |a Natural language processing (Computer science)
650  4 |a Génération automatique de texte
650  4 |a Intelligence artificielle ; Logiciels
650  4 |a Traitement automatique des langues naturelles
700 1  |a Taylor, Mike |e VerfasserIn |4 aut
966 40 |l DE-91 |p ZDB-30-ORH |q TUM_PDA_ORH |u https://learning.oreilly.com/library/view/-/9781098153427/?ar |m X:ORHE |x Aggregator |z lizenzpflichtig |3 Volltext
912    |a ZDB-30-ORH
951    |a BO
912    |a ZDB-30-ORH
049    |a DE-91