Artificial intelligence and hardware accelerators / Ashutosh Mishra, Jaekwang Cha, Hyunbin Park, Shiho Kim, editors.

This book explores new methods, architectures, tools, and algorithms for Artificial Intelligence Hardware Accelerators. The authors have structured the material to simplify readers' journey toward understanding the aspects of designing hardware accelerators, complex AI algorithms, and their computati...


Bibliographic Details
Other Authors: Mishra, Ashutosh; Cha, Jaekwang; Park, Hyunbin; Kim, Shiho
Format: eBook
Language: English
Published: Cham : Springer, 2023.
Subjects:
Online Access: Click for online access

MARC

LEADER 00000cam a22000007i 4500
001 on1373344103
003 OCoLC
005 20240909213021.0
006 m o d
007 cr cnu---unuuu
008 230325s2023 sz o 000 0 eng d
040 |a EBLCP  |b eng  |e rda  |c EBLCP  |d GW5XE  |d YDX  |d EBLCP  |d UKAHL  |d OCLCF  |d YDX  |d N$T  |d OCLCO 
019 |a 1373337554 
020 |a 3031221702  |q electronic book 
020 |a 9783031221705  |q (electronic bk.) 
020 |z 9783031221699 
020 |z 3031221699 
024 7 |a 10.1007/978-3-031-22170-5  |2 doi 
035 |a (OCoLC)1373344103  |z (OCoLC)1373337554 
050 4 |a Q335  |b .A78 2023 
072 7 |a TJFC  |2 bicssc 
072 7 |a TEC008010  |2 bisacsh 
072 7 |a TJFC  |2 thema 
049 |a HCDD 
245 0 0 |a Artificial intelligence and hardware accelerators /  |c Ashutosh Mishra, Jaekwang Cha, Hyunbin Park, Shiho Kim, editors. 
264 1 |a Cham :  |b Springer,  |c 2023. 
300 |a 1 online resource (358 p.) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
505 0 |a Intro -- Preface -- Contents -- Artificial Intelligence Accelerators -- 1 Introduction -- 1.1 Introduction to Artificial Intelligence (AI) -- 1.1.1 AI Applications -- 1.1.2 AI Algorithms -- 1.2 Hardware Accelerators -- 2 Requirements of AI Accelerators -- 2.1 Hardware Accelerator Designs -- 2.2 Domain-Specific Accelerators -- 2.3 Performance Metrics in Accelerators -- 2.3.1 Instructions Per Second (IPS) -- 2.3.2 Floating Point Operations Per Second (FLOPS, flops, or flop/s) -- 2.3.3 Trillion/Tera of Operations Per Second (TOPS) -- 2.3.4 Throughput Per Cost (Throughput/$) 
505 8 |a 2.4 Key Metrics and Design Objectives -- 3 Classifications of AI Accelerators -- 4 Organization of this Book -- 5 Popular Design Approaches in AI Acceleration -- 6 Bottleneck of AI Accelerator and In-Memory Processing -- 7 A Few State-of-the-Art AI Accelerators -- 8 Conclusions -- References -- AI Accelerators for Standalone Computer -- 1 Introduction to Standalone Compute -- 2 Hardware Accelerators for Standalone Compute -- 2.1 Inference and Training of DNNs -- 2.2 Accelerating DNN Computation -- 2.3 Considerations in Hardware Design -- 2.4 Deep Learning Frameworks 
505 8 |a 3 Hardware Accelerators in GPU -- 3.1 History and Overview -- 3.2 GPU Architecture -- 3.3 GPU Acceleration Techniques -- 3.4 CUDA-Related Libraries -- 4 Hardware Accelerators in NPU -- 4.1 History and Overview: Hardware -- 4.2 Standalone Accelerating System Characteristics -- 4.3 Architectures of Hardware Accelerator in NPU -- 4.4 SOTA Architectures -- 5 Summary -- References -- AI Accelerators for Cloud and Server Applications -- 1 Introduction -- 2 Background -- 3 Hardware Accelerators in Clouds -- 4 Hardware Accelerators in Data Centers -- 4.1 Design of HW Accelerator for Data Centers 
505 8 |a 4.1.1 Batch Processing Applications -- 4.1.2 Streaming Processing Applications -- 4.2 Design Consideration for HW Accelerators in the Data Center -- 4.2.1 HW Accelerator Architecture -- 4.2.2 Programmable HW Accelerators -- 4.2.3 AI Design Ecosystem -- 4.2.4 Hardware Accelerator IPs -- 4.2.5 Energy and Power Efficiency -- 5 Heterogeneous Parallel Architectures in Data Centers and Cloud -- 5.1 Heterogeneous Computing Architectures in Data Centers and Cloud -- 6 Hardware Accelerators for Distributed In-Network and Edge Computing -- 6.1 HW Accelerator Model for In-Network Computing 
505 8 |a 6.2 HW Accelerator Model for Edge Computing -- 7 Infrastructure for Deploying FPGAs -- 8 Infrastructure for Deploying ASIC -- 8.1 Tensor Processing Unit (TPU) Accelerators -- 8.2 Cloud TPU -- 8.3 Edge TPU -- 9 SOTA Architectures for Cloud and Edge -- 9.1 Advances in Cloud and Edge Accelerator -- 9.1.1 Cloud TPU System Architecture -- 9.1.2 Cloud TPU VM Architecture -- 9.2 Staggering Cost of Training SOTA AI Models -- 10 Security and Privacy Issues -- 11 Summary -- References -- Overviewing AI-Dedicated Hardware for On-Device AI in Smartphones -- 1 Introduction 
505 8 |a 2 Overview of HW Development to Achieve On-Device AI in a Smartphone 
520 |a This book explores new methods, architectures, tools, and algorithms for Artificial Intelligence Hardware Accelerators. The authors have structured the material to simplify readers' journey toward understanding the aspects of designing hardware accelerators, complex AI algorithms, and their computational requirements, along with the multifaceted applications. Coverage focuses broadly on the hardware aspects of training and inference, as well as AI accelerators for mobile devices and autonomous vehicles (AVs). 
588 0 |a Online resource; title from PDF title page (SpringerLink, viewed March 27, 2023). 
650 0 |a Artificial intelligence. 
650 0 |a Computers. 
650 7 |a artificial intelligence.  |2 aat 
650 7 |a computers.  |2 aat 
650 7 |a Artificial intelligence  |2 fast 
650 7 |a Computers  |2 fast 
700 1 |a Mishra, Ashutosh. 
700 1 |a Cha, Jaekwang. 
700 1 |a Park, Hyunbin. 
700 1 |a Kim, Shiho. 
776 0 8 |i Print version:  |a Mishra, Ashutosh  |t Artificial Intelligence and Hardware Accelerators  |d Cham : Springer International Publishing AG, c2023  |z 9783031221699 
856 4 0 |u https://holycross.idm.oclc.org/login?auth=cas&url=https://link.springer.com/10.1007/978-3-031-22170-5  |y Click for online access 
903 |a SPRING-ALL2023 
994 |a 92  |b HCD