Computer Organization and Design, 5th Edition

The Hardware/Software Interface

 
By David Patterson and John Hennessy

Publisher: Morgan Kaufmann
Print ISBN: 9780124077263
eBook ISBN: 9780124078864
Pages: 800
Dimensions: 235 x 191 mm
eBook formats (DRM-free): EPub, Mobi, PDF; also available in VitalSource (VST) format

The classic introduction to computer organization now updated for mobile computing and the cloud!

Key Features

  • Winner of a 2014 Texty Award from the Text and Academic Authors Association
  • Includes new examples, exercises, and material highlighting the emergence of mobile computing and the cloud
  • Covers parallelism in depth with examples and content highlighting parallel hardware and software topics
  • Features the Intel Core i7, ARM Cortex-A8 and NVIDIA Fermi GPU as real-world examples throughout the book
  • Adds a new concrete example, "Going Faster," to demonstrate how understanding hardware can inspire software optimizations that improve performance by 200 times
  • Discusses and highlights the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy
  • Includes a full set of updated and improved exercises

Description

The fifth edition of Computer Organization and Design, winner of a 2014 Textbook Excellence Award (Texty) from the Text and Academic Authors Association, moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures.

Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture.
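
The "Going Faster" example runs through the book as a series of optimizations of matrix multiply (see Sections 3.8, 4.12, 5.14, and 6.12 in the table of contents below). As a rough illustration of the style of optimization involved, and not the book's own code, the C sketch below contrasts a naive triple loop with a cache-blocked version of the kind Section 5.14 discusses; the block size of 32 is an assumed tuning parameter, not a value taken from the text.

/* Illustrative sketch only: cache blocking for C = C + A*B on n x n
 * row-major matrices. The BLOCK value is an assumption; in practice it
 * is tuned to the cache sizes of the target processor. */
#include <stddef.h>

#define BLOCK 32

/* Baseline: straightforward triple loop. */
void dgemm_naive(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            double cij = C[i * n + j];
            for (size_t k = 0; k < n; ++k)
                cij += A[i * n + k] * B[k * n + j];
            C[i * n + j] = cij;
        }
}

/* Cache-blocked version: work on BLOCK x BLOCK submatrices so the
 * working set stays in cache, reducing misses for large n. */
void dgemm_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t si = 0; si < n; si += BLOCK)
        for (size_t sj = 0; sj < n; sj += BLOCK)
            for (size_t sk = 0; sk < n; sk += BLOCK)
                for (size_t i = si; i < si + BLOCK && i < n; ++i)
                    for (size_t j = sj; j < sj + BLOCK && j < n; ++j) {
                        double cij = C[i * n + j];
                        for (size_t k = sk; k < sk + BLOCK && k < n; ++k)
                            cij += A[i * n + k] * B[k * n + j];
                        C[i * n + j] = cij;
                    }
}

The book's own "Going Faster" sequence layers further techniques (subword parallelism, instruction-level parallelism, and multiple processors) on top of this kind of restructuring to reach its reported 200x speedup.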

As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O.

Instructors looking for fourth edition teaching materials should e-mail textbook@elsevier.com.

Readership

Professional digital system designers, programmers, application developers, and system software developers.

Undergraduate students in Computer Science, Computer Engineering, and Electrical Engineering taking courses in computer organization and computer design, ranging from sophomore-level required courses to senior electives.

David Patterson

David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM and CRA.

Affiliations and Expertise

Pardee Professor of Computer Science, University of California, Berkeley, USA


John Hennessy

John L. Hennessy is the tenth president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Sciences, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.

Affiliations and Expertise

President, Stanford University, Palo Alto, CA, USA


Table of Contents

1 Computer Abstractions and Technology
1.1 Introduction
1.2 Eight Great Ideas in Computer Architecture
1.3 Below Your Program
1.4 Under the Covers
1.5 Technologies for Building Processors and Memory
1.6 Performance
1.7 The Power Wall
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors
1.9 Real Stuff: Benchmarking the Intel Core i7
1.10 Fallacies and Pitfalls
1.11 Concluding Remarks
1.12 Historical Perspective and Further Reading
1.13 Exercises

2 Instructions: Language of the Computer
2.1 Introduction
2.2 Operations of the Computer Hardware
2.3 Operands of the Computer Hardware
2.4 Signed and Unsigned Numbers
2.5 Representing Instructions in the Computer
2.6 Logical Operations
2.7 Instructions for Making Decisions
2.8 Supporting Procedures in Computer Hardware
2.9 Communicating with People
2.10 MIPS Addressing for 32-Bit Immediates and Addresses
2.11 Parallelism and Instructions: Synchronization
2.12 Translating and Starting a Program
2.13 A C Sort Example to Put It All Together
2.14 Arrays versus Pointers
2.15 Advanced Material: Compiling C and Interpreting Java
2.16 Real Stuff: ARM v7 (32-bit) Instructions
2.17 Real Stuff: x86 Instructions
2.18 Real Stuff: ARM v8 (64-bit) Instructions
2.19 Fallacies and Pitfalls
2.20 Concluding Remarks
2.21 Historical Perspective and Further Reading
2.22 Exercises

3 Arithmetic for Computers 
3.1 Introduction
3.2 Addition and Subtraction
3.3 Multiplication
3.4 Division
3.5 Floating Point
3.6 Parallelism and Computer Arithmetic: Subword Parallelism
3.7 Real Stuff: x86 Streaming SIMD Extensions and Advanced Vector Extensions
3.8 Going Faster: Subword Parallelism and Matrix Multiply
3.9 Fallacies and Pitfalls
3.10 Concluding Remarks
3.11 Historical Perspective and Further Reading
3.12 Exercises

4 The Processor
4.1 Introduction
4.2 Logic Design Conventions
4.3 Building a Datapath
4.4 A Simple Implementation Scheme
4.5 An Overview of Pipelining
4.6 Pipelined Datapath and Control
4.7 Data Hazards: Forwarding versus Stalling
4.8 Control Hazards
4.9 Exceptions
4.10 Parallelism via Instructions
4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply
4.13 Advanced Topic: an Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations
4.14 Fallacies and Pitfalls
4.15 Concluding Remarks
4.16 Historical Perspective and Further Reading
4.17 Exercises

5 Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction
5.2 Memory Technologies
5.3 The Basics of Caches
5.4 Measuring and Improving Cache Performance
5.5 Dependable Memory
5.6 Virtual Machines
5.7 Virtual Memory
5.8 A Common Framework for Memory Hierarchy
5.9 Using a Finite-State Machine to Control a Simple Cache
5.10 Parallelism and Memory Hierarchies: Cache Coherence
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks
5.12 Advanced Material: Implementing Cache Controllers
5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies
5.14 Going Faster: Cache Blocking and Matrix Multiply
5.15 Fallacies and Pitfalls
5.16 Concluding Remarks
5.17 Historical Perspective and Further Reading
5.18 Exercises

6 Parallel Processors from Client to Cloud
6.1 Introduction
6.2 The Difficulty of Creating Parallel Processing Programs
6.3 SISD, MIMD, SIMD, SPMD, and Vector
6.4 Hardware Multithreading
6.5 Multicore and Other Shared Memory Multiprocessors
6.6 Introduction to Graphics Processing Units
6.7 Clusters and Other Message-Passing Multiprocessors
6.8 Introduction to Multiprocessor Network Topologies
6.9 Communicating to the Outside World: Cluster Networking
6.10 Multiprocessor Benchmarks and Performance Models
6.11 Real Stuff: Benchmarking Intel Core i7 versus NVIDIA Fermi GPU
6.12 Going Faster: Multiple Processors and Matrix Multiply
6.13 Fallacies and Pitfalls
6.14 Concluding Remarks
6.15 Historical Perspective and Further Reading
6.16 Exercises

APPENDICES
A Assemblers, Linkers, and the SPIM Simulator
A.1 Introduction
A.2 Assemblers
A.3 Linkers
A.4 Loading
A.5 Memory Usage
A.6 Procedure Call Convention
A.7 Exceptions and Interrupts
A.8 Input and Output
A.9 SPIM
A.10 MIPS R2000 Assembly Language
A.11 Concluding Remarks
A.12 Exercises

B The Basics of Logic Design
B.1 Introduction
B.2 Gates, Truth Tables, and Logic Equations
B.3 Combinational Logic
B.4 Using a Hardware Description Language
B.5 Constructing a Basic Arithmetic Logic Unit
B.6 Faster Addition: Carry Lookahead
B.7 Clocks
B.8 Memory Elements: Flip-Flops, Latches, and Registers
B.9 Memory Elements: SRAMs and DRAMs
B.10 Finite-State Machines
B.11 Timing Methodologies
B.12 Field Programmable Devices
B.13 Concluding Remarks
B.14 Exercises

ONLINE CONTENT
C Graphics and Computing GPUs
C.1 Introduction
C.2 GPU System Architectures
C.3 Programming GPUs
C.4 Multithreaded Multiprocessor Architecture
C.5 Parallel Memory System
C.6 Floating Point Arithmetic
C.7 Real Stuff: The NVIDIA GeForce 8800
C.8 Real Stuff: Mapping Applications to GPUs
C.9 Fallacies and Pitfalls
C.10 Concluding Remarks
C.11 Historical Perspective and Further Reading

D Mapping Control to Hardware
D.1 Introduction
D.2 Implementing Combinational Control Units
D.3 Implementing Finite-State Machine Control
D.4 Implementing the Next-State Function with a Sequencer
D.5 Translating a Microprogram to Hardware
D.6 Concluding Remarks
D.7 Exercises

E A Survey of RISC Architectures for Desktop, Server, and Embedded Computers
E.1 Introduction
E.2 Addressing Modes and Instruction Formats
E.3 Instructions: The MIPS Core Subset
E.4 Instructions: Multimedia Extensions of the Desktop/Server RISCs
E.5 Instructions: Digital Signal-Processing Extensions of the Embedded RISCs
E.6 Instructions: Common Extensions to MIPS Core
E.7 Instructions Unique to MIPS-64
E.8 Instructions Unique to Alpha
E.9 Instructions Unique to SPARC v.9
E.10 Instructions Unique to PowerPC
E.11 Instructions Unique to PA-RISC 2.0
E.12 Instructions Unique to ARM
E.13 Instructions Unique to Thumb
E.14 Instructions Unique to SuperH
E.15 Instructions Unique to M32R
E.16 Instructions Unique to MIPS-16
E.17 Concluding Remarks


 

Quotes and reviews

"...the fundamental computer organization book, both as an introduction for readers with no experience in computer architecture topics, and as an up-to-date reference for computer architects."--Computing Reviews, July 22 2014

 
 